| Column | Type | Range / Values |
|---|---|---|
| id | string | lengths 6 – 113 |
| author | string | lengths 2 – 36 |
| task_category | string | 42 classes |
| tags | list | lengths 1 – 4.05k |
| created_time | timestamp[ns, tz=UTC] | 2022-03-02 23:29:04 – 2025-04-10 08:38:38 |
| last_modified | string (date) | 2020-05-14 13:13:12 – 2025-04-19 04:15:39 |
| downloads | int64 | 0 – 118M |
| likes | int64 | 0 – 4.86k |
| README | string | lengths 30 – 1.01M |
| matched_bigbio_names | list | lengths 1 – 8, nullable |
| is_bionlp | string | 3 classes |
| model_cards | string | lengths 0 – 1M |
| metadata | string | lengths 2 – 698k |
| source | string | 2 classes |
| matched_task | list | lengths 1 – 10, nullable |
| __index_level_0__ | int64 | 0 – 46.9k |

The records below follow this column order, with cells delimited by `|`.
LaTarn/re-location-setfit-model
|
LaTarn
|
text-classification
|
[
"sentence-transformers",
"safetensors",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-11-03T06:15:13Z |
2023-11-03T06:15:38+00:00
| 46 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# LaTarn/re-location-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
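For illustration, a minimal sketch of that two-step flow with the SetFit trainer API; the base encoder, example texts, and labels below are placeholders rather than the actual data or settings used for this checkpoint:
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Placeholder few-shot data; the real training set for this checkpoint is not published here.
train_dataset = Dataset.from_dict({
    "text": ["the office is relocating to Berlin", "i loved the spiderman movie!"],
    "label": [1, 0],
})

# Placeholder base encoder; step 1 fine-tunes it with contrastive pairs,
# step 2 fits a classification head on its embeddings.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_dataset)
trainer.train()
```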
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("LaTarn/re-location-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# LaTarn/re-location-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("LaTarn/re-location-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,475 |
gchhablani/fnet-large-finetuned-wnli
|
gchhablani
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-09-23T05:39:44+00:00
| 15 | 0 |
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: fnet-large-finetuned-wnli
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- type: accuracy
value: 0.38028169014084506
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-wnli
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6953
- Accuracy: 0.3803
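For a quick check of the checkpoint, a minimal inference sketch; the label names come from the uploaded config, so the exact output strings are an assumption:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Quick-start sketch; WNLI is a sentence-pair (premise, hypothesis) task.
model_id = "gchhablani/fnet-large-finetuned-wnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "The trophy doesn't fit in the suitcase.",
    "The trophy is too big.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```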
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
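As a rough guide, these values map onto `TrainingArguments` as sketched below; the output directory and any argument not listed above are assumptions, and the Adam betas/epsilon in the list are the optimizer defaults:
```python
from transformers import TrainingArguments

# Sketch reproducing the listed hyperparameters; unlisted arguments keep their defaults.
training_args = TrainingArguments(
    output_dir="fnet-large-finetuned-wnli",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
)
```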
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7217 | 1.0 | 159 | 0.6864 | 0.5634 |
| 0.7056 | 2.0 | 318 | 0.6869 | 0.5634 |
| 0.706 | 3.0 | 477 | 0.6875 | 0.5634 |
| 0.7032 | 4.0 | 636 | 0.6931 | 0.5634 |
| 0.7025 | 5.0 | 795 | 0.6953 | 0.3803 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-wnli
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6953
- Accuracy: 0.3803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7217 | 1.0 | 159 | 0.6864 | 0.5634 |
| 0.7056 | 2.0 | 318 | 0.6869 | 0.5634 |
| 0.706 | 3.0 | 477 | 0.6875 | 0.5634 |
| 0.7032 | 4.0 | 636 | 0.6931 | 0.5634 |
| 0.7025 | 5.0 | 795 | 0.6953 | 0.3803 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "fnet-large-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE WNLI", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.38028169014084506, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,476 |
TerminatorPower/nerT
|
TerminatorPower
|
token-classification
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"ner",
"token-classification",
"turkish",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-21T10:27:44Z |
2024-08-26T21:14:00+00:00
| 13 | 0 |
---
language:
- tr
library_name: transformers
license: mit
metrics:
- f1
- accuracy
- recall
tags:
- ner
- token-classification
- turkish
---
# Model Card for Turkish Named Entity Recognition Model
<!-- Provide a quick summary of what the model is/does. -->
This model performs Named Entity Recognition (NER) for Turkish text, identifying and classifying entities such as person names, locations, and organizations. It achieves an F1 score of 0.9599 on the validation set.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is a fine-tuned BERT model for Turkish Named Entity Recognition (NER). It is based on the `dbmdz/bert-base-turkish-uncased` model and has been trained on a custom Turkish NER dataset.
- **Developed by:** Ezel Bayraktar ([email protected])
- **Model type:** Token Classification (Named Entity Recognition)
- **Language(s) (NLP):** Turkish
- **License:** MIT
- **Finetuned from model:** dbmdz/bert-base-turkish-uncased
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model can be used directly for Named Entity Recognition tasks in Turkish text. It identifies and labels entities such as person names (PER), locations (LOC), and organizations (ORG).
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
The model can be integrated into larger natural language processing pipelines for Turkish, such as information extraction systems, question answering, or text summarization.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
This model should not be used for languages other than Turkish or for tasks beyond Named Entity Recognition. It may not perform well on domain-specific text or newly emerging named entities not present in the training data.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model may inherit biases present in the training data or the pre-trained BERT model it was fine-tuned from. It may not perform consistently across different domains or types of Turkish text.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should evaluate the model's performance on their specific domain and use case. For critical applications, human review of the model's outputs is recommended.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline
nert = pipeline('ner', model='TerminatorPower/nerT', tokenizer='TerminatorPower/nerT')
answer = nert("Mustafa Kemal Atatürk, 19 Mayıs 1919'da Samsun'a çıktı.")
print(answer)
```
| null |
Non_BioNLP
|
# Model Card for Turkish Named Entity Recognition Model
<!-- Provide a quick summary of what the model is/does. -->
This model performs Named Entity Recognition (NER) for Turkish text, identifying and classifying entities such as person names, locations, and organizations. It achieves an F1 score of 0.9599 on the validation set.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is a fine-tuned BERT model for Turkish Named Entity Recognition (NER). It is based on the `dbmdz/bert-base-turkish-uncased` model and has been trained on a custom Turkish NER dataset.
- **Developed by:** Ezel Bayraktar ([email protected])
- **Model type:** Token Classification (Named Entity Recognition)
- **Language(s) (NLP):** Turkish
- **License:** MIT
- **Finetuned from model:** dbmdz/bert-base-turkish-uncased
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model can be used directly for Named Entity Recognition tasks in Turkish text. It identifies and labels entities such as person names (PER), locations (LOC), and organizations (ORG).
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
The model can be integrated into larger natural language processing pipelines for Turkish, such as information extraction systems, question answering, or text summarization.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
This model should not be used for languages other than Turkish or for tasks beyond Named Entity Recognition. It may not perform well on domain-specific text or newly emerging named entities not present in the training data.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model may inherit biases present in the training data or the pre-trained BERT model it was fine-tuned from. It may not perform consistently across different domains or types of Turkish text.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should evaluate the model's performance on their specific domain and use case. For critical applications, human review of the model's outputs is recommended.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline
nert = pipeline('ner', model='TerminatorPower/nerT', tokenizer='TerminatorPower/nerT')
answer = nert("Mustafa Kemal Atatürk, 19 Mayıs 1919'da Samsun'a çıktı.")
print(answer)
```
|
{"language": ["tr"], "library_name": "transformers", "license": "mit", "metrics": ["f1", "accuracy", "recall"], "tags": ["ner", "token-classification", "turkish"]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 43,477 |
cataluna84/distilbert-base-uncased-distilled-clinc
|
cataluna84
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-09-02T15:29:19Z |
2022-09-02T15:45:28+00:00
| 14 | 0 |
---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- type: accuracy
value: 0.9490322580645161
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2926
- Accuracy: 0.9490
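For a quick sanity check, a minimal inference sketch; the intent label returned is whatever the uploaded config maps it to, which is an assumption here:
```python
from transformers import pipeline

# Quick-start sketch; the model predicts one of the clinc_oos intent classes.
classifier = pipeline(
    "text-classification",
    model="cataluna84/distilbert-base-uncased-distilled-clinc",
)
print(classifier("Transfer twenty dollars from my checking to my savings account."))
```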
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.7039 | 1.0 | 318 | 2.7703 | 0.7519 |
| 2.1213 | 2.0 | 636 | 1.3972 | 0.8590 |
| 1.0629 | 3.0 | 954 | 0.7295 | 0.9174 |
| 0.5596 | 4.0 | 1272 | 0.4701 | 0.9339 |
| 0.3381 | 5.0 | 1590 | 0.3675 | 0.9445 |
| 0.2395 | 6.0 | 1908 | 0.3283 | 0.9432 |
| 0.1894 | 7.0 | 2226 | 0.3065 | 0.9471 |
| 0.1631 | 8.0 | 2544 | 0.2989 | 0.9474 |
| 0.1491 | 9.0 | 2862 | 0.2957 | 0.9471 |
| 0.1437 | 10.0 | 3180 | 0.2926 | 0.9490 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2926
- Accuracy: 0.9490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.7039 | 1.0 | 318 | 2.7703 | 0.7519 |
| 2.1213 | 2.0 | 636 | 1.3972 | 0.8590 |
| 1.0629 | 3.0 | 954 | 0.7295 | 0.9174 |
| 0.5596 | 4.0 | 1272 | 0.4701 | 0.9339 |
| 0.3381 | 5.0 | 1590 | 0.3675 | 0.9445 |
| 0.2395 | 6.0 | 1908 | 0.3283 | 0.9432 |
| 0.1894 | 7.0 | 2226 | 0.3065 | 0.9471 |
| 0.1631 | 8.0 | 2544 | 0.2989 | 0.9474 |
| 0.1491 | 9.0 | 2862 | 0.2957 | 0.9471 |
| 0.1437 | 10.0 | 3180 | 0.2926 | 0.9490 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9490322580645161, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,478 |
asadcr/autotrain-intelligize-edgar-analysis-2-1722460190
|
asadcr
|
summarization
|
[
"transformers",
"pytorch",
"autotrain",
"summarization",
"en",
"dataset:asadcr/autotrain-data-intelligize-edgar-analysis-2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | 2022-10-11T12:42:48Z |
2022-10-11T12:43:56+00:00
| 18 | 0 |
---
datasets:
- asadcr/autotrain-data-intelligize-edgar-analysis-2
language:
- en
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 0.9669951284881569
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1722460190
- CO2 Emissions (in grams): 0.9670
## Validation Metrics
- Loss: 1.652
- Rouge1: 50.229
- Rouge2: 41.591
- RougeL: 50.229
- RougeLsum: 53.205
- Gen Len: 10.250
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/asadcr/autotrain-intelligize-edgar-analysis-2-1722460190
```
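The same request can be made from Python; a minimal sketch with the `requests` library, mirroring the cURL call above (the URL and payload are copied from it; the JSON response shape follows the standard Inference API and is an assumption here):
```python
import requests

# Python equivalent of the cURL example above.
API_URL = "https://api-inference.huggingface.co/asadcr/autotrain-intelligize-edgar-analysis-2-1722460190"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```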
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1722460190
- CO2 Emissions (in grams): 0.9670
## Validation Metrics
- Loss: 1.652
- Rouge1: 50.229
- Rouge2: 41.591
- RougeL: 50.229
- RougeLsum: 53.205
- Gen Len: 10.250
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/asadcr/autotrain-intelligize-edgar-analysis-2-1722460190
```
|
{"datasets": ["asadcr/autotrain-data-intelligize-edgar-analysis-2"], "language": ["en"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 0.9669951284881569}}
|
task
|
[
"SUMMARIZATION"
] | 43,479 |
Sociovestix/lenu_NL
|
Sociovestix
|
text-classification
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-12-14T14:15:53Z |
2024-06-05T16:23:18+00:00
| 8 | 1 |
---
widget:
- text: Acht Acht B.V.
- text: Stichting Helison
- text: Hayman
- text: Melkveebedrijf Leferink
- text: Jointly
- text: Priority Shipping C.V.
- text: Hoofd en Hart Bewindvoering
- text: Dekker Kids Beheer N.V.
- text: PCI H. Bartholomeus
- text: AEGON Equity Europe Beta Pool (N)
- text: ScaleX Ventures Fund I Coöperatief U.A.
- text: Global Emerging Markets Equity Pool
- text: Gemeente Groningen
- text: Onderlinge Verzekeringmaatschappij "SOM" U.A.
- text: Vereniging van Appartementseigenaars "Henry Dunantpark III"
- text: Ferrovial Construction International SE
- text: Rederij BigLift Barentsz
model-index:
- name: Sociovestix/lenu_NL
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: lenu
type: Sociovestix/lenu
config: NL
split: test
revision: f4d57b8d77a49ec5c62d899c9a213d23cd9f9428
metrics:
- type: f1
value: 0.9814439261813019
name: f1
- type: f1
value: 0.7083817575469042
name: f1 macro
args:
average: macro
---
# LENU - Legal Entity Name Understanding for Netherlands
A BERT (multilingual uncased) model fine-tuned on Dutch legal entity names (jurisdiction NL) from the Global [Legal Entity Identifier](https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei)
(LEI) System, with the goal of detecting [Entity Legal Form (ELF) Codes](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list).
---------------
<h1 align="center">
<a href="https://gleif.org">
<img src="http://sdglabs.ai/wp-content/uploads/2022/07/gleif-logo-new.png" width="220px" style="display: inherit">
</a>
</h1><br>
<h3 align="center">in collaboration with</h3>
<h1 align="center">
<a href="https://sociovestix.com">
<img src="https://sociovestix.com/img/svl_logo_centered.svg" width="700px" style="width: 100%">
</a>
</h1><br>
---------------
## Model Description
<!-- Provide a longer summary of what this model is. -->
The model has been created as part of a collaboration of the [Global Legal Entity Identifier Foundation](https://gleif.org) (GLEIF) and
[Sociovestix Labs](https://sociovestix.com), with the goal of exploring how Machine Learning can support detection of the ELF Code based solely on an entity's legal name and legal jurisdiction.
See also the open-source Python library [lenu](https://github.com/Sociovestix/lenu), which supports this task.
The model has been trained on the dataset [lenu](https://huggingface.co/datasets/Sociovestix), with a focus on Dutch legal entities and ELF Codes within the jurisdiction "NL".
- **Developed by:** [GLEIF](https://gleif.org) and [Sociovestix Labs](https://huggingface.co/Sociovestix)
- **License:** Creative Commons (CC0) license
- **Finetuned from model [optional]:** bert-base-multilingual-uncased
- **Resources for more information:** [Press Release](https://www.gleif.org/en/newsroom/press-releases/machine-learning-new-open-source-tool-developed-by-gleif-and-sociovestix-labs-enables-organizations-everywhere-to-automatically-)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
An entity's legal form is a crucial component when verifying and screening organizational identity.
The wide variety of entity legal forms that exist within and between jurisdictions, however, has made it difficult for large organizations to capture legal form as structured data.
The Jurisdiction specific models of [lenu](https://github.com/Sociovestix/lenu), trained on entities from
GLEIF’s Legal Entity Identifier (LEI) database of over two million records, will allow banks,
investment firms, corporations, governments, and other large organizations to retrospectively analyze
their master data, extract the legal form from the unstructured text of the legal name and
uniformly apply an ELF code to each entity type, according to the ISO 20275 standard.
# Licensing Information
This model, which is trained on LEI data, is available under Creative Commons (CC0) license.
See [gleif.org/en/about/open-data](https://gleif.org/en/about/open-data).
# Recommendations
Users should always consider the score of the suggested ELF Codes. For low score values it may be necessary to manually review the affected entities.
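A minimal usage sketch with the transformers pipeline; the returned labels are the ELF Codes defined in the model's config, and the example entity name is taken from the widget list above:
```python
from transformers import pipeline

# Sketch: suggest an ELF Code for a legal entity name and inspect its score.
elf_classifier = pipeline("text-classification", model="Sociovestix/lenu_NL")

# The "score" field is the value the recommendation above refers to:
# low-scoring suggestions should be reviewed manually.
print(elf_classifier("Acht Acht B.V."))
```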
| null |
Non_BioNLP
|
# LENU - Legal Entity Name Understanding for Netherlands
A BERT (multilingual uncased) model fine-tuned on Dutch legal entity names (jurisdiction NL) from the Global [Legal Entity Identifier](https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei)
(LEI) System, with the goal of detecting [Entity Legal Form (ELF) Codes](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list).
---------------
<h1 align="center">
<a href="https://gleif.org">
<img src="http://sdglabs.ai/wp-content/uploads/2022/07/gleif-logo-new.png" width="220px" style="display: inherit">
</a>
</h1><br>
<h3 align="center">in collaboration with</h3>
<h1 align="center">
<a href="https://sociovestix.com">
<img src="https://sociovestix.com/img/svl_logo_centered.svg" width="700px" style="width: 100%">
</a>
</h1><br>
---------------
## Model Description
<!-- Provide a longer summary of what this model is. -->
The model has been created as part of a collaboration of the [Global Legal Entity Identifier Foundation](https://gleif.org) (GLEIF) and
[Sociovestix Labs](https://sociovestix.com), with the goal of exploring how Machine Learning can support detection of the ELF Code based solely on an entity's legal name and legal jurisdiction.
See also the open-source Python library [lenu](https://github.com/Sociovestix/lenu), which supports this task.
The model has been trained on the dataset [lenu](https://huggingface.co/datasets/Sociovestix), with a focus on Dutch legal entities and ELF Codes within the jurisdiction "NL".
- **Developed by:** [GLEIF](https://gleif.org) and [Sociovestix Labs](https://huggingface.co/Sociovestix)
- **License:** Creative Commons (CC0) license
- **Finetuned from model [optional]:** bert-base-multilingual-uncased
- **Resources for more information:** [Press Release](https://www.gleif.org/en/newsroom/press-releases/machine-learning-new-open-source-tool-developed-by-gleif-and-sociovestix-labs-enables-organizations-everywhere-to-automatically-)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
An entity's legal form is a crucial component when verifying and screening organizational identity.
The wide variety of entity legal forms that exist within and between jurisdictions, however, has made it difficult for large organizations to capture legal form as structured data.
The Jurisdiction specific models of [lenu](https://github.com/Sociovestix/lenu), trained on entities from
GLEIF’s Legal Entity Identifier (LEI) database of over two million records, will allow banks,
investment firms, corporations, governments, and other large organizations to retrospectively analyze
their master data, extract the legal form from the unstructured text of the legal name and
uniformly apply an ELF code to each entity type, according to the ISO 20275 standard.
# Licensing Information
This model, which is trained on LEI data, is available under Creative Commons (CC0) license.
See [gleif.org/en/about/open-data](https://gleif.org/en/about/open-data).
# Recommendations
Users should always consider the score of the suggested ELF Codes. For low score values it may be necessary to manually review the affected entities.
|
{"widget": [{"text": "Acht Acht B.V."}, {"text": "Stichting Helison"}, {"text": "Hayman"}, {"text": "Melkveebedrijf Leferink"}, {"text": "Jointly"}, {"text": "Priority Shipping C.V."}, {"text": "Hoofd en Hart Bewindvoering"}, {"text": "Dekker Kids Beheer N.V."}, {"text": "PCI H. Bartholomeus"}, {"text": "AEGON Equity Europe Beta Pool (N)"}, {"text": "ScaleX Ventures Fund I Coöperatief U.A."}, {"text": "Global Emerging Markets Equity Pool"}, {"text": "Gemeente Groningen"}, {"text": "Onderlinge Verzekeringmaatschappij \"SOM\" U.A."}, {"text": "Vereniging van Appartementseigenaars \"Henry Dunantpark III\""}, {"text": "Ferrovial Construction International SE"}, {"text": "Rederij BigLift Barentsz"}], "model-index": [{"name": "Sociovestix/lenu_NL", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "lenu", "type": "Sociovestix/lenu", "config": "NL", "split": "test", "revision": "f4d57b8d77a49ec5c62d899c9a213d23cd9f9428"}, "metrics": [{"type": "f1", "value": 0.9814439261813019, "name": "f1"}, {"type": "f1", "value": 0.7083817575469042, "name": "f1 macro", "args": {"average": "macro"}}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,481 |
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-76679499
|
fine-tuned
|
feature-extraction
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"custom_code",
"en",
"dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-76679499",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-28T23:44:16Z |
2024-05-28T23:44:30+00:00
| 7 | 0 |
---
datasets:
- fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-76679499
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-76679499',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| null |
Non_BioNLP
|
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-76679499',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
{"datasets": ["fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-76679499", "allenai/c4"], "language": ["en", "en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,482 |
Helsinki-NLP/opus-mt-ru-da
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T12:03:20+00:00
| 42 | 0 |
---
language:
- ru
- da
license: apache-2.0
tags:
- translation
---
### rus-dan
* source group: Russian
* target group: Danish
* OPUS readme: [rus-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): dan
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.dan | 56.6 | 0.714 |
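A minimal usage sketch with the transformers translation pipeline (standard MarianMT loading; the example sentence is illustrative):
```python
from transformers import pipeline

# Quick-start sketch for this Russian-to-Danish MarianMT checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-da")
print(translator("Я люблю читать книги.")[0]["translation_text"])
```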
### System Info:
- hf_name: rus-dan
- source_languages: rus
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'da']
- src_constituents: {'rus'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: dan
- short_pair: ru-da
- chrF2_score: 0.7140000000000001
- bleu: 56.6
- brevity_penalty: 0.977
- ref_len: 11746.0
- src_name: Russian
- tgt_name: Danish
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: da
- prefer_old: False
- long_pair: rus-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| null |
Non_BioNLP
|
### rus-dan
* source group: Russian
* target group: Danish
* OPUS readme: [rus-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): dan
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.dan | 56.6 | 0.714 |
### System Info:
- hf_name: rus-dan
- source_languages: rus
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'da']
- src_constituents: {'rus'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: dan
- short_pair: ru-da
- chrF2_score: 0.7140000000000001
- bleu: 56.6
- brevity_penalty: 0.977
- ref_len: 11746.0
- src_name: Russian
- tgt_name: Danish
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: da
- prefer_old: False
- long_pair: rus-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
{"language": ["ru", "da"], "license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 43,483 |
westenfelder/Llama-3.2-1B-Instruct-NL2SH
|
westenfelder
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2502.06858",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-05T03:56:40Z |
2025-04-06T20:28:52+00:00
| 17 | 0 |
---
library_name: transformers
license: mit
pipeline_tag: text-generation
---
This repository contains the model described in [LLM-Supported Natural Language to Bash Translation](https://arxiv.org/abs/2502.06858).
Code: https://github.com/westenfelder/NL2SH
| null |
Non_BioNLP
|
# Model Card for Llama-3.2-1B-Instruct-NL2SH
This model translates natural language (English) instructions to Bash commands.
## Model Details
### Model Description
This model is a fine-tuned version of the [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) model trained on the [NL2SH-ALFA](https://huggingface.co/datasets/westenfelder/NL2SH-ALFA) dataset for the task of natural language to Bash translation (NL2SH). For more information, please refer to the [paper](https://arxiv.org/abs/2502.06858).
- **Developed by:** [Anyscale Learning For All (ALFA) Group at MIT-CSAIL](https://alfagroup.csail.mit.edu/)
- **Language:** English
- **License:** MIT License
- **Finetuned from model:** [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
### Model Sources
- **Repository:** [GitHub Repo](https://github.com/westenfelder/NL2SH)
- **Paper:** [LLM-Supported Natural Language to Bash Translation](https://arxiv.org/abs/2502.06858)
## Uses
### Direct Use
This model is intended for research on machine translation. The model can also be used as an educational resource for learning Bash.
### Out-of-Scope Use
This model should not be used in production or automated systems without human verification.
**Considerations for use in high-risk environments:** This model should not be used in high-risk environments due to its low accuracy and potential for generating harmful commands.
## Bias, Risks, and Limitations
This model has a tendency to generate overly complex and incorrect Bash commands. It may produce harmful commands that delete data or corrupt a system. This model is not intended for natural languages other than English, scripting languages other than Bash, or multi-line Bash scripts.
### Recommendations
Users are encouraged to use this model as a Bash reference tool and should not execute generated commands without verification.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
def translate(prompt):
model_name = "westenfelder/Llama-3.2-1B-Instruct-NL2SH"
tokenizer = AutoTokenizer.from_pretrained(model_name, clean_up_tokenization_spaces=False)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="cuda", torch_dtype=torch.bfloat16)
messages = [
{"role": "system", "content": "Your task is to translate a natural language instruction to a Bash command. You will receive an instruction in English and output a Bash command that can be run in a Linux terminal."},
{"role": "user", "content": f"{prompt}"},
]
tokens = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_tensors="pt"
).to(model.device)
attention_mask = torch.ones_like(tokens)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
tokens,
attention_mask=attention_mask,
max_new_tokens=100,
eos_token_id=terminators,
pad_token_id=tokenizer.eos_token_id,
do_sample=False,
temperature=None,
top_p=None,
top_k=None,
)
response = outputs[0][tokens.shape[-1]:]
return tokenizer.decode(response, skip_special_tokens=True)
nl = "List files in the /workspace directory that were accessed over an hour ago."
sh = translate(nl)
print(sh)
```
## Training Details
### Training Data
This model was trained on the [NL2SH-ALFA](https://huggingface.co/datasets/westenfelder/NL2SH-ALFA) dataset.
### Training Procedure
Please refer to sections 4.1 and 4.3.4 of the [paper](https://arxiv.org/abs/2502.06858) for information about data pre-processing, training hyperparameters, and hardware.
## Evaluation
This model was evaluated on the [NL2SH-ALFA](https://huggingface.co/datasets/westenfelder/NL2SH-ALFA) test set using the [InterCode-ALFA](https://github.com/westenfelder/InterCode-ALFA) benchmark.
### Results
This model achieved an accuracy of **0.37** on the InterCode-ALFA benchmark.
## Environmental Impact
Experiments were conducted using private infrastructure with an approximate carbon efficiency of 0.432 kgCO2eq/kWh. A cumulative 12 hours of computation was performed on RTX A6000 hardware (TDP of 300W). Total emissions are estimated at 1.56 kgCO2eq, of which 0 percent was directly offset. Estimates were produced with the [Machine Learning Emissions Calculator](https://mlco2.github.io/impact#compute).
## Citation
**BibTeX:**
```
@misc{westenfelder2025llmsupportednaturallanguagebash,
title={LLM-Supported Natural Language to Bash Translation},
author={Finnian Westenfelder and Erik Hemberg and Miguel Tulla and Stephen Moskal and Una-May O'Reilly and Silviu Chiricescu},
year={2025},
eprint={2502.06858},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.06858},
}
```
## Model Card Authors
Finn Westenfelder
## Model Card Contact
Please email [email protected] or make a pull request.
|
{"base_model": "meta-llama/Llama-3.2-1B-Instruct", "datasets": ["westenfelder/NL2SH-ALFA"], "language": ["en"], "library_name": "transformers", "license": "mit", "pipeline_tag": "translation", "model-index": [{"name": "Llama-3.2-1B-Instruct-NL2SH", "results": [{"task": {"type": "translation", "name": "Natural Language to Bash Translation"}, "dataset": {"name": "NL2SH-ALFA", "type": "translation", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.37, "name": "InterCode-ALFA"}], "source": {"url": "https://arxiv.org/abs/2502.06858", "name": "InterCode-ALFA"}}]}]}
|
task
|
[
"TRANSLATION"
] | 43,484 |
yoninazarathy/distilbert-base-uncased-finetuned-cola
|
yoninazarathy
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-13T11:45:21Z |
2023-06-13T11:52:00+00:00
| 8 | 0 |
---
datasets:
- glue
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- type: matthews_correlation
value: 0.5363967157085073
name: Matthews Correlation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8120
- Matthews Correlation: 0.5364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.5222 | 0.4210 |
| 0.3466 | 2.0 | 1070 | 0.5042 | 0.4832 |
| 0.2335 | 3.0 | 1605 | 0.5640 | 0.5173 |
| 0.1812 | 4.0 | 2140 | 0.7634 | 0.5200 |
| 0.1334 | 5.0 | 2675 | 0.8120 | 0.5364 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8120
- Matthews Correlation: 0.5364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.5222 | 0.4210 |
| 0.3466 | 2.0 | 1070 | 0.5042 | 0.4832 |
| 0.2335 | 3.0 | 1605 | 0.5640 | 0.5173 |
| 0.1812 | 4.0 | 2140 | 0.7634 | 0.5200 |
| 0.1334 | 5.0 | 2675 | 0.8120 | 0.5364 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5363967157085073, "name": "Matthews Correlation"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,485 |
jncraton/gemma-1.1-2b-it-ct2-int8
|
jncraton
| null |
[
"transformers",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"license:gemma",
"endpoints_compatible",
"region:us"
] | 2024-04-27T11:33:07Z |
2024-04-27T11:37:38+00:00
| 4 | 0 |
---
library_name: transformers
license: gemma
widget:
- messages:
- role: user
content: How does the brain work?
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the latest 2B instruct version of the Gemma model. Here you can find other models in the Gemma family:
| | Base | Instruct |
|----|----------------------------------------------------|----------------------------------------------------------------------|
| 2B | [gemma-2b](https://huggingface.co/google/gemma-2b) | [**gemma-1.1-2b-it**](https://huggingface.co/google/gemma-1.1-2b-it) |
| 7B | [gemma-7b](https://huggingface.co/google/gemma-7b) | [gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it) |
**Release Notes**
This is Gemma 1.1 2B (IT), an update over the original instruction-tuned Gemma release.
Gemma 1.1 was trained using a novel RLHF method, leading to substantial gains on quality, coding capabilities, factuality, instruction following and multi-turn conversation quality. We also fixed a bug in multi-turn conversations, and made sure that model responses don't always start with `"Sure,"`.
We believe this release represents an improvement for most use cases, but we encourage users to test in their particular applications. The previous model [will continue to be available in the same repo](https://huggingface.co/google/gemma-2b-it). We appreciate the enthusiastic adoption of Gemma, and we continue to welcome all feedback from the community.
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
#### Running the model on a CPU
As explained below, we recommend `torch.bfloat16` as the default dtype. You can use [a different precision](#precisions) if necessary.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by specifying the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
quantization_config=quantization_config
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
quantization_config=quantization_config
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
#### Running the model in JAX / Flax
Use the `flax` branch of the repository:
```python
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxGemmaForCausalLM
model_id = "google/gemma-1.1-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "left"
model, params = FlaxGemmaForCausalLM.from_pretrained(
model_id,
dtype=jnp.bfloat16,
revision="flax",
_do_init=False,
)
inputs = tokenizer("Valencia and Málaga are", return_tensors="np", padding=True)
output = model.generate(**inputs, params=params, max_new_tokens=20, do_sample=False)
output_text = tokenizer.batch_decode(output.sequences, skip_special_tokens=True)
```
[Check this notebook](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/jax_gemma.ipynb) for a comprehensive walkthrough on how to parallelize JAX inference.
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-1.1-2b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
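For illustration, a minimal helper that reproduces the format by hand might look like this (the function name and conversation structure are our own; only the control tokens come from the template shown above):
```py
def build_gemma_prompt(turns):
    """Build a Gemma-style chat prompt from (role, text) pairs, role being "user" or "model"."""
    prompt = "<bos>"
    for role, text in turns:
        prompt += f"<start_of_turn>{role}\n{text}<end_of_turn>\n"
    # End with an open model turn so the model knows it should answer next.
    prompt += "<start_of_turn>model\n"
    return prompt

prompt = build_gemma_prompt([("user", "Write a hello world program")])
```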
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
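To print only the model's reply (rather than the prompt plus the reply), you can decode just the tokens generated after the input. This is a convenience sketch rather than part of the original card:
```py
# Strip the prompt tokens and decode only the newly generated ones.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```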
### Fine-tuning
You can find some fine-tuning scripts under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt them to this model, simply change the model-id to `google/gemma-1.1-2b-it`.
We provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on the English quotes dataset
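As a rough illustration of the QLoRA setup such scripts rely on (not a copy of them), 4-bit loading can be combined with a LoRA adapter from `peft`. The rank, alpha, and target module names below are typical choices, not values taken from the official scripts:
```python
# pip install peft bitsandbytes accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "google/gemma-1.1-2b-it"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Train only small low-rank adapters on top of the frozen, quantized base model.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```
From here, the adapter can be trained with any standard causal-LM training loop or a library such as TRL.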
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
  exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
The pre-trained base models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1803.05457) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1803.05457) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **45.0** | **56.9** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 1.0
| Benchmark | Metric | Gemma 1.0 IT 2B | Gemma 1.0 IT 7B |
| ------------------------ | ------------- | --------------- | --------------- |
| [RealToxicity][realtox] | average | 6.86 | 7.90 |
| [BOLD][bold] | | 45.57 | 49.08 |
| [CrowS-Pairs][crows] | top-1 | 45.82 | 51.33 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig][bbq] | top-1 | 54.62 | 71.99 |
| [Winogender][winogender] | top-1 | 51.25 | 54.17 |
| [TruthfulQA][truthfulqa] | | 44.84 | 31.81 |
| [Winobias 1_2][winobias] | | 56.12 | 59.09 |
| [Winobias 2_2][winobias] | | 91.10 | 92.23 |
| [Toxigen][toxigen] | | 29.77 | 39.59 |
| ------------------------ | ------------- | --------------- | --------------- |
#### Gemma 1.1
| Benchmark | Metric | Gemma 1.1 IT 2B | Gemma 1.1 IT 7B |
| ------------------------ | ------------- | --------------- | --------------- |
| [RealToxicity][realtox] | average | 7.03 | 8.04 |
| [BOLD][bold] | | 47.76 | |
| [CrowS-Pairs][crows] | top-1 | 45.89 | 49.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 58.97 | 86.06 |
| [BBQ Disambig][bbq] | top-1 | 53.90 | 85.08 |
| [Winogender][winogender] | top-1 | 50.14 | 57.64 |
| [TruthfulQA][truthfulqa] | | 44.24 | 45.34 |
| [Winobias 1_2][winobias] | | 55.93 | 59.22 |
| [Winobias 2_2][winobias] | | 89.46 | 89.2 |
| [Toxigen][toxigen] | | 29.64 | 38.75 |
| ------------------------ | ------------- | --------------- | --------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open model
alternatives.
| null |
Non_BioNLP
|
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the latest 2B instruct version of the Gemma model. Here you can find other models in the Gemma family:
| | Base | Instruct |
|----|----------------------------------------------------|----------------------------------------------------------------------|
| 2B | [gemma-2b](https://huggingface.co/google/gemma-2b) | [**gemma-1.1-2b-it**](https://huggingface.co/google/gemma-1.1-2b-it) |
| 7B | [gemma-7b](https://huggingface.co/google/gemma-7b) | [gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it) |
**Release Notes**
This is Gemma 1.1 2B (IT), an update over the original instruction-tuned Gemma release.
Gemma 1.1 was trained using a novel RLHF method, leading to substantial gains in quality, coding capabilities, factuality, instruction following, and multi-turn conversation quality. We also fixed a bug in multi-turn conversations, and made sure that model responses don't always start with `"Sure,"`.
We believe this release represents an improvement for most use cases, but we encourage users to test in their particular applications. The previous model [will continue to be available in the same repo](https://huggingface.co/google/gemma-2b-it). We appreciate the enthusiastic adoption of Gemma, and we continue to welcome all feedback from the community.
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a CPU
As explained below, we recommend `torch.bfloat16` as the default dtype. You can use [a different precision](#precisions) if necessary.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
If you skip the dtype, the model will be loaded in `float32`; this does not increase precision (the original `bfloat16` weights are simply upcast to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-2b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
|
{"library_name": "transformers", "license": "gemma", "widget": [{"messages": [{"role": "user", "content": "How does the brain work?"}]}], "inference": {"parameters": {"max_new_tokens": 200}}, "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 43,486 |
funnel-transformer/large-base
|
funnel-transformer
|
feature-extraction
|
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2024-02-29T09:00:16+00:00
| 26 | 1 |
---
datasets:
- bookcorpus
- wikipedia
- gigaword
language: en
license: apache-2.0
---
# Funnel Transformer large model (B8-8-8 without decoder)
Pretrained model on English language using an objective similar to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `large` model in that case.
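As a quick way to see this in practice, the following sketch (ours, not from the original card) compares the tokenized input length with the length of the hidden states the model returns:
```python
from transformers import FunnelTokenizer, FunnelBaseModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/large-base")

encoded = tokenizer("A reasonably long example sentence to illustrate the pooling behaviour.", return_tensors="pt")
output = model(**encoded)

# The encoder pools the sequence twice, so the output is roughly a quarter as long as the input.
print(encoded["input_ids"].shape[1], output.last_hidden_state.shape[1])
```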
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/large-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/large-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
| null |
Non_BioNLP
|
|
{"datasets": ["bookcorpus", "wikipedia", "gigaword"], "language": "en", "license": "apache-2.0"}
|
task
|
[
"QUESTION_ANSWERING"
] | 43,487 |
vidhi0206/setfit-paraphrase-mpnet-amazoncf
|
vidhi0206
|
text-classification
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | 2024-02-24T12:21:10Z |
2024-02-24T19:36:15+00:00
| 4 | 0 |
---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: i honestly thought impossible at this point i feel pretty
- text: i feel convinced that im going to shy away from whatever is really good for
me
- text: i feel guilt that i should be more caring and im not
- text: i found myself feeling nostalgic as i thought about the temporarily abandoned
little bishop chronicles
- text: i am feeling very indecisive and spontaneous
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.439
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 6 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 4 | <ul><li>'i feel kind of strange'</li><li>'i am feeling pretty restless right now while typing this'</li><li>'i feel pressured when people say im going t beat you or whatever'</li></ul> |
| 3 | <ul><li>'i feel cranky and annoyed when i dont'</li><li>'i feel i did some thing impolite katanya'</li><li>'i feel like i should be offended but yawwwn'</li></ul> |
| 5 | <ul><li>'i was feeling an act of god at work in my life and it was an amazing feeling'</li><li>'i tween sat for my moms boss year old and year old boys this weekend id say babysit but that feels weird considering there were n'</li><li>'i started feeling funny and then friday i woke up sick as a dog'</li></ul> |
| 0 | <ul><li>'i am from new jersey and this first drink was consumed at a post prom party so i feel it s appropriately lame'</li><li>'i feel inside cause life is like a game sometimes then you came around me the walls just disappeared nothing to surround me and keep me from my fears im unprotected see how ive opened up oh youve made me trust cause ive never felt like this before im naked around you does it show'</li><li>'i cant believe with that statement being said that im already feeling sexually deprived'</li></ul> |
| 2 | <ul><li>'i suddenly feel that this is more than a sweet love song that every girls could sing in front of their boyfriends'</li><li>'i really wish i had the courage to drag a blade across my skin i wish i could do it i wish i could see the blood and feel that sweet release as it starts to pour out of my flesh and down my body'</li><li>'im sure they feel the more caring loving people in the kids lives the better'</li></ul> |
| 1 | <ul><li>'i am not feeling particularly creative'</li><li>'id probably go with none on and hope that my date admires a confident girl who feels fine without makeup'</li><li>'i woke on saturday feeling a little brighter and was very keen to get outdoors after spending all day friday wallowing in self pity'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.439 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-amazoncf")
# Run inference
preds = model("i am feeling very indecisive and spontaneous")
```
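The values in `preds` are the integer class ids from the label table below. If you also want per-class probabilities, something along the following lines should work (a sketch using the model's logistic-regression head; not part of the original training code):
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-amazoncf")
# Probabilities from the LogisticRegression head: one row per input, one column per class (0-5).
probs = model.predict_proba([
    "i am feeling very indecisive and spontaneous",
    "i feel cranky and annoyed when i dont",
])
print(probs.shape)
```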
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 17.6458 | 55 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
| 2 | 8 |
| 3 | 8 |
| 4 | 8 |
| 5 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
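For reference, the hyperparameters above correspond roughly to the following training call. This is a reconstruction sketch, not the actual script that produced this checkpoint, and the tiny inline dataset is only a placeholder:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder training data with the same "text"/"label" structure as the real dataset.
train_dataset = Dataset.from_dict({
    "text": ["i feel kind of strange", "i am not feeling particularly creative"],
    "label": [4, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(
    batch_size=8,
    num_epochs=1,
    num_iterations=20,
    body_learning_rate=2e-05,
    head_learning_rate=2e-05,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```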
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.2972 | - |
| 0.2083 | 50 | 0.1452 | - |
| 0.4167 | 100 | 0.0452 | - |
| 0.625 | 150 | 0.0085 | - |
| 0.8333 | 200 | 0.0011 | - |
### Framework Versions
- Python: 3.8.10
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 6 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 4 | <ul><li>'i feel kind of strange'</li><li>'i am feeling pretty restless right now while typing this'</li><li>'i feel pressured when people say im going t beat you or whatever'</li></ul> |
| 3 | <ul><li>'i feel cranky and annoyed when i dont'</li><li>'i feel i did some thing impolite katanya'</li><li>'i feel like i should be offended but yawwwn'</li></ul> |
| 5 | <ul><li>'i was feeling an act of god at work in my life and it was an amazing feeling'</li><li>'i tween sat for my moms boss year old and year old boys this weekend id say babysit but that feels weird considering there were n'</li><li>'i started feeling funny and then friday i woke up sick as a dog'</li></ul> |
| 0 | <ul><li>'i am from new jersey and this first drink was consumed at a post prom party so i feel it s appropriately lame'</li><li>'i feel inside cause life is like a game sometimes then you came around me the walls just disappeared nothing to surround me and keep me from my fears im unprotected see how ive opened up oh youve made me trust cause ive never felt like this before im naked around you does it show'</li><li>'i cant believe with that statement being said that im already feeling sexually deprived'</li></ul> |
| 2 | <ul><li>'i suddenly feel that this is more than a sweet love song that every girls could sing in front of their boyfriends'</li><li>'i really wish i had the courage to drag a blade across my skin i wish i could do it i wish i could see the blood and feel that sweet release as it starts to pour out of my flesh and down my body'</li><li>'im sure they feel the more caring loving people in the kids lives the better'</li></ul> |
| 1 | <ul><li>'i am not feeling particularly creative'</li><li>'id probably go with none on and hope that my date admires a confident girl who feels fine without makeup'</li><li>'i woke on saturday feeling a little brighter and was very keen to get outdoors after spending all day friday wallowing in self pity'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.439 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-amazoncf")
# Run inference
preds = model("i am feeling very indecisive and spontaneous")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 17.6458 | 55 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
| 2 | 8 |
| 3 | 8 |
| 4 | 8 |
| 5 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.2972 | - |
| 0.2083 | 50 | 0.1452 | - |
| 0.4167 | 100 | 0.0452 | - |
| 0.625 | 150 | 0.0085 | - |
| 0.8333 | 200 | 0.0011 | - |
### Framework Versions
- Python: 3.8.10
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "i honestly thought impossible at this point i feel pretty"}, {"text": "i feel convinced that im going to shy away from whatever is really good for me"}, {"text": "i feel guilt that i should be more caring and im not"}, {"text": "i found myself feeling nostalgic as i thought about the temporarily abandoned little bishop chronicles"}, {"text": "i am feeling very indecisive and spontaneous"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.439, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,488 |
ksg-dfci/trialstest
|
ksg-dfci
|
feature-extraction
|
[
"sentence-transformers",
"safetensors",
"qwen2",
"sentence-similarity",
"medical",
"clinicaltrials",
"cancer",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1395384",
"loss:OnlineContrastiveLoss",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"en",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:finetune:NovaSearch/stella_en_1.5B_v5",
"license:cc-by-nc-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-30T22:19:46Z |
2024-12-30T22:50:09+00:00
| 8 | 0 |
---
base_model:
- dunzhang/stella_en_1.5B_v5
language:
- en
library_name: sentence-transformers
license: cc-by-nc-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- sentence-similarity
- medical
- clinicaltrials
- cancer
- feature-extraction
- generated_from_trainer
- dataset_size:1395384
- loss:OnlineContrastiveLoss
- loss:MultipleNegativesRankingLoss
---
# SentenceTransformer
This is a trained [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
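The card ships no usage snippet. As an illustrative sketch only, assuming the checkpoint loads through the standard Sentence Transformers API (it ships custom code, hence `trust_remote_code=True`), encoding and comparing two texts could look like this; the example sentences are made up:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ksg-dfci/trialstest", trust_remote_code=True)

sentences = [
    "Phase II trial of a targeted therapy in metastatic breast cancer.",
    "A study evaluating a new treatment for advanced breast cancer.",
]
embeddings = model.encode(sentences)               # expected shape: (2, 1024)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the two texts
```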
| null |
BioNLP
|
# SentenceTransformer
This is a trained [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 tokens
- **Similarity Function:** Cosine Similarity
|
{"base_model": ["dunzhang/stella_en_1.5B_v5"], "language": ["en"], "library_name": "sentence-transformers", "license": "cc-by-nc-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "sentence-similarity", "medical", "clinicaltrials", "cancer", "feature-extraction", "generated_from_trainer", "dataset_size:1395384", "loss:OnlineContrastiveLoss", "loss:MultipleNegativesRankingLoss"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,489 |
gokuls/add_BERT_24_sst2
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-16T11:09:08Z |
2023-06-16T11:54:48+00:00
| 10 | 0 |
---
datasets:
- glue
language:
- en
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: add_BERT_24_sst2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- type: accuracy
value: 0.7844036697247706
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_BERT_24_sst2
This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4505
- Accuracy: 0.7844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.537 | 1.0 | 527 | 0.4505 | 0.7844 |
| 0.2966 | 2.0 | 1054 | 0.5377 | 0.8073 |
| 0.2313 | 3.0 | 1581 | 0.4930 | 0.8016 |
| 0.1989 | 4.0 | 2108 | 0.5843 | 0.7959 |
| 0.1696 | 5.0 | 2635 | 0.5200 | 0.8039 |
| 0.1491 | 6.0 | 3162 | 0.6076 | 0.7856 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_BERT_24_sst2
This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4505
- Accuracy: 0.7844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.537 | 1.0 | 527 | 0.4505 | 0.7844 |
| 0.2966 | 2.0 | 1054 | 0.5377 | 0.8073 |
| 0.2313 | 3.0 | 1581 | 0.4930 | 0.8016 |
| 0.1989 | 4.0 | 2108 | 0.5843 | 0.7959 |
| 0.1696 | 5.0 | 2635 | 0.5200 | 0.8039 |
| 0.1491 | 6.0 | 3162 | 0.6076 | 0.7856 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
{"datasets": ["glue"], "language": ["en"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "add_BERT_24_sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE SST2", "type": "glue", "config": "sst2", "split": "validation", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.7844036697247706, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,490 |
AhmedSSoliman/LUKE-Marian-Model-on-DJANGO
|
AhmedSSoliman
|
translation
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"Machine Translation ",
"Code Generation",
"Text Generation",
"translation",
"en",
"dataset:AhmedSSoliman/DJANGO",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-11T22:18:29Z |
2023-07-30T11:57:00+00:00
| 13 | 0 |
---
datasets:
- AhmedSSoliman/DJANGO
language:
- en
license: mit
metrics:
- bleu
- accuracy
pipeline_tag: translation
tags:
- 'Machine Translation '
- Code Generation
- Text Generation
---
| null |
Non_BioNLP
|
{"datasets": ["AhmedSSoliman/DJANGO"], "language": ["en"], "license": "mit", "metrics": ["bleu", "accuracy"], "pipeline_tag": "translation", "tags": ["Machine Translation ", "Code Generation", "Text Generation"]}
|
task
|
[
"TRANSLATION"
] | 43,491 |
|
climatebert/distilroberta-base-climate-commitment
|
climatebert
|
text-classification
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:climatebert/climate_commitments_actions",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-06-20T18:51:55+00:00
| 31,771 | 2 |
---
datasets:
- climatebert/climate_commitments_actions
language:
- en
license: apache-2.0
metrics:
- accuracy
---
# Model Card for distilroberta-base-climate-commitment
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head that classifies whether a climate-related paragraph is about climate commitments and actions or not.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as a starting point, the distilroberta-base-climate-commitment model is fine-tuned on our [climatebert/climate_commitments_actions](https://huggingface.co/datasets/climatebert/climate_commitments_actions) dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/climate_commitments_actions"
model_name = "climatebert/distilroberta-base-climate-commitment"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
```
| null |
Non_BioNLP
|
# Model Card for distilroberta-base-climate-commitment
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head that classifies whether a climate-related paragraph is about climate commitments and actions or not.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as a starting point, the distilroberta-base-climate-commitment model is fine-tuned on our [climatebert/climate_commitments_actions](https://huggingface.co/datasets/climatebert/climate_commitments_actions) dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/climate_commitments_actions"
model_name = "climatebert/distilroberta-base-climate-commitment"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
```
|
{"datasets": ["climatebert/climate_commitments_actions"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,492 |
Helsinki-NLP/opus-mt-en-ca
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:29:09+00:00
| 2,084 | 1 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-en-ca
* source languages: en
* target languages: ca
* OPUS readme: [en-ca](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ca/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ca | 47.2 | 0.665 |
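The card does not include a usage example. A minimal sketch with the standard 🤗 Transformers Marian classes (the input sentence is arbitrary) could look like:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ca"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an arbitrary English sentence into Catalan.
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```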
| null |
Non_BioNLP
|
### opus-mt-en-ca
* source languages: en
* target languages: ca
* OPUS readme: [en-ca](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ca/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ca | 47.2 | 0.665 |
|
{"license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 43,493 |
aaraki/marian-finetuned-kde4-en-to-fr
|
aaraki
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2022-03-02T01:54:57+00:00
| 118 | 0 |
---
datasets:
- kde4
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- type: bleu
value: 52.94560734092563
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Bleu: 52.9456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
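No inference example is given in this card. As a hedged sketch, the checkpoint should be usable through the standard translation pipeline; the input string below is an arbitrary KDE-style UI message, not taken from the card:
```python
from transformers import pipeline

translator = pipeline("translation", model="aaraki/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```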
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Bleu: 52.9456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"datasets": ["kde4"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 52.94560734092563, "name": "Bleu"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 43,494 |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task800
|
Lots-of-LoRAs
| null |
[
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 2025-01-02T14:46:02Z |
2025-01-02T14:46:07+00:00
| 0 | 0 |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task800
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task800_pawsx_spanish_japanese_translation
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
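In the absence of an official snippet, a rough sketch follows. It assumes the repository holds a standard PEFT LoRA adapter for the base model listed above (not confirmed by this card); the prompt is a made-up placeholder.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task800"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights

messages = [{"role": "user", "content": "Translate this Spanish sentence to Japanese: Hola, ¿cómo estás?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```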
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task800_pawsx_spanish_japanese_translation sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task800
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task800_pawsx_spanish_japanese_translation
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task800_pawsx_spanish_japanese_translation sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "language": "en", "library_name": "pytorch", "license": "mit"}
|
task
|
[
"TRANSLATION"
] | 43,495 |
cgus/granite-3.1-2b-instruct-exl2
|
cgus
|
text-generation
|
[
"granite",
"language",
"granite-3.1",
"text-generation",
"conversational",
"arxiv:0000.00000",
"base_model:ibm-granite/granite-3.1-2b-instruct",
"base_model:quantized:ibm-granite/granite-3.1-2b-instruct",
"license:apache-2.0",
"4-bit",
"exl2",
"region:us"
] | 2025-01-22T21:54:24Z |
2025-01-22T22:06:07+00:00
| 18 | 0 |
---
base_model:
- ibm-granite/granite-3.1-2b-instruct
license: apache-2.0
pipeline_tag: text-generation
tags:
- language
- granite-3.1
inference: false
---
# Granite-3.1-2B-Instruct-exl2
Original model: [granite-3.1-2b-instruct](https://huggingface.co/ibm-granite/granite-3.1-2b-instruct)
Creator: [Granite Team, IBM](https://huggingface.co/ibm-granite)
## Quants
[4bpw h6 (main)](https://huggingface.co/cgus/granite-3.1-2b-instruct-exl2/tree/main)
[4.5bpw h6](https://huggingface.co/cgus/granite-3.1-2b-instruct-exl2/tree/4.5bpw-h6)
[5bpw h6](https://huggingface.co/cgus/granite-3.1-2b-instruct-exl2/tree/5bpw-h6)
[6bpw h6](https://huggingface.co/cgus/granite-3.1-2b-instruct-exl2/tree/6bpw-h6)
[8bpw h8](https://huggingface.co/cgus/granite-3.1-2b-instruct-exl2/tree/8bpw-h8)
## Quantization notes
Quantized with Exllamav2 0.2.7 using its default calibration dataset. Exllamav2 0.2.7 is the first release that supports the Granite architecture,
so this model works only with apps that ship exllamav2 0.2.7 or newer,
such as the latest TabbyAPI or Text-Generation-WebUI.
These apps require either an Nvidia RTX card on Windows, or an Nvidia RTX or AMD ROCm GPU on Linux.
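For local Python use, a rough loading sketch based on exllamav2's basic dynamic-generator example is given below; the local path, sequence length and prompt are placeholders and have not been tested with this quant:
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/models/granite-3.1-2b-instruct-exl2"  # local download of one of the branches above

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, max_seq_len=8192, lazy=True)
model.load_autosplit(cache, progress=True)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Name one IBM Research laboratory.", max_new_tokens=64))
```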
# Original model card
# Granite-3.1-2B-Instruct
**Model Summary:**
Granite-3.1-2B-Instruct is a 2B parameter long-context instruct model finetuned from Granite-3.1-2B-Base using a combination of open source instruction datasets with permissive license and internally collected synthetic datasets tailored for solving long context problems. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.
- **Developers:** Granite Team, IBM
- **GitHub Repository:** [ibm-granite/granite-3.1-language-models](https://github.com/ibm-granite/granite-3.1-language-models)
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Paper:** [Granite 3.1 Language Models (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d)
- **Release Date**: December 18th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.1 models for languages beyond these 12 languages.
**Intended Use:**
The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.
*Capabilities*
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code related tasks
* Function-calling tasks
* Multilingual dialog use cases
* Long-context tasks including long document/meeting summarization, long document QA, etc.
**Generation:**
This is a simple example of how to use Granite-3.1-2B-Instruct model.
Install the following libraries:
```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```
Then, copy the snippet from the section that is relevant for your use case.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "auto"
model_path = "ibm-granite/granite-3.1-2b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
{ "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt").to(model.device)
# generate output tokens
output = model.generate(**input_tokens,
max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
```
**Model Architecture:**
Granite-3.1-2B-Instruct is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings.
| Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
| :-------- | :--------| :-------- | :------| :------|
| Embedding size | **2048** | 4096 | 1024 | 1536 |
| Number of layers | **40** | 40 | 24 | 32 |
| Attention head size | **64** | 128 | 64 | 64 |
| Number of attention heads | **32** | 32 | 16 | 24 |
| Number of KV heads | **8** | 8 | 8 | 8 |
| MLP hidden size | **8192** | 12800 | 512 | 512 |
| MLP activation | **SwiGLU** | SwiGLU | SwiGLU | SwiGLU |
| Number of experts | **—** | — | 32 | 40 |
| MoE TopK | **—** | — | 8 | 8 |
| Initialization std | **0.1** | 0.1 | 0.1 | 0.1 |
| Sequence length | **128K** | 128K | 128K | 128K |
| Position embedding | **RoPE** | RoPE | RoPE | RoPE |
| # Parameters | **2.5B** | 8.1B | 1.3B | 3.3B |
| # Active parameters | **2.5B** | 8.1B | 400M | 800M |
| # Training tokens | **12T** | 12T | 10T | 10T |
**Training Data:**
Overall, our SFT data is largely comprised of three key sources: (1) publicly available datasets with permissive license, (2) internal synthetic data targeting specific capabilities including long-context tasks, and (3) very small amounts of human-curated data. A detailed attribution of datasets can be found in the [Granite 3.0 Technical Report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf), [Granite 3.1 Technical Report (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf).
**Infrastructure:**
We train Granite 3.1 Language Models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
**Ethical Considerations and Limitations:**
Granite 3.1 Instruct Models are primarily finetuned using instruction-response pairs mostly in English, but also multilingual data covering eleven languages. Although this model can handle multilingual dialog use cases, its performance might not be similar to English tasks. In such case, introducing a small number of examples (few-shot) can help the model in generating more accurate outputs. While this model has been aligned by keeping safety in consideration, the model may in some cases produce inaccurate, biased, or unsafe responses to user prompts. So we urge the community to use this model with proper safety testing and tuning tailored for their specific tasks.
**Resources**
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
<!-- ## Citation
```
@misc{granite-models,
author = {author 1, author2, ...},
title = {},
journal = {},
volume = {},
year = {2024},
url = {https://arxiv.org/abs/0000.00000},
}
``` -->
| null |
Non_BioNLP
|
# Granite-3.1-2B-Instruct-exl2
Original model: [granite-3.1-2b-instruct](https://huggingface.co/ibm-granite/granite-3.1-2b-instruct)
Creator: [Granite Team, IBM](https://huggingface.co/ibm-granite)
## Quants
[4bpw h6 (main)](https://huggingface.co/cgus/granite-3.1-2b-instruct-exl2/tree/main)
[4.5bpw h6](https://huggingface.co/cgus/granite-3.1-2b-instruct-exl2/tree/4.5bpw-h6)
[5bpw h6](https://huggingface.co/cgus/granite-3.1-2b-instruct-exl2/tree/5bpw-h6)
[6bpw h6](https://huggingface.co/cgus/granite-3.1-2b-instruct-exl2/tree/6bpw-h6)
[8bpw h8](https://huggingface.co/cgus/granite-3.1-2b-instruct-exl2/tree/8bpw-h8)
## Quantization notes
Quantized with Exllamav2 0.2.7 using its default calibration dataset. Exllamav2 0.2.7 is the first release that supports the Granite architecture,
so this model works only with apps that ship exllamav2 0.2.7 or newer,
such as the latest TabbyAPI or Text-Generation-WebUI.
These apps require either an Nvidia RTX card on Windows, or an Nvidia RTX or AMD ROCm GPU on Linux.
# Original model card
# Granite-3.1-2B-Instruct
**Model Summary:**
Granite-3.1-2B-Instruct is a 2B parameter long-context instruct model finetuned from Granite-3.1-2B-Base using a combination of open source instruction datasets with permissive license and internally collected synthetic datasets tailored for solving long context problems. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.
- **Developers:** Granite Team, IBM
- **GitHub Repository:** [ibm-granite/granite-3.1-language-models](https://github.com/ibm-granite/granite-3.1-language-models)
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Paper:** [Granite 3.1 Language Models (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d)
- **Release Date**: December 18th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.1 models for languages beyond these 12 languages.
**Intended Use:**
The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.
*Capabilities*
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code related tasks
* Function-calling tasks
* Multilingual dialog use cases
* Long-context tasks including long document/meeting summarization, long document QA, etc.
**Generation:**
This is a simple example of how to use Granite-3.1-2B-Instruct model.
Install the following libraries:
```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```
Then, copy the snippet from the section that is relevant for your use case.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "auto"
model_path = "ibm-granite/granite-3.1-2b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
{ "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt").to(model.device)
# generate output tokens
output = model.generate(**input_tokens,
max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
```
**Model Architecture:**
Granite-3.1-2B-Instruct is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings.
| Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
| :-------- | :--------| :-------- | :------| :------|
| Embedding size | **2048** | 4096 | 1024 | 1536 |
| Number of layers | **40** | 40 | 24 | 32 |
| Attention head size | **64** | 128 | 64 | 64 |
| Number of attention heads | **32** | 32 | 16 | 24 |
| Number of KV heads | **8** | 8 | 8 | 8 |
| MLP hidden size | **8192** | 12800 | 512 | 512 |
| MLP activation | **SwiGLU** | SwiGLU | SwiGLU | SwiGLU |
| Number of experts | **—** | — | 32 | 40 |
| MoE TopK | **—** | — | 8 | 8 |
| Initialization std | **0.1** | 0.1 | 0.1 | 0.1 |
| Sequence length | **128K** | 128K | 128K | 128K |
| Position embedding | **RoPE** | RoPE | RoPE | RoPE |
| # Parameters | **2.5B** | 8.1B | 1.3B | 3.3B |
| # Active parameters | **2.5B** | 8.1B | 400M | 800M |
| # Training tokens | **12T** | 12T | 10T | 10T |
**Training Data:**
Overall, our SFT data is largely comprised of three key sources: (1) publicly available datasets with permissive license, (2) internal synthetic data targeting specific capabilities including long-context tasks, and (3) very small amounts of human-curated data. A detailed attribution of datasets can be found in the [Granite 3.0 Technical Report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf), [Granite 3.1 Technical Report (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf).
**Infrastructure:**
We train Granite 3.1 Language Models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
**Ethical Considerations and Limitations:**
Granite 3.1 Instruct Models are primarily finetuned using instruction-response pairs mostly in English, but also multilingual data covering eleven languages. Although this model can handle multilingual dialog use cases, its performance might not be similar to English tasks. In such case, introducing a small number of examples (few-shot) can help the model in generating more accurate outputs. While this model has been aligned by keeping safety in consideration, the model may in some cases produce inaccurate, biased, or unsafe responses to user prompts. So we urge the community to use this model with proper safety testing and tuning tailored for their specific tasks.
**Resources**
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
<!-- ## Citation
```
@misc{granite-models,
author = {author 1, author2, ...},
title = {},
journal = {},
volume = {},
year = {2024},
url = {https://arxiv.org/abs/0000.00000},
}
``` -->
|
{"base_model": ["ibm-granite/granite-3.1-2b-instruct"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["language", "granite-3.1"], "inference": false}
|
task
|
[
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | 43,496 |
MultiBertGunjanPatrick/multiberts-seed-0-40k
|
MultiBertGunjanPatrick
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-0",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-10-04T04:55:04+00:00
| 102 | 0 |
---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
- multiberts
- multiberts-seed-0
---
# MultiBERTs Seed 0 Checkpoint 40k (uncased)
Seed 0 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-40k')
model = BertModel.from_pretrained("multiberts-seed-0-40k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
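For the masked language modeling use mentioned above, a minimal sketch is shown below. It assumes the same checkpoint identifier as the snippet above; since this is an early intermediate checkpoint, the predictions may be of low quality.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-40k')
model = BertForMaskedLM.from_pretrained('multiberts-seed-0-40k')

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and show the five most likely fillers.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```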
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
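The corruption rule above can be expressed compactly in code. The sketch below is an illustrative reimplementation of the 80/10/10 rule over already-tokenized ids, not the original training code.

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Apply the MLM corruption described above: 15% of tokens are selected;
    of those, 80% become [MASK], 10% become a random token, 10% stay unchanged."""
    corrupted = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 marks positions ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:
            labels[i] = tok
            r = random.random()
            if r < 0.8:
                corrupted[i] = mask_id
            elif r < 0.9:
                corrupted[i] = random.randrange(vocab_size)
            # else: keep the original token unchanged
    return corrupted, labels
```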
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| null |
Non_BioNLP
|
# MultiBERTs Seed 0 Checkpoint 40k (uncased)
Seed 0 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-40k')
model = BertModel.from_pretrained("multiberts-seed-0-40k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-0"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 43,497 |
angelitasr/bge-base-en-v1.5_eeid_all
|
angelitasr
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4370",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-07T19:18:31Z |
2024-12-07T19:27:26+00:00
| 4 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4370
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: '###Question###:Area Units-Convert from km² to m²-\( 2 \mathrm{~km}^{2}
\) is the same as _____ \( m^{2} \)
###Correct Answer###:\( 2000000 \)
###Misconcepted Incorrect answer###:\( 2000 \)'
sentences:
- Confuses an equation with an identity
- Does not square the conversion factor when converting squared units
- Rounds to wrong degree of accuracy (decimal places rather than significant figures)
- source_sentence: '###Question###:Basic Angle Facts (straight line, opposite, around
a point, etc)-Find missing angles using angles around a point-What is the size
of angle \( x \) ? ![Angles around a point, split into 2 parts. One is labelled
310 degrees and the other x.]()
###Correct Answer###:\( 50^{\circ} \)
###Misconcepted Incorrect answer###:\( 310^{\circ} \)'
sentences:
- Believes the arrows for parallel lines mean equal length
- Rounds to the wrong degree of accuracy (rounds too little)
- Incorrectly identifies angles as vertically opposite
- source_sentence: '###Question###:BIDMAS-Use the order of operations to carry out
calculations involving addition, subtraction, multiplication, and/or division-\[
10-8 \times 7+6=
\]
Which calculation should you do first?
###Correct Answer###:\( 8 \times 7 \)
###Misconcepted Incorrect answer###:\( 7+6 \)'
sentences:
- Ignores the negative sign
- Carries out operations from right to left regardless of priority order
- In repeated percentage change, believes the second change is only a percentage
of the first change, without including the original
- source_sentence: '###Question###:Multiples and Lowest Common Multiple-Identify common
multiples of three or more numbers-Which of the following numbers is a common
multiple of \( 4,6 \) and \( 12 \) ?
###Correct Answer###:\( 12 \)
###Misconcepted Incorrect answer###:\( 2 \)'
sentences:
- Confuses factors and multiples
- 'Does not know that to factorise a quadratic expression, to find two numbers that
add to give the coefficient of the x term, and multiply to give the non variable
term
'
- Does not link Pythagoras Theorem to finding distance between two points
- source_sentence: '###Question###:Combined Events-Calculate the probability of two
independent events occurring without drawing a tree diagram-![Two spinners shown.
The first spinner has the numbers 1-4 and the second spinner has the number 1-5.]()
You spin the above fair spinners
What is the probability of getting a \( 1 \) on both spinners?
###Correct Answer###:\( \frac{1}{20} \)
###Misconcepted Incorrect answer###:\( \frac{1}{9} \)'
sentences:
- When multiplying fractions, multiplies the numerator and adds the denominator
- Does not follow the arrows through a function machine, changes the order of the
operations asked.
- Believes a curve can show a constant rate
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("angelitasr/bge-base-en-v1.5_eeid_all")
# Run inference
sentences = [
'###Question###:Combined Events-Calculate the probability of two independent events occurring without drawing a tree diagram-![Two spinners shown. The first spinner has the numbers 1-4 and the second spinner has the number 1-5.]() You spin the above fair spinners\nWhat is the probability of getting a \\( 1 \\) on both spinners?\n###Correct Answer###:\\( \\frac{1}{20} \\)\n###Misconcepted Incorrect answer###:\\( \\frac{1}{9} \\)',
'When multiplying fractions, multiplies the numerator and adds the denominator',
'Does not follow the arrows through a function machine, changes the order of the operations asked.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,370 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 60 tokens</li><li>mean: 122.91 tokens</li><li>max: 435 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.81 tokens</li><li>max: 39 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>###Question###:Simplifying Algebraic Fractions-Simplify an algebraic fraction by factorising the numerator-Simplify the following, if possible: \( \frac{m^{2}+2 m-3}{m-3} \)<br>###Correct Answer###:Does not simplify<br>###Misconcepted Incorrect answer###:\( m+1 \)</code> | <code>Does not know that to factorise a quadratic expression, to find two numbers that add to give the coefficient of the x term, and multiply to give the non variable term<br></code> |
| <code>###Question###:Range and Interquartile Range from a List of Data-Calculate the range from a list of data-Tom and Katie are discussing the \( 5 \) plants with these heights:<br>\( 24 \mathrm{~cm}, 17 \mathrm{~cm}, 42 \mathrm{~cm}, 26 \mathrm{~cm}, 13 \mathrm{~cm} \)<br>Tom says if all the plants were cut in half, the range wouldn't change.<br>Katie says if all the plants grew by \( 3 \mathrm{~cm} \) each, the range wouldn't change.<br>Who do you agree with?<br>###Correct Answer###:Only<br>Katie<br>###Misconcepted Incorrect answer###:Only<br>Tom</code> | <code>Believes if you changed all values by the same proportion the range would not change</code> |
| <code>###Question###:Properties of Quadrilaterals-Recall and use the intersecting diagonals properties of a rectangle-The angles highlighted on this rectangle with different length sides can never be... ![A rectangle with the diagonals drawn in. The angle on the right hand side at the centre is highlighted in red and the angle at the bottom at the centre is highlighted in yellow.]()<br>###Correct Answer###:\( 90^{\circ} \)<br>###Misconcepted Incorrect answer###:acute</code> | <code>Does not know the properties of a rectangle</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
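To make the setup above concrete, here is a rough sketch of how such a finetune could be reproduced with the `sentence-transformers` trainer API. The dataset rows are placeholders standing in for the real 4,370 (anchor, positive) pairs, and the arguments mirror the non-default hyperparameters listed below rather than the exact original script.

```python
from datasets import Dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Placeholder pairs; the real dataset maps a question + misconception to its description.
train_dataset = Dataset.from_dict({
    "anchor": ["###Question###: ... ###Misconcepted Incorrect answer###: ..."],
    "positive": ["Confuses factors and multiples"],
})

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
loss = MultipleNegativesRankingLoss(model)  # defaults: scale=20.0, cosine similarity

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-en-v1.5_eeid_all",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
)
trainer = SentenceTransformerTrainer(model=model, args=args,
                                     train_dataset=train_dataset, loss=loss)
trainer.train()
```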
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 10
- `fp16`: True
- `push_to_hub`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.9141 | 500 | 0.3742 |
| 1.8282 | 1000 | 0.1576 |
| 2.7422 | 1500 | 0.0786 |
| 3.6563 | 2000 | 0.037 |
| 4.5704 | 2500 | 0.0239 |
| 5.4845 | 3000 | 0.0153 |
| 6.3985 | 3500 | 0.0087 |
| 7.3126 | 4000 | 0.0046 |
| 8.2267 | 4500 | 0.0043 |
| 9.1408 | 5000 | 0.003 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("angelitasr/bge-base-en-v1.5_eeid_all")
# Run inference
sentences = [
'###Question###:Combined Events-Calculate the probability of two independent events occurring without drawing a tree diagram-![Two spinners shown. The first spinner has the numbers 1-4 and the second spinner has the number 1-5.]() You spin the above fair spinners\nWhat is the probability of getting a \\( 1 \\) on both spinners?\n###Correct Answer###:\\( \\frac{1}{20} \\)\n###Misconcepted Incorrect answer###:\\( \\frac{1}{9} \\)',
'When multiplying fractions, multiplies the numerator and adds the denominator',
'Does not follow the arrows through a function machine, changes the order of the operations asked.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,370 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 60 tokens</li><li>mean: 122.91 tokens</li><li>max: 435 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.81 tokens</li><li>max: 39 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>###Question###:Simplifying Algebraic Fractions-Simplify an algebraic fraction by factorising the numerator-Simplify the following, if possible: \( \frac{m^{2}+2 m-3}{m-3} \)<br>###Correct Answer###:Does not simplify<br>###Misconcepted Incorrect answer###:\( m+1 \)</code> | <code>Does not know that to factorise a quadratic expression, to find two numbers that add to give the coefficient of the x term, and multiply to give the non variable term<br></code> |
| <code>###Question###:Range and Interquartile Range from a List of Data-Calculate the range from a list of data-Tom and Katie are discussing the \( 5 \) plants with these heights:<br>\( 24 \mathrm{~cm}, 17 \mathrm{~cm}, 42 \mathrm{~cm}, 26 \mathrm{~cm}, 13 \mathrm{~cm} \)<br>Tom says if all the plants were cut in half, the range wouldn't change.<br>Katie says if all the plants grew by \( 3 \mathrm{~cm} \) each, the range wouldn't change.<br>Who do you agree with?<br>###Correct Answer###:Only<br>Katie<br>###Misconcepted Incorrect answer###:Only<br>Tom</code> | <code>Believes if you changed all values by the same proportion the range would not change</code> |
| <code>###Question###:Properties of Quadrilaterals-Recall and use the intersecting diagonals properties of a rectangle-The angles highlighted on this rectangle with different length sides can never be... ![A rectangle with the diagonals drawn in. The angle on the right hand side at the centre is highlighted in red and the angle at the bottom at the centre is highlighted in yellow.]()<br>###Correct Answer###:\( 90^{\circ} \)<br>###Misconcepted Incorrect answer###:acute</code> | <code>Does not know the properties of a rectangle</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 10
- `fp16`: True
- `push_to_hub`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.9141 | 500 | 0.3742 |
| 1.8282 | 1000 | 0.1576 |
| 2.7422 | 1500 | 0.0786 |
| 3.6563 | 2000 | 0.037 |
| 4.5704 | 2500 | 0.0239 |
| 5.4845 | 3000 | 0.0153 |
| 6.3985 | 3500 | 0.0087 |
| 7.3126 | 4000 | 0.0046 |
| 8.2267 | 4500 | 0.0043 |
| 9.1408 | 5000 | 0.003 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-base-en-v1.5", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4370", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "###Question###:Area Units-Convert from km² to m²-\\( 2 \\mathrm{~km}^{2} \\) is the same as _____ \\( m^{2} \\)\n###Correct Answer###:\\( 2000000 \\)\n###Misconcepted Incorrect answer###:\\( 2000 \\)", "sentences": ["Confuses an equation with an identity", "Does not square the conversion factor when converting squared units", "Rounds to wrong degree of accuracy (decimal places rather than significant figures)"]}, {"source_sentence": "###Question###:Basic Angle Facts (straight line, opposite, around a point, etc)-Find missing angles using angles around a point-What is the size of angle \\( x \\) ? ![Angles around a point, split into 2 parts. One is labelled 310 degrees and the other x.]()\n###Correct Answer###:\\( 50^{\\circ} \\)\n###Misconcepted Incorrect answer###:\\( 310^{\\circ} \\)", "sentences": ["Believes the arrows for parallel lines mean equal length", "Rounds to the wrong degree of accuracy (rounds too little)", "Incorrectly identifies angles as vertically opposite"]}, {"source_sentence": "###Question###:BIDMAS-Use the order of operations to carry out calculations involving addition, subtraction, multiplication, and/or division-\\[\n10-8 \\times 7+6=\n\\]\n\nWhich calculation should you do first?\n###Correct Answer###:\\( 8 \\times 7 \\)\n###Misconcepted Incorrect answer###:\\( 7+6 \\)", "sentences": ["Ignores the negative sign", "Carries out operations from right to left regardless of priority order", "In repeated percentage change, believes the second change is only a percentage of the first change, without including the original"]}, {"source_sentence": "###Question###:Multiples and Lowest Common Multiple-Identify common multiples of three or more numbers-Which of the following numbers is a common multiple of \\( 4,6 \\) and \\( 12 \\) ?\n###Correct Answer###:\\( 12 \\)\n###Misconcepted Incorrect answer###:\\( 2 \\)", "sentences": ["Confuses factors and multiples", "Does not know that to factorise a quadratic expression, to find two numbers that add to give the coefficient of the x term, and multiply to give the non variable term\n", "Does not link Pythagoras Theorem to finding distance between two points"]}, {"source_sentence": "###Question###:Combined Events-Calculate the probability of two independent events occurring without drawing a tree diagram-![Two spinners shown. The first spinner has the numbers 1-4 and the second spinner has the number 1-5.]() You spin the above fair spinners\nWhat is the probability of getting a \\( 1 \\) on both spinners?\n###Correct Answer###:\\( \\frac{1}{20} \\)\n###Misconcepted Incorrect answer###:\\( \\frac{1}{9} \\)", "sentences": ["When multiplying fractions, multiplies the numerator and adds the denominator", "Does not follow the arrows through a function machine, changes the order of the operations asked.", "Believes a curve can show a constant rate"]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,498 |
OpenLLM-France/Lucie-7B-optimizer-states
|
OpenLLM-France
|
text-generation
|
[
"llama",
"pretrained",
"llama-3",
"openllm-france",
"text-generation",
"fr",
"en",
"it",
"de",
"es",
"dataset:OpenLLM-France/Lucie-Training-Dataset",
"license:apache-2.0",
"region:us"
] | 2024-10-14T11:50:50Z |
2025-01-14T09:21:04+00:00
| 9 | 0 |
---
datasets:
- OpenLLM-France/Lucie-Training-Dataset
language:
- fr
- en
- it
- de
- es
license: apache-2.0
pipeline_tag: text-generation
tags:
- pretrained
- llama-3
- openllm-france
widget:
- text: 'Quelle est la capitale de l''Espagne ? Madrid.
Quelle est la capitale de la France ?'
example_title: Capital cities in French
group: 1-shot Question Answering
---
# Model Card
This repository contains universal checkpoints in DeepSpeed format for the [Lucie-7B model](https://huggingface.co/OpenLLM-France/Lucie-7B),
which was trained using [this repository of code](https://github.com/OpenLLM-France/Lucie-Training)
based on [a fork of `Megatron-Deepspeed`](https://github.com/OpenLLM-France/Megatron-DeepSpeed).
Each checkpoint is stored in a sub-branch (revision) whose name specifies the number of training steps.
For instance, `step0400000` corresponds to the checkpoint after 400,000 training steps.
These checkpoints are provided so that training of the model can be resumed from a given point.
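A minimal sketch for fetching one of these revisions with `huggingface_hub` is shown below; the revision name is an example taken from the step-numbering scheme described above.

```python
from huggingface_hub import snapshot_download

# Download the DeepSpeed universal checkpoint stored on a given branch (revision).
local_dir = snapshot_download(
    repo_id="OpenLLM-France/Lucie-7B-optimizer-states",
    revision="step0400000",  # branch name encodes the training step count
)
print(local_dir)  # path to the checkpoint files used to resume training
```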
## Contact
[email protected]
| null |
Non_BioNLP
|
# Model Card
This repository contains universal checkpoints in DeepSpeed format for the [Lucie-7B model](https://huggingface.co/OpenLLM-France/Lucie-7B),
which was trained using [this repository of code](https://github.com/OpenLLM-France/Lucie-Training)
based on [a fork of `Megatron-Deepspeed`](https://github.com/OpenLLM-France/Megatron-DeepSpeed).
Each checkpoint is stored in a sub-branch (revision) whose name specifies the number of training steps.
For instance, `step0400000` corresponds to the checkpoint after 400,000 training steps.
These checkpoints are provided so that training of the model can be resumed from a given point.
## Contact
[email protected]
|
{"datasets": ["OpenLLM-France/Lucie-Training-Dataset"], "language": ["fr", "en", "it", "de", "es"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["pretrained", "llama-3", "openllm-france"], "widget": [{"text": "Quelle est la capitale de l'Espagne ? Madrid.\nQuelle est la capitale de la France ?", "example_title": "Capital cities in French", "group": "1-shot Question Answering"}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 43,499 |
Jayicebear/mt5-small-finetuned-amazon-en-es
|
Jayicebear
|
summarization
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-20T07:03:34Z |
2023-09-20T08:47:17+00:00
| 21 | 0 |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 22.4140
- eval_rouge1: 0.1884
- eval_rouge2: 0.0
- eval_rougeL: 0.1884
- eval_rougeLsum: 0.1884
- eval_runtime: 6.4772
- eval_samples_per_second: 36.744
- eval_steps_per_second: 1.235
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
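For reference, the hyperparameters above roughly correspond to the following `Seq2SeqTrainingArguments`; this is an illustrative reconstruction, not the original training script, and the dataset/model wiring is omitted.

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-finetuned-amazon-en-es",
    learning_rate=5.6e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=8,
    lr_scheduler_type="linear",
    predict_with_generate=True,  # needed to compute ROUGE during evaluation
)
```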
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 22.4140
- eval_rouge1: 0.1884
- eval_rouge2: 0.0
- eval_rougeL: 0.1884
- eval_rougeLsum: 0.1884
- eval_runtime: 6.4772
- eval_samples_per_second: 36.744
- eval_steps_per_second: 1.235
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 43,500 |
RichardErkhov/firqaaa_-_indo-gemma-2b-alpaca-exl2
|
RichardErkhov
| null |
[
"region:us"
] | 2025-01-18T11:24:07Z |
2025-01-18T11:24:08+00:00
| 0 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
indo-gemma-2b-alpaca - EXL2
- Model creator: https://huggingface.co/firqaaa/
- Original model: https://huggingface.co/firqaaa/indo-gemma-2b-alpaca/
## Available sizes
| Branch | Bits | Description |
| ----- | ---- | ------------ |
| [8_0](https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2/tree/8_0) | 8.0 | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2/tree/6_5) | 6.5 | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2/tree/5_0) | 5.0 | Slightly lower quality vs 6.5, but usable |
| [4_25](https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2/tree/4_25) | 4.25 | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2/tree/3_5) | 3.5 | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2 indo-gemma-2b-alpaca-6_5
```
With huggingface hub:
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download firqaaa_-_indo-gemma-2b-alpaca-exl2 --revision 6_5 --local-dir indo-gemma-2b-alpaca-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download firqaaa_-_indo-gemma-2b-alpaca-exl2 --revision 6_5 --local-dir indo-gemma-2b-alpaca-6.5 --local-dir-use-symlinks False
```
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
pipeline_tag: text-generation
---
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Context Length
Models are trained on a context length of 8192 tokens.
### How to use
```python
# Prompt
alpaca_prompt = """Di bawah ini adalah instruksi yang menjelaskan tugas, dipasangkan dengan masukan yang memberikan konteks lebih lanjut. Tulis tanggapan yang melengkapi instruksi dengan tepat.
### Instruksi:
{}
### Masukan:
{}
### Tanggapan:
{}"""
max_seq_length = 4096 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
if True:
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "indo-gemma-2b-alpaca",
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit
    )
    FastLanguageModel.for_inference(model) # Enable native 2x faster inference

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Sebutkan langkah-langkah membuat nasi goreng!",
            "", # input
            "", # output - leave this blank for generation!
        )
    ], return_tensors = "pt"
).to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 256)
```
### Uploaded model
- **Developed by:** firqaaa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
indo-gemma-2b-alpaca - EXL2
- Model creator: https://huggingface.co/firqaaa/
- Original model: https://huggingface.co/firqaaa/indo-gemma-2b-alpaca/
## Available sizes
| Branch | Bits | Description |
| ----- | ---- | ------------ |
| [8_0](https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2/tree/8_0) | 8.0 | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2/tree/6_5) | 6.5 | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2/tree/5_0) | 5.0 | Slightly lower quality vs 6.5, but usable |
| [4_25](https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2/tree/4_25) | 4.25 | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2/tree/3_5) | 3.5 | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/firqaaa_-_indo-gemma-2b-alpaca-exl2 indo-gemma-2b-alpaca-6_5
```
With huggingface hub:
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download firqaaa_-_indo-gemma-2b-alpaca-exl2 --revision 6_5 --local-dir indo-gemma-2b-alpaca-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download firqaaa_-_indo-gemma-2b-alpaca-exl2 --revision 6_5 --local-dir indo-gemma-2b-alpaca-6.5 --local-dir-use-symlinks False
```
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
pipeline_tag: text-generation
---
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Context Length
Models are trained on a context length of 8192 tokens.
### How to use
```python
# Prompt
alpaca_prompt = """Di bawah ini adalah instruksi yang menjelaskan tugas, dipasangkan dengan masukan yang memberikan konteks lebih lanjut. Tulis tanggapan yang melengkapi instruksi dengan tepat.
### Instruksi:
{}
### Masukan:
{}
### Tanggapan:
{}"""
max_seq_length = 4096 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
if True:
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "indo-gemma-2b-alpaca",
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit
    )
    FastLanguageModel.for_inference(model) # Enable native 2x faster inference

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Sebutkan langkah-langkah membuat nasi goreng!",
            "", # input
            "", # output - leave this blank for generation!
        )
    ], return_tensors = "pt"
).to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 256)
```
### Uploaded model
- **Developed by:** firqaaa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
|
{}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 43,501 |
BeardedMonster/SabiYarn-125M-translate
|
BeardedMonster
|
text-generation
|
[
"transformers",
"safetensors",
"nanogpt-j",
"text-generation",
"custom_code",
"autotrain_compatible",
"region:us"
] | 2024-08-15T11:30:31Z |
2024-11-04T19:20:47+00:00
| 94 | 1 |
---
library_name: transformers
tags: []
---
# SabiYarn
Test the whole generation capabilities here: https://huggingface.co/spaces/BeardedMonster/SabiYarn_125M
Pretrained model on Nigerian languages including English using a causal language modeling (CLM) Multi-task objective.
## Model Details
### Model Description
SabiYarn-125M is the first of a series of transformer models (adapted from nanoGPT and inspired by GPT-J's architecture) pretrained on a large corpus of Nigerian language data in a self-supervised fashion. This means it was pretrained on raw texts only,
with no human labelling of any kind (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences:
inputs are sequences of continuous text of a certain length, and the targets are the same sequences shifted one token (word or piece of word) to the right.
The model internally uses a masking mechanism to make sure the predictions for token i use only the inputs from 1 to i and not the future tokens. It also makes sure attention
is not calculated across documents.
This way, the model learns an inner representation of the languages that can then be used to extract features useful for downstream tasks. The model is, however, best at what
it was pretrained for, which is generating coherent text.
This is the smallest version, with 125M parameters.
- **Developed by:** Aletheia.ai Research Lab
- **Funded by [optional]:** Personal
- **Shared by [optional]:** Jeffreypaul
- **Model type:** GPTJX (Adopted from NanoGPT)
- **Language(s) (NLP):** Mainly English, Yoruba, Hausa, Igbo and Pidgin, with some others: Fulah/Fulfulde, Efik, Urhobo.
### Model Sources [optional]
- **Demo:** https://huggingface.co/spaces/BeardedMonster/SabiYarn_125M
## Uses
You can use the raw model for text generation or fine-tune it to a downstream task.
## Bias, Risks, and Limitations
The training data used for this model is mostly an aggregation of data available on Hugging Face for Nigerian languages. We know it contains a lot of unfiltered content from the internet, which is far from neutral.
Because large-scale language models of this size do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
Additionally, language models often reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import GenerationConfig
generation_config = GenerationConfig(
max_length=100, # Adjust this based on your translation requirements
max_new_tokens=50, # Ensure sufficient tokens for your translations
num_beams=5, # Moderate number of beams for a balance between speed and quality
do_sample=False, # Disable sampling to make output deterministic
temperature=1.0, # Neutral temperature since sampling is off
top_k=0, # Disable top-k sampling (since sampling is off)
top_p=0, # Disable top-p (nucleus) sampling (since sampling is off)
repetition_penalty=4.0, # Neutral repetition penalty for translation
length_penalty=3.0, # No penalty for sequence length; modify if your translations tend to be too short/long
early_stopping=True # Stop early when all beams finish to speed up generation
)
repo_name = "BeardedMonster/SabiYarn-125M-translate"
tokenizer_name = "BeardedMonster/SabiYarn-125M"
model = AutoModelForCausalLM.from_pretrained(repo_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, trust_remote_code=True)
```

Use the following tags for the following downstream tasks:
- Translation
```python
<translate> <yor>, <translate> .... <ibo>, <translate> ... <hau>, <translate> .... <efi>, <translate> .... <pcm>, <translate> ..... <urh>
```
You should typically put the user's input between these two tags. Currently, the model also doesn't perform very well on NER due to the scarce data available for this task.
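As an illustration, here is a minimal sketch of a full translation call that combines the tag format above with the `generation_config` defined earlier. The exact tag spelling and language codes follow the list above; treat this as a sketch rather than a reference implementation.
```python
# Minimal sketch: translate an English sentence to Yoruba using the tag format shown above.
input_text = "<translate> Good morning, how are you? <yor>"
inputs = tokenizer(input_text, return_tensors="pt")

output_ids = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```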
### Model Architecture and Objective
Architecture is very similar to GPT-J
| null |
Non_BioNLP
|
# SabiYarn
Test the whole generation capabilities here: https://huggingface.co/spaces/BeardedMonster/SabiYarn_125M
A model pretrained on Nigerian languages, including English, using a causal language modeling (CLM) multi-task objective.
## Model Details
### Model Description
SabiYarn-125M is the first of a series of transformer models (adapted from nanoGPT and inspired by GPT-J's architecture) pretrained on a large corpus of Nigerian language data in a self-supervised fashion. This means it was pretrained on raw texts only,
with no human labelling of any kind (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences:
inputs are sequences of continuous text of a certain length, and the targets are the same sequences shifted one token (word or piece of word) to the right.
The model internally uses a masking mechanism to make sure the predictions for token i use only the inputs from 1 to i and not the future tokens. It also makes sure attention
is not calculated across documents.
This way, the model learns an inner representation of the languages that can then be used to extract features useful for downstream tasks. The model is, however, best at what
it was pretrained for, which is generating coherent text.
This is the smallest version, with 125M parameters.
- **Developed by:** Aletheia.ai Research Lab
- **Funded by [optional]:** Personal
- **Shared by [optional]:** Jeffreypaul
- **Model type:** GPTJX (Adopted from NanoGPT)
- **Language(s) (NLP):** Mainly English, Yoruba, Hausa, Igbo and Pidgin, with some others: Fulah/Fulfulde, Efik, Urhobo.
### Model Sources [optional]
- **Demo:** https://huggingface.co/spaces/BeardedMonster/SabiYarn_125M
## Uses
You can use the raw model for text generation or fine-tune it to a downstream task.
## Bias, Risks, and Limitations
The training data used for this model is mostly an aggregation of data available on Hugging Face for Nigerian languages. We know it contains a lot of unfiltered content from the internet, which is far from neutral.
Because large-scale language models of this size do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
Additionally, language models often reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import GenerationConfig
generation_config = GenerationConfig(
max_length=100, # Adjust this based on your translation requirements
max_new_tokens=50, # Ensure sufficient tokens for your translations
num_beams=5, # Moderate number of beams for a balance between speed and quality
do_sample=False, # Disable sampling to make output deterministic
temperature=1.0, # Neutral temperature since sampling is off
top_k=0, # Disable top-k sampling (since sampling is off)
top_p=0, # Disable top-p (nucleus) sampling (since sampling is off)
repetition_penalty=4.0, # Neutral repetition penalty for translation
length_penalty=3.0, # No penalty for sequence length; modify if your translations tend to be too short/long
early_stopping=True # Stop early when all beams finish to speed up generation
)
repo_name = "BeardedMonster/SabiYarn-125M-translate"
tokenizer_name = "BeardedMonster/SabiYarn-125M"
model = AutoModelForCausalLM.from_pretrained(repo_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, trust_remote_code=True)
```

Use the following tags for the following downstream tasks:
- Translation
```python
<translate> <yor>, <translate> .... <ibo>, <translate> ... <hau>, <translate> .... <efi>, <translate> .... <pcm>, <translate> ..... <urh>
```
You should typically put the user's input between these two tags. Currently, the model also doesn't perform very well on NER due to the scarce data available for this task.
### Model Architecture and Objective
Architecture is very similar to GPT-J
|
{"library_name": "transformers", "tags": []}
|
task
|
[
"TRANSLATION"
] | 43,502 |
MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli
|
MoritzLaurer
|
zero-shot-classification
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"nli",
"multilingual",
"en",
"ar",
"bg",
"de",
"el",
"es",
"fr",
"hi",
"ru",
"sw",
"th",
"tr",
"ur",
"vi",
"zh",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:2002.10957",
"arxiv:1809.05053",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-11T13:10:37Z |
2024-04-22T11:03:52+00:00
| 3,251 | 38 |
---
datasets:
- multi_nli
- xnli
language:
- multilingual
- en
- ar
- bg
- de
- el
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
license: mit
metrics:
- accuracy
pipeline_tag: zero-shot-classification
tags:
- zero-shot-classification
- text-classification
- nli
- pytorch
widget:
- text: Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU
candidate_labels: politics, economy, entertainment, environment
---
---
# Multilingual MiniLMv2-L6-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 100+ languages and is therefore also
suitable for multilingual zero-shot classification. The underlying multilingual-MiniLM-L6 model was created
by Microsoft and was distilled from XLM-RoBERTa-large (see details [in the original paper](https://arxiv.org/pdf/2002.10957.pdf)
and newer information in [this repo](https://github.com/microsoft/unilm/tree/master/minilm)).
The model was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages,
as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
The main advantage of distilled models is that they are smaller (faster inference, lower memory requirements) than their teachers (XLM-RoBERTa-large).
The disadvantage is that they lose some of the performance of their larger teachers.
For highest inference speed, I recommend using this 6-layer model. For higher performance I recommend
[mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) (as of 14.02.2023).
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli")
sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs
premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on the XNLI development dataset and the MNLI train dataset.
The XNLI development set consists of 2490 professionally translated texts from English
to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)).
Note that the XNLI contains a training set of 15 machine translated versions of the MNLI dataset for 15 languages,
but due to quality issues with these machine translations, this model was only trained on the professional translations
from the XNLI development set and the original English MNLI training set (392 702 texts).
Not using machine-translated texts avoids overfitting the model to the 15 languages,
avoids catastrophic forgetting of the other languages the model was pre-trained on,
and significantly reduces training costs.
### Training procedure
The model was trained using the Hugging Face trainer with the following hyperparameters.
The exact underlying model is [mMiniLMv2-L6-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large).
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=4e-05,
per_device_train_batch_size=64, # batch size per device during training
per_device_eval_batch_size=120, # batch size for evaluation
warmup_ratio=0.06,              # fraction of training steps used for learning-rate warmup
weight_decay=0.01, # strength of weight decay
)
```
### Eval results
The model was evaluated on the XNLI test set on 15 languages (5010 texts per language, 75150 in total).
Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data
in the specific language (cross-lingual transfer). This means that the model is also able to do NLI on
the other languages it was pre-trained on, but performance is most likely lower than for the languages available in XNLI.
The average XNLI performance of multilingual-MiniLM-L6 reported in the paper is 0.68 ([see table 11](https://arxiv.org/pdf/2002.10957.pdf)).
This reimplementation has an average performance of 0.713.
This increase in performance is probably due to the addition of MNLI to the training data and to the fact that this model was distilled from
XLM-RoBERTa-large rather than XLM-RoBERTa-base (as multilingual-MiniLM-L6-v2 was).
|Datasets|avg_xnli|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.713|0.687|0.742|0.719|0.723|0.789|0.748|0.741|0.691|0.714|0.642|0.699|0.696|0.664|0.723|0.721|
|Speed text/sec (A100 GPU, eval_batch=120)|6093.0|6210.0|6003.0|6053.0|5409.0|6531.0|6205.0|5615.0|5734.0|5970.0|6219.0|6289.0|6533.0|5851.0|5970.0|6798.0|
|Datasets|mnli_m|mnli_mm|
| :---: | :---: | :---: |
|Accuracy|0.782|0.8|
|Speed text/sec (A100 GPU, eval_batch=120)|4430.0|4395.0|
## Limitations and bias
Please consult the original paper and literature on different NLI datasets for potential biases.
## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022.
‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’.
Preprint, June. Open Science Framework. https://osf.io/74b8k.
## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
| null |
Non_BioNLP
|
---
# Multilingual MiniLMv2-L6-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 100+ languages and is therefore also
suitable for multilingual zero-shot classification. The underlying multilingual-MiniLM-L6 model was created
by Microsoft and was distilled from XLM-RoBERTa-large (see details [in the original paper](https://arxiv.org/pdf/2002.10957.pdf)
and newer information in [this repo](https://github.com/microsoft/unilm/tree/master/minilm)).
The model was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages,
as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
The main advantage of distilled models is that they are smaller (faster inference, lower memory requirements) than their teachers (XLM-RoBERTa-large).
The disadvantage is that they lose some of the performance of their larger teachers.
For highest inference speed, I recommend using this 6-layer model. For higher performance I recommend
[mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) (as of 14.02.2023).
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli")
sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs
premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on the XNLI development dataset and the MNLI train dataset.
The XNLI development set consists of 2490 professionally translated texts from English
to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)).
Note that the XNLI contains a training set of 15 machine translated versions of the MNLI dataset for 15 languages,
but due to quality issues with these machine translations, this model was only trained on the professional translations
from the XNLI development set and the original English MNLI training set (392 702 texts).
Not using machine-translated texts avoids overfitting the model to the 15 languages,
avoids catastrophic forgetting of the other languages the model was pre-trained on,
and significantly reduces training costs.
### Training procedure
The model was trained using the Hugging Face trainer with the following hyperparameters.
The exact underlying model is [mMiniLMv2-L6-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large).
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=4e-05,
per_device_train_batch_size=64, # batch size per device during training
per_device_eval_batch_size=120, # batch size for evaluation
warmup_ratio=0.06,              # fraction of training steps used for learning-rate warmup
weight_decay=0.01, # strength of weight decay
)
```
### Eval results
The model was evaluated on the XNLI test set on 15 languages (5010 texts per language, 75150 in total).
Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data
in the specific language (cross-lingual transfer). This means that the model is also able to do NLI on
the other languages it was pre-trained on, but performance is most likely lower than for the languages available in XNLI.
The average XNLI performance of multilingual-MiniLM-L6 reported in the paper is 0.68 ([see table 11](https://arxiv.org/pdf/2002.10957.pdf)).
This reimplementation has an average performance of 0.713.
This increase in performance is probably due to the addition of MNLI to the training data and to the fact that this model was distilled from
XLM-RoBERTa-large rather than XLM-RoBERTa-base (as multilingual-MiniLM-L6-v2 was).
|Datasets|avg_xnli|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.713|0.687|0.742|0.719|0.723|0.789|0.748|0.741|0.691|0.714|0.642|0.699|0.696|0.664|0.723|0.721|
|Speed text/sec (A100 GPU, eval_batch=120)|6093.0|6210.0|6003.0|6053.0|5409.0|6531.0|6205.0|5615.0|5734.0|5970.0|6219.0|6289.0|6533.0|5851.0|5970.0|6798.0|
|Datasets|mnli_m|mnli_mm|
| :---: | :---: | :---: |
|Accuracy|0.782|0.8|
|Speed text/sec (A100 GPU, eval_batch=120)|4430.0|4395.0|
## Limitations and bias
Please consult the original paper and literature on different NLI datasets for potential biases.
## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022.
‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’.
Preprint, June. Open Science Framework. https://osf.io/74b8k.
## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
|
{"datasets": ["multi_nli", "xnli"], "language": ["multilingual", "en", "ar", "bg", "de", "el", "es", "fr", "hi", "ru", "sw", "th", "tr", "ur", "vi", "zh"], "license": "mit", "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification", "tags": ["zero-shot-classification", "text-classification", "nli", "pytorch"], "widget": [{"text": "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU", "candidate_labels": "politics, economy, entertainment, environment"}]}
|
task
|
[
"TRANSLATION"
] | 43,503 |
mdizak/all-roberta-large-v1-rust
|
mdizak
|
sentence-similarity
|
[
"sentence-transformers",
"pytorch",
"rust",
"safetensors",
"roberta",
"fill-mask",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-02T11:13:46Z |
2024-04-02T14:16:06+00:00
| 6 | 0 |
---
language: en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# all-roberta-large-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-roberta-large-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-roberta-large-v1')
model = AutoModel.from_pretrained('sentence-transformers/all-roberta-large-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-roberta-large-v1)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`roberta-large`](https://huggingface.co/roberta-large) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as help from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 128 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`roberta-large`](https://huggingface.co/roberta-large). Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
#### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 400k steps using a batch size of 256 (32 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,124,818,467** |
| null |
Non_BioNLP
|
# all-roberta-large-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-roberta-large-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-roberta-large-v1')
model = AutoModel.from_pretrained('sentence-transformers/all-roberta-large-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-roberta-large-v1)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`roberta-large`](https://huggingface.co/roberta-large) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as help from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 128 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`roberta-large`](https://huggingface.co/roberta-large). Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
#### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 400k steps using a batch size of 256 (32 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,124,818,467** |
|
{"language": "en", "library_name": "sentence-transformers", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 43,504 |
niltonseixas/summarizer
|
niltonseixas
| null |
[
"adapter-transformers",
"pytorch",
"t5",
"pt",
"region:us"
] | 2024-01-29T17:47:01Z |
2024-11-17T18:42:34+00:00
| 0 | 0 |
---
language:
- pt
library_name: adapter-transformers
---
# Summarizer
<!-- Provide a quick summary of what the model is/does. -->
This model aims to summarize general texts. It is intended for study purposes only.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Nilton Seixas]
- **Language(s) (NLP):** [English]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [t5-small]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [niltonseixas/summarizer]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Example
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="niltonseixas/summarizer")

text = """summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs,
and energy costs. It's the most aggressive action on tackling the climate crisis in American history,
which will lift up American workers and create good-paying, union jobs across the country.
It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share.
And no one making under $400,000 per year will pay a penny more in taxes."""

summarizer(text)
```
| null |
Non_BioNLP
|
# Summarizer
<!-- Provide a quick summary of what the model is/does. -->
This model aims to summarize general texts. It is intended for study purposes only.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Nilton Seixas]
- **Language(s) (NLP):** [English]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [t5-small]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [niltonseixas/summarizer]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Example
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="niltonseixas/summarizer")

text = """summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs,
and energy costs. It's the most aggressive action on tackling the climate crisis in American history,
which will lift up American workers and create good-paying, union jobs across the country.
It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share.
And no one making under $400,000 per year will pay a penny more in taxes."""

summarizer(text)
```
|
{"language": ["pt"], "library_name": "adapter-transformers"}
|
task
|
[
"SUMMARIZATION"
] | 43,505 |
prithivMLmods/Acrux-500M-o1-Journey
|
prithivMLmods
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"Qwen2.5",
"Llama-Cpp",
"CoT",
"o1-journey",
"text-generation-inference",
"Ollama",
"conversational",
"en",
"dataset:GAIR/o1-journey",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-07T20:54:42Z |
2024-12-20T01:56:23+00:00
| 68 | 2 |
---
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
datasets:
- GAIR/o1-journey
language:
- en
library_name: transformers
license: creativeml-openrail-m
pipeline_tag: text-generation
tags:
- Qwen2.5
- Llama-Cpp
- CoT
- o1-journey
- text-generation-inference
- safetensors
- Ollama
---
### Acrux-500M-o1-Journey Model Files
The **Acrux-500M-o1-Journey** is a lightweight, instruction-tuned language model fine-tuned from the **Qwen2.5-0.5B-Instruct** base model. With a size of 500 million parameters, it is designed for **cost-effective deployment** and **fast text generation** while maintaining quality performance for instruction-following tasks.
| **File Name** | **Size** | **Description** | **Upload Status** |
|----------------------------|----------------|-------------------------------------------|--------------------|
| `.gitattributes` | 1.57 kB | Git attributes for managing LFS files. | Uploaded |
| `README.md` | 195 Bytes | Model overview or documentation. | Updated |
| `added_tokens.json` | 657 Bytes | Custom tokens for the tokenizer. | Uploaded |
| `config.json` | 859 Bytes | Model configuration file. | Uploaded |
| `generation_config.json` | 280 Bytes | Configuration for text generation. | Uploaded |
| `merges.txt` | 1.82 MB | Merge rules for byte-pair encoding (BPE). | Uploaded |
| `pytorch_model.bin` | 988 MB | Model weights (PyTorch format). | Uploaded (LFS) |
| `special_tokens_map.json` | 644 Bytes | Mapping for special tokens. | Uploaded |
| `tokenizer.json` | 11.4 MB | Full tokenizer configuration. | Uploaded (LFS) |
| `tokenizer_config.json` | 7.73 kB | Additional tokenizer settings. | Uploaded |
| `vocab.json` | 2.78 MB | Vocabulary for the tokenizer. | Uploaded |
### **Key Features:**
1. **Compact Size with Efficient Performance:**
The smaller parameter count (500M) ensures faster inference and reduced hardware requirements.
2. **Instruction Optimization:**
Fine-tuned to follow prompts effectively, making it suitable for interactive applications and prompt-based tasks.
3. **Domain-Specific Training:**
Trained on the **GAIR/o1-journey** dataset, providing tailored capabilities for specific use cases.
---
### **Training Details:**
- **Base Model:** [Qwen2.5-0.5B-Instruct](#)
- **Dataset Used for Fine-Tuning:** [GAIR/o1-journey](#)
- A compact dataset focusing on instruction-driven generation with 1.42k samples.
---
### **Capabilities:**
1. **Instruction Following:**
- Generates accurate and coherent responses to user instructions.
- Handles summarization, question-answering, and conversational tasks.
2. **Fast Inference:**
- Ideal for real-time applications due to reduced latency from its smaller size.
3. **Interactive AI Development:**
- Suitable for chatbots, virtual assistants, and instructional interfaces.
---
### **Usage Instructions:**
1. **Setup:**
Download all model files, ensuring compatibility with the Hugging Face Transformers library.
2. **Loading the Model:**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Acrux-500M-o1-Journey"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
3. **Sample Text Generation:**
```python
input_text = "Explain the concept of machine learning in simple terms."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
4. **Optimize Generation:**
Adjust parameters in `generation_config.json` for better control of output, such as:
- `temperature` for randomness.
- `top_p` for sampling diversity.
- `max_length` for output size.
---
| null |
Non_BioNLP
|
### Acrux-500M-o1-Journey Model Files
The **Acrux-500M-o1-Journey** is a lightweight, instruction-tuned language model fine-tuned from the **Qwen2.5-0.5B-Instruct** base model. With a size of 500 million parameters, it is designed for **cost-effective deployment** and **fast text generation** while maintaining quality performance for instruction-following tasks.
| **File Name** | **Size** | **Description** | **Upload Status** |
|----------------------------|----------------|-------------------------------------------|--------------------|
| `.gitattributes` | 1.57 kB | Git attributes for managing LFS files. | Uploaded |
| `README.md` | 195 Bytes | Model overview or documentation. | Updated |
| `added_tokens.json` | 657 Bytes | Custom tokens for the tokenizer. | Uploaded |
| `config.json` | 859 Bytes | Model configuration file. | Uploaded |
| `generation_config.json` | 280 Bytes | Configuration for text generation. | Uploaded |
| `merges.txt` | 1.82 MB | Merge rules for byte-pair encoding (BPE). | Uploaded |
| `pytorch_model.bin` | 988 MB | Model weights (PyTorch format). | Uploaded (LFS) |
| `special_tokens_map.json` | 644 Bytes | Mapping for special tokens. | Uploaded |
| `tokenizer.json` | 11.4 MB | Full tokenizer configuration. | Uploaded (LFS) |
| `tokenizer_config.json` | 7.73 kB | Additional tokenizer settings. | Uploaded |
| `vocab.json` | 2.78 MB | Vocabulary for the tokenizer. | Uploaded |
### **Key Features:**
1. **Compact Size with Efficient Performance:**
The smaller parameter count (500M) ensures faster inference and reduced hardware requirements.
2. **Instruction Optimization:**
Fine-tuned to follow prompts effectively, making it suitable for interactive applications and prompt-based tasks.
3. **Domain-Specific Training:**
Trained on the **GAIR/o1-journey** dataset, providing tailored capabilities for specific use cases.
---
### **Training Details:**
- **Base Model:** [Qwen2.5-0.5B-Instruct](#)
- **Dataset Used for Fine-Tuning:** [GAIR/o1-journey](#)
- A compact dataset focusing on instruction-driven generation with 1.42k samples.
---
### **Capabilities:**
1. **Instruction Following:**
- Generates accurate and coherent responses to user instructions.
- Handles summarization, question-answering, and conversational tasks.
2. **Fast Inference:**
- Ideal for real-time applications due to reduced latency from its smaller size.
3. **Interactive AI Development:**
- Suitable for chatbots, virtual assistants, and instructional interfaces.
---
### **Usage Instructions:**
1. **Setup:**
Download all model files, ensuring compatibility with the Hugging Face Transformers library.
2. **Loading the Model:**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Acrux-500M-o1-Journey"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
3. **Sample Text Generation:**
```python
input_text = "Explain the concept of machine learning in simple terms."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
4. **Optimize Generation:**
Adjust parameters in `generation_config.json` for better control of output, such as:
- `temperature` for randomness.
- `top_p` for sampling diversity.
- `max_length` for output size.
---
|
{"base_model": ["Qwen/Qwen2.5-0.5B-Instruct"], "datasets": ["GAIR/o1-journey"], "language": ["en"], "library_name": "transformers", "license": "creativeml-openrail-m", "pipeline_tag": "text-generation", "tags": ["Qwen2.5", "Llama-Cpp", "CoT", "o1-journey", "text-generation-inference", "safetensors", "Ollama"]}
|
task
|
[
"SUMMARIZATION"
] | 43,506 |
AP7000/distilbert-base-uncased-finetuned-emotion
|
AP7000
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-12-13T21:19:24Z |
2023-12-13T22:21:20+00:00
| 12 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
widget:
- text: I love you.
example_title: Happy
- text: I hate you so much.
example_title: Anger
- text: Thank you for the gift, unexpected
example_title: Joy
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- type: accuracy
value: 0.926
name: Accuracy
- type: f1
value: 0.925811864898524
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2211
- Accuracy: 0.926
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
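As a minimal illustration of the intended use (emotion classification of short English texts), here is a hedged usage sketch; it assumes the checkpoint is available on the Hugging Face Hub under this repository id.
```python
from transformers import pipeline

# Minimal sketch: classify the emotion of a short text with this fine-tuned checkpoint.
classifier = pipeline("text-classification", model="AP7000/distilbert-base-uncased-finetuned-emotion")
print(classifier("Thank you for the gift, I wasn't expecting it!"))
```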
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8501 | 1.0 | 250 | 0.3206 | 0.903 | 0.8995 |
| 0.2561 | 2.0 | 500 | 0.2211 | 0.926 | 0.9258 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu118
- Datasets 1.16.1
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2211
- Accuracy: 0.926
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8501 | 1.0 | 250 | 0.3206 | 0.903 | 0.8995 |
| 0.2561 | 2.0 | 500 | 0.2211 | 0.926 | 0.9258 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu118
- Datasets 1.16.1
- Tokenizers 0.15.0
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "widget": [{"text": "I love you.", "example_title": "Happy"}, {"text": "I hate you so much.", "example_title": "Anger"}, {"text": "Thank you for the gift, unexpected", "example_title": "Joy"}], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.926, "name": "Accuracy"}, {"type": "f1", "value": 0.925811864898524, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,507 |
gaudi/opus-mt-war-en-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-17T00:17:57Z |
2024-10-18T22:56:27+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-war-en)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-war-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install "hf-hub-ctranslate2>=1.0.0" "ctranslate2>=3.13.0"
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-war-en --output_dir ./ctranslate2/opus-mt-war-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-war-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-war-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-war-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-war-en) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-war-en)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-war-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install "hf-hub-ctranslate2>=1.0.0" "ctranslate2>=3.13.0"
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-war-en --output_dir ./ctranslate2/opus-mt-war-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-war-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-war-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-war-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-war-en) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 43,508 |
leedheo/distilbert-base-uncased-finetuned-emotion
|
leedheo
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-04T00:48:22Z |
2023-08-04T00:55:56+00:00
| 10 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.8915
name: Accuracy
- type: f1
value: 0.8849396222880551
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3728
- Accuracy: 0.8915
- F1: 0.8849
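As a quick usage sketch (the repo id below is inferred from this repository's name), the fine-tuned checkpoint can be served with the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Repo id inferred from this repository; adjust if the checkpoint is hosted elsewhere.
classifier = pipeline(
    "text-classification",
    model="leedheo/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I am so relieved the results finally came back fine."))
# -> predicted emotion label with a score; label names depend on the checkpoint's id2label mapping
```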
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5844 | 0.802 | 0.7613 |
| 0.7729 | 2.0 | 250 | 0.3728 | 0.8915 | 0.8849 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3728
- Accuracy: 0.8915
- F1: 0.8849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5844 | 0.802 | 0.7613 |
| 0.7729 | 2.0 | 250 | 0.3728 | 0.8915 | 0.8849 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.8915, "name": "Accuracy"}, {"type": "f1", "value": 0.8849396222880551, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,509 |
hltcoe/plaidx-large-rus-tdist-t53b-engeng
|
hltcoe
| null |
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"clir",
"colbertx",
"plaidx",
"xlm-roberta-large",
"en",
"ru",
"dataset:ms_marco",
"dataset:hltcoe/tdist-msmarco-scores",
"arxiv:2201.08471",
"arxiv:2401.04810",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-12-28T01:20:58Z |
2025-03-12T23:38:32+00:00
| 17 | 1 |
---
datasets:
- ms_marco
- hltcoe/tdist-msmarco-scores
language:
- en
- ru
license: mit
tags:
- clir
- colbertx
- plaidx
- xlm-roberta-large
task_categories:
- text-retrieval
- information-retrieval
task_ids:
- passage-retrieval
- cross-language-retrieval
---
# ColBERT-X for English-Russian CLIR using Translate-Distill
## CLIR Model Setting
- Query language: English
- Query length: 32 token max
- Document language: Russian
- Document length: 180 token max (please use MaxP to aggregate the passage score if needed)
## Model Description
Translate-Distill is a training technique that produces state-of-the-art CLIR dense retrieval models through translation and distillation.
`plaidx-large-rus-tdist-t53b-engeng` is trained with a KL-divergence loss against scores from the t53b MonoT5 reranker run on
English MS MARCO training queries and English passages.
### Teacher Models:
- `t53b`: [`castorini/monot5-3b-msmarco-10k`](https://huggingface.co/castorini/monot5-3b-msmarco-10k)
- `mt5xxl`: [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k)
### Training Parameters
- learning rate: 5e-6
- update steps: 200,000
- nway (number of passages per query): 6 (randomly selected from 50)
- per device batch size (number of query-passage set): 8
- training GPU: 8 NVIDIA V100 with 32 GB memory
## Usage
To properly load ColBERT-X models from the Hugging Face Hub, please use the following version of PLAID-X.
```bash
pip install PLAID-X==0.3.1
```
The following code snippet loads the model through the Hugging Face API.
```python
from colbert.modeling.checkpoint import Checkpoint
from colbert.infra import ColBERTConfig

# Downloads the checkpoint from the Hub and loads it as a ColBERT-X model.
checkpoint = Checkpoint('hltcoe/plaidx-large-rus-tdist-t53b-engeng', colbert_config=ColBERTConfig())
```
For a full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb),
which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial).
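Beyond loading the checkpoint, retrieval in PLAID-X follows the upstream ColBERT indexing/search interface. The sketch below is based on that API and may need small adjustments for your PLAID-X version; the collection path and experiment name are placeholders.
```python
from colbert import Indexer, Searcher
from colbert.infra import Run, RunConfig, ColBERTConfig

# Sketch based on the upstream ColBERT API that PLAID-X follows.
# passages.tsv is a placeholder: one "pid \t passage" pair per line (Russian passages).
with Run().context(RunConfig(nranks=1, experiment="clir-demo")):
    config = ColBERTConfig(doc_maxlen=180, query_maxlen=32)

    indexer = Indexer(checkpoint="hltcoe/plaidx-large-rus-tdist-t53b-engeng", config=config)
    indexer.index(name="rus-passages", collection="passages.tsv")

    searcher = Searcher(index="rus-passages", config=config)
    pids, ranks, scores = searcher.search("What is the capital of Russia?", k=5)  # English query
    for pid, rank, score in zip(pids, ranks, scores):
        print(rank, pid, score)
```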
## BibTeX entry and Citation Info
Please cite the following two papers if you use the model.
```bibtex
@inproceedings{colbert-x,
author = {Suraj Nair and Eugene Yang and Dawn Lawrie and Kevin Duh and Paul McNamee and Kenton Murray and James Mayfield and Douglas W. Oard},
title = {Transfer Learning Approaches for Building Cross-Language Dense Retrieval Models},
booktitle = {Proceedings of the 44th European Conference on Information Retrieval (ECIR)},
year = {2022},
url = {https://arxiv.org/abs/2201.08471}
}
```
```bibtex
@inproceedings{translate-distill,
author = {Eugene Yang and Dawn Lawrie and James Mayfield and Douglas W. Oard and Scott Miller},
title = {Translate-Distill: Learning Cross-Language Dense Retrieval by Translation and Distillation},
booktitle = {Proceedings of the 46th European Conference on Information Retrieval (ECIR)},
year = {2024},
url = {https://arxiv.org/abs/2401.04810}
}
```
| null |
Non_BioNLP
|
# ColBERT-X for English-Russian CLIR using Translate-Distill
## CLIR Model Setting
- Query language: English
- Query length: 32 token max
- Document language: Russian
- Document length: 180 token max (please use MaxP to aggregate the passage score if needed)
## Model Description
Translate-Distill is a training technique that produces state-of-the-art CLIR dense retrieval models through translation and distillation.
`plaidx-large-rus-tdist-t53b-engeng` is trained with a KL-divergence loss against scores from the t53b MonoT5 reranker run on
English MS MARCO training queries and English passages.
### Teacher Models:
- `t53b`: [`castorini/monot5-3b-msmarco-10k`](https://huggingface.co/castorini/monot5-3b-msmarco-10k)
- `mt5xxl`: [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k)
### Training Parameters
- learning rate: 5e-6
- update steps: 200,000
- nway (number of passages per query): 6 (randomly selected from 50)
- per device batch size (number of query-passage set): 8
- training GPU: 8 NVIDIA V100 with 32 GB memory
## Usage
To properly load ColBERT-X models from the Hugging Face Hub, please use the following version of PLAID-X.
```bash
pip install PLAID-X==0.3.1
```
The following code snippet loads the model through the Hugging Face API.
```python
from colbert.modeling.checkpoint import Checkpoint
from colbert.infra import ColBERTConfig

# Downloads the checkpoint from the Hub and loads it as a ColBERT-X model.
checkpoint = Checkpoint('hltcoe/plaidx-large-rus-tdist-t53b-engeng', colbert_config=ColBERTConfig())
```
For a full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb),
which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial).
## BibTeX entry and Citation Info
Please cite the following two papers if you use the model.
```bibtex
@inproceedings{colbert-x,
author = {Suraj Nair and Eugene Yang and Dawn Lawrie and Kevin Duh and Paul McNamee and Kenton Murray and James Mayfield and Douglas W. Oard},
title = {Transfer Learning Approaches for Building Cross-Language Dense Retrieval Models},
booktitle = {Proceedings of the 44th European Conference on Information Retrieval (ECIR)},
year = {2022},
url = {https://arxiv.org/abs/2201.08471}
}
```
```bibtex
@inproceedings{translate-distill,
author = {Eugene Yang and Dawn Lawrie and James Mayfield and Douglas W. Oard and Scott Miller},
title = {Translate-Distill: Learning Cross-Language Dense Retrieval by Translation and Distillation},
booktitle = {Proceedings of the 46th European Conference on Information Retrieval (ECIR)},
year = {2024},
url = {https://arxiv.org/abs/2401.04810}
}
```
|
{"datasets": ["ms_marco", "hltcoe/tdist-msmarco-scores"], "language": ["en", "ru"], "license": "mit", "tags": ["clir", "colbertx", "plaidx", "xlm-roberta-large"], "task_categories": ["text-retrieval", "information-retrieval"], "task_ids": ["passage-retrieval", "cross-language-retrieval"]}
|
task
|
[
"TRANSLATION"
] | 43,510 |
gokuls/hbertv1-massive-intermediate_KD_new_2
|
gokuls
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:gokuls/HBERTv1_48_L10_H768_A12",
"base_model:finetune:gokuls/HBERTv1_48_L10_H768_A12",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-26T02:04:22Z |
2023-11-26T02:20:32+00:00
| 3 | 0 |
---
base_model: gokuls/HBERTv1_48_L10_H768_A12
datasets:
- massive
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: hbertv1-massive-intermediate_KD_new_2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- type: accuracy
value: 0.8342351205115592
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-massive-intermediate_KD_new_2
This model is a fine-tuned version of [gokuls/HBERTv1_48_L10_H768_A12](https://huggingface.co/gokuls/HBERTv1_48_L10_H768_A12) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3102
- Accuracy: 0.8342
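As a rough usage sketch, the checkpoint targets intent classification on MASSIVE (en-US) and can in principle be served through the `transformers` text-classification pipeline. Note that `hybridbert` is a custom architecture, so loading may additionally require the repository's custom modeling code (e.g. `trust_remote_code=True`); the snippet below is illustrative only.
```python
from transformers import pipeline

# Illustrative only: hybridbert is a custom architecture, so this may require
# trust_remote_code=True and the repository shipping its modeling code.
intent_classifier = pipeline(
    "text-classification",
    model="gokuls/hbertv1-massive-intermediate_KD_new_2",
    trust_remote_code=True,
)

print(intent_classifier("wake me up at seven tomorrow morning"))
# -> one of the MASSIVE intent labels (exact names depend on the checkpoint's id2label mapping)
```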
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.5836 | 1.0 | 180 | 3.4660 | 0.2710 |
| 3.38 | 2.0 | 360 | 2.7802 | 0.4324 |
| 2.7571 | 3.0 | 540 | 2.3906 | 0.5991 |
| 2.3743 | 4.0 | 720 | 2.1148 | 0.7029 |
| 2.1481 | 5.0 | 900 | 2.0007 | 0.7245 |
| 1.9762 | 6.0 | 1080 | 1.9660 | 0.7467 |
| 1.8702 | 7.0 | 1260 | 1.8680 | 0.7619 |
| 1.759 | 8.0 | 1440 | 1.8192 | 0.7806 |
| 1.6949 | 9.0 | 1620 | 1.7677 | 0.7949 |
| 1.6253 | 10.0 | 1800 | 1.7452 | 0.7885 |
| 1.5849 | 11.0 | 1980 | 1.7075 | 0.8023 |
| 1.5239 | 12.0 | 2160 | 1.6915 | 0.7939 |
| 1.4768 | 13.0 | 2340 | 1.6821 | 0.8067 |
| 1.4474 | 14.0 | 2520 | 1.7201 | 0.7944 |
| 1.424 | 15.0 | 2700 | 1.6538 | 0.8096 |
| 1.3839 | 16.0 | 2880 | 1.5979 | 0.8141 |
| 1.3537 | 17.0 | 3060 | 1.6254 | 0.8062 |
| 1.3422 | 18.0 | 3240 | 1.6386 | 0.8077 |
| 1.3166 | 19.0 | 3420 | 1.6048 | 0.8141 |
| 1.2923 | 20.0 | 3600 | 1.5927 | 0.8146 |
| 1.2722 | 21.0 | 3780 | 1.5544 | 0.8180 |
| 1.2513 | 22.0 | 3960 | 1.5904 | 0.8077 |
| 1.2286 | 23.0 | 4140 | 1.5506 | 0.8195 |
| 1.2056 | 24.0 | 4320 | 1.5547 | 0.8146 |
| 1.1941 | 25.0 | 4500 | 1.5258 | 0.8224 |
| 1.1701 | 26.0 | 4680 | 1.4975 | 0.8224 |
| 1.1582 | 27.0 | 4860 | 1.4945 | 0.8200 |
| 1.1367 | 28.0 | 5040 | 1.4888 | 0.8219 |
| 1.127 | 29.0 | 5220 | 1.4596 | 0.8254 |
| 1.1126 | 30.0 | 5400 | 1.4686 | 0.8175 |
| 1.0922 | 31.0 | 5580 | 1.4934 | 0.8200 |
| 1.0809 | 32.0 | 5760 | 1.4370 | 0.8249 |
| 1.0715 | 33.0 | 5940 | 1.4305 | 0.8234 |
| 1.0572 | 34.0 | 6120 | 1.4255 | 0.8273 |
| 1.0429 | 35.0 | 6300 | 1.4042 | 0.8249 |
| 1.0375 | 36.0 | 6480 | 1.4004 | 0.8190 |
| 1.0242 | 37.0 | 6660 | 1.3849 | 0.8269 |
| 1.0132 | 38.0 | 6840 | 1.3777 | 0.8288 |
| 1.0085 | 39.0 | 7020 | 1.3731 | 0.8273 |
| 0.9964 | 40.0 | 7200 | 1.3647 | 0.8278 |
| 0.9867 | 41.0 | 7380 | 1.3655 | 0.8239 |
| 0.9787 | 42.0 | 7560 | 1.3542 | 0.8293 |
| 0.9692 | 43.0 | 7740 | 1.3449 | 0.8278 |
| 0.9646 | 44.0 | 7920 | 1.3402 | 0.8283 |
| 0.959 | 45.0 | 8100 | 1.3360 | 0.8288 |
| 0.9482 | 46.0 | 8280 | 1.3289 | 0.8303 |
| 0.9503 | 47.0 | 8460 | 1.3173 | 0.8328 |
| 0.9428 | 48.0 | 8640 | 1.3152 | 0.8333 |
| 0.9416 | 49.0 | 8820 | 1.3102 | 0.8342 |
| 0.9348 | 50.0 | 9000 | 1.3133 | 0.8328 |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.15.0
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-massive-intermediate_KD_new_2
This model is a fine-tuned version of [gokuls/HBERTv1_48_L10_H768_A12](https://huggingface.co/gokuls/HBERTv1_48_L10_H768_A12) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3102
- Accuracy: 0.8342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.5836 | 1.0 | 180 | 3.4660 | 0.2710 |
| 3.38 | 2.0 | 360 | 2.7802 | 0.4324 |
| 2.7571 | 3.0 | 540 | 2.3906 | 0.5991 |
| 2.3743 | 4.0 | 720 | 2.1148 | 0.7029 |
| 2.1481 | 5.0 | 900 | 2.0007 | 0.7245 |
| 1.9762 | 6.0 | 1080 | 1.9660 | 0.7467 |
| 1.8702 | 7.0 | 1260 | 1.8680 | 0.7619 |
| 1.759 | 8.0 | 1440 | 1.8192 | 0.7806 |
| 1.6949 | 9.0 | 1620 | 1.7677 | 0.7949 |
| 1.6253 | 10.0 | 1800 | 1.7452 | 0.7885 |
| 1.5849 | 11.0 | 1980 | 1.7075 | 0.8023 |
| 1.5239 | 12.0 | 2160 | 1.6915 | 0.7939 |
| 1.4768 | 13.0 | 2340 | 1.6821 | 0.8067 |
| 1.4474 | 14.0 | 2520 | 1.7201 | 0.7944 |
| 1.424 | 15.0 | 2700 | 1.6538 | 0.8096 |
| 1.3839 | 16.0 | 2880 | 1.5979 | 0.8141 |
| 1.3537 | 17.0 | 3060 | 1.6254 | 0.8062 |
| 1.3422 | 18.0 | 3240 | 1.6386 | 0.8077 |
| 1.3166 | 19.0 | 3420 | 1.6048 | 0.8141 |
| 1.2923 | 20.0 | 3600 | 1.5927 | 0.8146 |
| 1.2722 | 21.0 | 3780 | 1.5544 | 0.8180 |
| 1.2513 | 22.0 | 3960 | 1.5904 | 0.8077 |
| 1.2286 | 23.0 | 4140 | 1.5506 | 0.8195 |
| 1.2056 | 24.0 | 4320 | 1.5547 | 0.8146 |
| 1.1941 | 25.0 | 4500 | 1.5258 | 0.8224 |
| 1.1701 | 26.0 | 4680 | 1.4975 | 0.8224 |
| 1.1582 | 27.0 | 4860 | 1.4945 | 0.8200 |
| 1.1367 | 28.0 | 5040 | 1.4888 | 0.8219 |
| 1.127 | 29.0 | 5220 | 1.4596 | 0.8254 |
| 1.1126 | 30.0 | 5400 | 1.4686 | 0.8175 |
| 1.0922 | 31.0 | 5580 | 1.4934 | 0.8200 |
| 1.0809 | 32.0 | 5760 | 1.4370 | 0.8249 |
| 1.0715 | 33.0 | 5940 | 1.4305 | 0.8234 |
| 1.0572 | 34.0 | 6120 | 1.4255 | 0.8273 |
| 1.0429 | 35.0 | 6300 | 1.4042 | 0.8249 |
| 1.0375 | 36.0 | 6480 | 1.4004 | 0.8190 |
| 1.0242 | 37.0 | 6660 | 1.3849 | 0.8269 |
| 1.0132 | 38.0 | 6840 | 1.3777 | 0.8288 |
| 1.0085 | 39.0 | 7020 | 1.3731 | 0.8273 |
| 0.9964 | 40.0 | 7200 | 1.3647 | 0.8278 |
| 0.9867 | 41.0 | 7380 | 1.3655 | 0.8239 |
| 0.9787 | 42.0 | 7560 | 1.3542 | 0.8293 |
| 0.9692 | 43.0 | 7740 | 1.3449 | 0.8278 |
| 0.9646 | 44.0 | 7920 | 1.3402 | 0.8283 |
| 0.959 | 45.0 | 8100 | 1.3360 | 0.8288 |
| 0.9482 | 46.0 | 8280 | 1.3289 | 0.8303 |
| 0.9503 | 47.0 | 8460 | 1.3173 | 0.8328 |
| 0.9428 | 48.0 | 8640 | 1.3152 | 0.8333 |
| 0.9416 | 49.0 | 8820 | 1.3102 | 0.8342 |
| 0.9348 | 50.0 | 9000 | 1.3133 | 0.8328 |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.15.0
- Tokenizers 0.15.0
|
{"base_model": "gokuls/HBERTv1_48_L10_H768_A12", "datasets": ["massive"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hbertv1-massive-intermediate_KD_new_2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "massive", "type": "massive", "config": "en-US", "split": "validation", "args": "en-US"}, "metrics": [{"type": "accuracy", "value": 0.8342351205115592, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,511 |
hopkins/eng-fra-sim2b
|
hopkins
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-23T16:20:26Z |
2023-07-23T16:38:51+00:00
| 10 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: eng-fra-sim2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-sim2b
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1587
- Bleu: 32.2739
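Since the base model is mBART-50, inference needs explicit source and target language codes. A minimal sketch, assuming the fine-tuned checkpoint keeps the base model's `en_XX`/`fr_XX` codes and is hosted under this repository's name:
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

# Repo id inferred from this repository; language codes assumed from the mBART-50 base model.
model_id = "hopkins/eng-fra-sim2b"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is beautiful today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # force French output
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```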
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-sim2b
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1587
- Bleu: 32.2739
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-fra-sim2b", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 43,512 |
heshamourad/marian-finetuned-kde4-en-to-fr
|
heshamourad
|
translation
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-02-07T03:29:08Z |
2024-02-07T05:12:53+00:00
| 6 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-fr
datasets:
- kde4
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- type: bleu
value: 52.930569776237235
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8552
- Bleu: 52.9306
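A minimal inference sketch using the `transformers` translation pipeline (the repo id is taken from this repository's name):
```python
from transformers import pipeline

# Repo id taken from this repository's name; adjust if the checkpoint is hosted elsewhere.
translator = pipeline(
    "translation",
    model="heshamourad/marian-finetuned-kde4-en-to-fr",
)

print(translator("Default to expanded threads", max_length=64))
# -> [{'translation_text': '...'}] with the French rendering of the input
```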
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8552
- Bleu: 52.9306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
{"base_model": "Helsinki-NLP/opus-mt-en-fr", "datasets": ["kde4"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-fr", "split": "train", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 52.930569776237235, "name": "Bleu"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 43,513 |
mav23/Atlas-Chat-2B-GGUF
|
mav23
|
text-generation
|
[
"transformers",
"gguf",
"conversational",
"text-generation",
"ar",
"dataset:MBZUAI-Paris/Darija-SFT-Mixture",
"arxiv:2308.16149",
"arxiv:2409.17912",
"base_model:google/gemma-2-2b-it",
"base_model:quantized:google/gemma-2-2b-it",
"license:gemma",
"endpoints_compatible",
"region:us"
] | 2024-10-07T02:55:46Z |
2024-10-07T03:05:25+00:00
| 28 | 0 |
---
base_model:
- google/gemma-2-2b-it
datasets:
- MBZUAI-Paris/Darija-SFT-Mixture
language:
- ar
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
extra_gated_button_content: Acknowledge license
---
# Atlas-Chat Model Card
## Model Overview
Atlas-Chat is a family of open models instruction-tuned for Darija, the colloquial Arabic of Morocco, developed as part of the [Jais](https://arxiv.org/abs/2308.16149) project for standard Arabic and its extensions to dialectal Arabic. These models are designed for language generation and excel in various applications such as question answering, summarization, and translation. Thanks to their compact size, Atlas-Chat models can be deployed in resource-constrained environments like laptops, desktops, or personal cloud setups, making advanced AI accessible to Darija speakers and promoting widespread innovation. Two versions are available:
* [Atlas-Chat-2B](https://huggingface.co/MBZUAI-Paris/Atlas-Chat-2B): A small-sized version with 2 billion parameters, capable of generating fluent Moroccan Darija text while maintaining efficiency.
* [Atlas-Chat-9B](https://huggingface.co/MBZUAI-Paris/Atlas-Chat-9B): A larger version with 9 billion parameters, providing more nuanced, contextually rich language generation for complex tasks.
The models are designed to assist with:
* Conversational agents and chatbots that operate in Darija.
* Translation, summarization, and content generation in informal dialect.
* Cultural research related to Morocco and its language.
**Paper:** [Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect](https://arxiv.org/abs/2409.17912)
## 👥 Our Team
The model is developed by MBZUAI France Lab, an AI research center in Paris affiliated with the [Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)](https://mbzuai.ac.ae/) headquartered in Abu Dhabi.
## Usage
Below we share some code snippets on how to get started quickly with running the model. First, install the Transformers library with:
```sh
pip install -U transformers sentencepiece
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="MBZUAI-Paris/Atlas-Chat-9B",
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda" # replace with "mps" to run on a Mac device
)
messages = [
{"role": "user", "content": 'شكون لي صنعك؟'},
]
outputs = pipe(messages, max_new_tokens=256, temperature=0.0)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
```
- Response:
>صنعاتني جامعة محمد بن زايد للذكاء الاصطناعي، لي هي جامعة بحثية ديال الدراسات العليا الهدف ديالها أنها تزيد بالذكاء الاصطناعي لقدّام وتنفع بيه الإنسانية. يمكن ليك تزور https://mbzuai.ac.ae/ar/about/ باش تعرف كثر على جامعة محمد بن زايد للذكاء الاصطناعي والمهمة ديالها!
#### Running the model on a single / multi GPU
```sh
pip install accelerate
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "MBZUAI-Paris/Atlas-Chat-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{"role": "user", "content": "شنو كيتسمى المنتخب المغربي ؟"},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True, add_generation_prompt=True).to(model.device)
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
- Response:
>المنتخب المغربي كيتسمى أيضا "أسود الأطلس"
<!-- You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "MBZUAI-Paris/Atlas-Chat-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{"role": "user", "content": "شنو هيا الإيجابيات ديال الطاقة المتجددة؟"},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True, add_generation_prompt=True)
outputs = model.generate(**input_ids, max_new_tokens=256, temperature=0.0)
print(tokenizer.decode(outputs[0]))
```
- Response:
```text
<bos><start_of_turn>user
شنو هيا الإيجابيات ديال الطاقة المتجددة؟<end_of_turn>
<start_of_turn>model
الطاقة المتجددة عندها بزاف ديال الإيجابيات، منها:
1. الاستدامة: مصادر الطاقة المتجددة بحال الريح، الشمس، والطاقة الكهرومائية كيتجددو بشكل طبيعي، يعني ما غاديش ينفدو مع الوقت. هاد الشي كيخليهم مصدر طاقة مستدام اللي ممكن نعتمدو عليه على المدى الطويل.
2. تقليل انبعاثات الكربون: مصادر الطاقة المتجددة عموماً عندها انبعاثات كربونية أقل من الوقود الأحفوري، وهاد الشي كيساعد فالتخفيف من التغير المناخي وتقليل تلوث الهواء.
3. الاستقلال الطاقي: مصادر الطاقة المتجددة ممكن نستعملوها باش نقللو من الاعتماد على الوقود الأحفوري المستورد، وهاد الشي كيزيد من الاستقلال الطاقي وكيقلل من خطر التقطيع.
4. خلق فرص الشغل: صناعة الطاقة المتجددة كتخلق فرص شغل فمجالات بحال تركيب الألواح الشمسية، صيانة توربينات الرياح، وبناء محطات
``` -->
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```sh
pip install bitsandbytes accelerate
```
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_id = "MBZUAI-Paris/Atlas-Chat-9B"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=quantization_config,
)
text = f"""
شرح ليا هاد الهضرة:
في القرن 19 لقاو الذّهب في كاليفورنيا، ناضو لّي كيبيعو العتلة والفاس كيقنعو الناس بلي غيديرو لاباس يلا قلبو على الذهب... فالأخير اغتنى تجار أدوات التنقيب والحفر. وحاليا كاين لّي كيقنع الأخرين بلي هو مليونير، وعندو الوقت يورّي للآخرين كيفاش يديرو لاباس.
"""
messages = [
{"role": "user", "content": text},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1])
```
- Response:
>هاد الهضرة كتهضر على قصة قديمة من القرن 19 فين تكتشف الذهب فكاليفورنيا. هاد الشي خلق حالة ديال الجنون على الذهب، فين بزاف ديال الناس مشاو لتما باش يقلبو عليه. كانو حتى ناس اللي كانو كيبيعو أدوات التنقيب بحال الفاس والعتلة، وكانو كيقنعو الناس بلي غادي يربحو الفلوس إلا مشاو يقلبو على الذهب. فالنهاية، هادوك اللي كانو كيبيعو هاد الأدوات هوما اللي ربحو بزاف، حيت كانو كيربحو من كل واحد اللي كان كيشري منهم.
>
>هاد القصة كتشبه للي كاينة دابا، فين كاينين ناس اللي كيدعيو بلي هوما مليونير وكيبيعو نصائح على كيفاش تربح الفلوس. بحال هادوك اللي كانو كيبيعو الأدوات فالماضي، حتى هاد الناس كيربحو من هاد الشي، حيت كياخدو الفلوس من الناس اللي كيشريو منهم النصائح ديالهم.
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_id = "MBZUAI-Paris/Atlas-Chat-9B"
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=quantization_config,
)
text = f"""ترجم للدارجة:
Atlas Chat is the first open source large language model that talks in Darija.
"""
messages = [
{"role": "user", "content": text},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True, add_generation_prompt=True).to(model.device)
outputs = model.generate(**input_ids, max_new_tokens=256, temperature=0.0)
print(tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1])
```
- Response:
>أطلّاس شات هو أول نموذج لغوي كبير مفتوح المصدر كايهضر بالدارجة.
</details>
### Chat Template
The models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "MBZUAI-Paris/Atlas-Chat-9B"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "أشنو كايمييز المملكة المغربية." },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
أشنو كايمييز المملكة المغربية.<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
```
- Response:
>المغرب كايميزو بزاف ديال الحوايج، منهم:
>
>1. التنوع الثقافي: المغرب بلاد فيها بزاف ديال الثقافات، كل وحدة فيهم عندها التقاليد ديالها واللغة ديالها والماكلة ديالها. هاد التنوع كايبان فالموسيقى والرقص والفنون التقليدية.
>
>2. التراث التاريخي: المغرب عندو تاريخ غني كايمتد لآلاف السنين، فيه حضارات قديمة بحال مملكة موريطانيا، والرومان، والبيزنطيين، والفتوحات الإسلامية. هاد التراث كايبان فالمعالم التاريخية بحال مدينة فاس، والمدينة القديمة ديال مراكش، والمدينة القديمة ديال شفشاون.
>
>3. المناظر الطبيعية: المغرب بلاد فيها مناظر طبيعية متنوعة، من السواحل الزرقة والصحاري الكبيرة، للجبال العالية والوديان الخضراء. هاد التنوع كايمكنك من ممارسة أنشطة خارجية بحال المشي لمسافات طويلة، والتخييم، والرياضات المائية.
>
>4. الماكلة: الماكلة المغربية معروفة بالتنوع ديالها والطعم ديالها. من بين الأطباق الأكثر شعبية كاين الطاجين، والكسكس، والبريوات، والكوكتيل ديال الفواكه.
>
>5. الناس: المغاربة معروفين بالضيافة ديالهم والترحاب ديالهم. كايكونو فرحانين باش يشاركو الثقافة والتقاليد ديالهم مع الزوار.
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated Darija text in response to the input, such
as an answer to a question, or a summary of a document.
### Chatbot interface using Ollama
You can also use Ollama and chatbot-ollama to create a chatbot user interface to better test the model.
First, you need to install Ollama on your machine from [here](https://github.com/ollama/ollama) and have Node.js installed as well. Then, download and prepare the model as follows:
```bash
huggingface-cli download MBZUAI-Paris/Atlas-Chat-9B --local-dir Atlas-Chat-9B/
ollama create Atlas-Chat-9B -f Atlas-Chat-9B/modelfile
ollama serve
```
Finally, in a new terminal clone chatbot-ollama repository from Github and run it:
```bash
git clone https://github.com/ivanfioravanti/chatbot-ollama.git
cd chatbot-ollama
npm ci
npm run dev
```
You can start chatting with the model by visiting http://localhost:3000.
### Citation
If you use Atlas-Chat in your research, please cite our paper:
```none
@article{shang2024atlaschatadaptinglargelanguage,
title={Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect},
author={Guokan Shang and Hadi Abdine and Yousef Khoubrane and Amr Mohamed and Yassine Abbahaddou and Sofiane Ennadir and Imane Momayiz and Xuguang Ren and Eric Moulines and Preslav Nakov and Michalis Vazirgiannis and Eric Xing},
year={2024},
eprint={2409.17912},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.17912},
}
```
## Training Data
The model was trained on diverse datasets focusing on Darija, consisting of approximately 450k instructions with a maximum length of 2048 tokens, including:
* Synthetic instructions created to guide the model in processing various types of language tasks tailored to Moroccan culture.
* Instruction samples created from publicly available Moroccan Arabic datasets including translation, summarization and sentiment analysis.
* Translated English and multi-lingual instruction-tuning datasets.
Our training dataset [Darija-SFT-Mixture](https://huggingface.co/datasets/MBZUAI-Paris/Darija-SFT-Mixture) is publicly available.
## Implementation Information
Atlas-Chat models are based on Gemma 2 models. The Atlas-Chat models were trained using 8 NVIDIA A100 80 GB GPUs in parallel with FSDP on AWS SageMaker. The models were trained using Hugging Face transformers and parameter-efficient fine-tuning with a LoRA rank of 256.
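As an illustration of such a setup (not the authors' exact recipe: only the LoRA rank is stated above, so the base checkpoint choice, alpha, dropout, and target modules below are assumptions), a PEFT configuration could look like:
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative only: rank 256 is reported in this card; everything else is an assumption.
base = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")
lora_config = LoraConfig(
    r=256,                                                     # rank reported above
    lora_alpha=256,                                            # assumption
    lora_dropout=0.05,                                         # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```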
## Evaluation
The Atlas-Chat models were evaluated on a comprehensive suite of tasks using various datasets and benchmarks to assess their performance across multiple dimensions. These included tasks such as:
* **DarijaMMLU:** A Darija version of ArabicMMLU and MMLU benchmarks translated from MSA and English respectively.
* **DarijaHellaSwag:** A Darija version of HellaSwag.
* **Belebele Ary_Arab:** Belebele is a multiple-choice machine reading comprehension dataset published by Facebook spanning 122 language variants. The evaluation is done on the Ary_Arab part of Belebele, which refers to Darija.
* **Sentiment Analysis.**
* **Translation:** Including six directions and four languages: Darija, MSA, English and French.
* **Summarization.**
The models were compared against a collection of existing open-source Arabic models to gauge their effectiveness, with a particular focus on performance in Darija. All scores are based on zero-shot performance. The prompts are written mainly in Darija. The metric used for DarijaMMLU, DarijaHellaSwag, Belebele Ary, and Sentiment Analysis is normalized accuracy. We used the [Language Model Evaluation Harness](https://github.com/MBZUAI-Paris/lm-evaluation-harness-atlas-chat) to conduct these evaluations; a minimal invocation sketch follows the results table below.
<table>
<tr>
<td rowspan="2">Model</td>
<td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaMMLU" target="_blank">DarijaMMLU</a></td>
<td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaHellaSwag" target="_blank">DarijaHellaSwag</a></td>
<td rowspan="2"><a href="https://huggingface.co/datasets/facebook/belebele/viewer/ary_Arab" target="_blank">Belebele Ary</a></td>
<td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">Sentiment Analysis</a></td>
<td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">DoDa-10k (Translation)</a></td>
<td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">MArSum (Summarization)</a><br/>(LLM as a judge)</td>
</tr>
<tr>
<td>BLEU</td>
<td>chrF</td>
</tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-family-1p3b-chat" target="_blank">jais-family-1p3b-chat</a></td>
<td>35.39</td>
<td>32.51</td>
<td>38.33</td>
<td>45.29</td>
<td>00.13</td>
<td>06.18</td>
<td>00.50</td>
</tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-family-2p7b-chat" target="_blank">jais-family-2p7b-chat</a></td>
<td>37.44</td>
<td>34.49</td>
<td>44.11</td>
<td>51.56</td>
<td>00.25</td>
<td>07.46</td>
<td>00.90</td>
</tr>
<tr>
<td><a href="https://huggingface.co/google/gemma-2-2b-it" target="_blank">gemma-2-2b-it</a></td>
<td>28.58</td>
<td>32.42</td>
<td>25.22</td>
<td>53.36</td>
<td>00.10</td>
<td>04.96</td>
<td>06.80</td>
</tr>
<tr>
<td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-2B" target="_blank">Atlas-Chat-2B</a></strong></td>
<td><b>44.97</b></td>
<td><b>41.48</b></td>
<td><b>53.89</b></td>
<td><b>73.99</b></td>
<td><b>22.76</b></td>
<td><b>44.86</b></td>
<td><b>55.22</b></td>
</tr>
<tr style="border-top: 4px solid;"></tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-family-6p7b-chat" target="_blank">jais-family-6p7b-chat</a></td>
<td>39.96</td>
<td>41.57</td>
<td>51.22</td>
<td>56.78</td>
<td>00.73</td>
<td>11.85</td>
<td>03.02</td>
</tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-adapted-7b-chat" target="_blank">jais-adapted-7b-chat</a></td>
<td>39.30</td>
<td>35.19</td>
<td>43.67</td>
<td>52.72</td>
<td>00.60</td>
<td>09.43</td>
<td>02.82</td>
</tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-family-13b-chat" target="_blank">jais-family-13b-chat</a></td>
<td>45.11</td>
<td>43.90</td>
<td>58.67</td>
<td>41.73</td>
<td>00.92</td>
<td>11.71</td>
<td>01.77</td>
</tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-adapted-13b-chat" target="_blank">jais-adapted-13b-chat</a></td>
<td>45.20</td>
<td>40.65</td>
<td>49.67</td>
<td>66.68</td>
<td>00.87</td>
<td>10.52</td>
<td>01.92</td>
</tr>
<tr>
<td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-7B-chat" target="_blank">AceGPT-7b-chat</a></td>
<td>35.98</td>
<td>36.57</td>
<td>30.11</td>
<td>40.23</td>
<td>00.44</td>
<td>11.33</td>
<td>02.28</td>
</tr>
<tr>
<td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-13B-chat" target="_blank">AceGPT-13b-chat</a></td>
<td>41.09</td>
<td>38.35</td>
<td>33.11</td>
<td>59.58</td>
<td>00.98</td>
<td>16.70</td>
<td>02.80</td>
</tr>
<tr>
<td><a href="https://huggingface.co/google/gemma-2-9b-it" target="_blank">gemma-2-9b-it</a></td>
<td>35.91</td>
<td>42.43</td>
<td>31.00</td>
<td>59.87</td>
<td>03.10</td>
<td>19.16</td>
<td>13.81</td>
</tr>
<tr>
<td><a href="https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct" target="_blank">Llama-3.1-8B-Instruct</a></td>
<td>44.13</td>
<td>38.24</td>
<td>47.00</td>
<td>44.08</td>
<td>00.92</td>
<td>14.19</td>
<td>01.28</td>
</tr>
<tr>
<td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-9B" target="_blank">Atlas-Chat-9B</a></strong></td>
<td><b>58.23</b></td>
<td><b>57.75</b></td>
<td><b>74.56</b></td>
<td><b>81.89</b></td>
<td><b>28.08</b></td>
<td><b>50.48</b></td>
<td><b>59.76</b></td>
</tr>
</table>
## Usage and Limitations
These models have certain limitations that users should be aware of.
<details>
<summary>Intended Usage</summary>
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
</details>
<details>
<summary>Limitations</summary>
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
</details>
<details>
<summary> Ethical Considerations and Risks</summary>
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
Responsible Generative AI Toolkit.
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
</details>
## Acknowledgement
We would like to express our gratitude to the following institutions for their contributions to this work: École Polytechnique, LINAGORA and KTH Royal Institute of Technology. Additionally, we extend our thanks to the AtlasIA community.
| null |
Non_BioNLP
|
# Atlas-Chat Model Card
## Model Overview
Atlas-Chat is a family of open models instruction-tuned for Darija, the colloquial Arabic of Morocco, developed as part of the [Jais](https://arxiv.org/abs/2308.16149) project for standard Arabic and its extensions to dialectal Arabic. These models are designed for language generation and excel in various applications such as question answering, summarization, and translation. Thanks to their compact size, Atlas-Chat models can be deployed in resource-constrained environments like laptops, desktops, or personal cloud setups, making advanced AI accessible to Darija speakers and promoting widespread innovation. Two versions are available:
* [Atlas-Chat-2B](https://huggingface.co/MBZUAI-Paris/Atlas-Chat-2B): A small-sized version with 2 billion parameters, capable of generating fluent Moroccan Darija text while maintaining efficiency.
* [Atlas-Chat-9B](https://huggingface.co/MBZUAI-Paris/Atlas-Chat-9B): A larger version with 9 billion parameters, providing more nuanced, contextually rich language generation for complex tasks.
The models are designed to assist with:
* Conversational agents and chatbots that operate in Darija.
* Translation, summarization, and content generation in informal dialect.
* Cultural research related to Morocco and its language.
**Paper:** [Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect](https://arxiv.org/abs/2409.17912)
## 👥 Our Team
The model is developed by MBZUAI France Lab, an AI research center in Paris affiliated with the [Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)](https://mbzuai.ac.ae/) headquartered in Abu Dhabi.
## Usage
Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with:
```sh
pip install -U transformers sentencepiece
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="MBZUAI-Paris/Atlas-Chat-9B",
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda" # replace with "mps" to run on a Mac device
)
messages = [
{"role": "user", "content": 'شكون لي صنعك؟'},
]
outputs = pipe(messages, max_new_tokens=256, temperature=0.0)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
```
- Response:
>صنعاتني جامعة محمد بن زايد للذكاء الاصطناعي، لي هي جامعة بحثية ديال الدراسات العليا الهدف ديالها أنها تزيد بالذكاء الاصطناعي لقدّام وتنفع بيه الإنسانية. يمكن ليك تزور https://mbzuai.ac.ae/ar/about/ باش تعرف كثر على جامعة محمد بن زايد للذكاء الاصطناعي والمهمة ديالها!
#### Running the model on a single / multi GPU
```sh
pip install accelerate
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "MBZUAI-Paris/Atlas-Chat-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{"role": "user", "content": "شنو كيتسمى المنتخب المغربي ؟"},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True, add_generation_prompt=True).to(model.device)
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
- Response:
>المنتخب المغربي كيتسمى أيضا "أسود الأطلس"
<!-- You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "MBZUAI-Paris/Atlas-Chat-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{"role": "user", "content": "شنو هيا الإيجابيات ديال الطاقة المتجددة؟"},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True, add_generation_prompt=True)
outputs = model.generate(**input_ids, max_new_tokens=256, temperature=0.0)
print(tokenizer.decode(outputs[0]))
```
- Response:
```text
<bos><start_of_turn>user
شنو هيا الإيجابيات ديال الطاقة المتجددة؟<end_of_turn>
<start_of_turn>model
الطاقة المتجددة عندها بزاف ديال الإيجابيات، منها:
1. الاستدامة: مصادر الطاقة المتجددة بحال الريح، الشمس، والطاقة الكهرومائية كيتجددو بشكل طبيعي، يعني ما غاديش ينفدو مع الوقت. هاد الشي كيخليهم مصدر طاقة مستدام اللي ممكن نعتمدو عليه على المدى الطويل.
2. تقليل انبعاثات الكربون: مصادر الطاقة المتجددة عموماً عندها انبعاثات كربونية أقل من الوقود الأحفوري، وهاد الشي كيساعد فالتخفيف من التغير المناخي وتقليل تلوث الهواء.
3. الاستقلال الطاقي: مصادر الطاقة المتجددة ممكن نستعملوها باش نقللو من الاعتماد على الوقود الأحفوري المستورد، وهاد الشي كيزيد من الاستقلال الطاقي وكيقلل من خطر التقطيع.
4. خلق فرص الشغل: صناعة الطاقة المتجددة كتخلق فرص شغل فمجالات بحال تركيب الألواح الشمسية، صيانة توربينات الرياح، وبناء محطات
``` -->
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```sh
pip install bitsandbytes accelerate
```
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_id = "MBZUAI-Paris/Atlas-Chat-9B"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=quantization_config,
)
text = f"""
شرح ليا هاد الهضرة:
في القرن 19 لقاو الذّهب في كاليفورنيا، ناضو لّي كيبيعو العتلة والفاس كيقنعو الناس بلي غيديرو لاباس يلا قلبو على الذهب... فالأخير اغتنى تجار أدوات التنقيب والحفر. وحاليا كاين لّي كيقنع الأخرين بلي هو مليونير، وعندو الوقت يورّي للآخرين كيفاش يديرو لاباس.
"""
messages = [
{"role": "user", "content": text},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1])
```
- Response:
>هاد الهضرة كتهضر على قصة قديمة من القرن 19 فين تكتشف الذهب فكاليفورنيا. هاد الشي خلق حالة ديال الجنون على الذهب، فين بزاف ديال الناس مشاو لتما باش يقلبو عليه. كانو حتى ناس اللي كانو كيبيعو أدوات التنقيب بحال الفاس والعتلة، وكانو كيقنعو الناس بلي غادي يربحو الفلوس إلا مشاو يقلبو على الذهب. فالنهاية، هادوك اللي كانو كيبيعو هاد الأدوات هوما اللي ربحو بزاف، حيت كانو كيربحو من كل واحد اللي كان كيشري منهم.
>
>هاد القصة كتشبه للي كاينة دابا، فين كاينين ناس اللي كيدعيو بلي هوما مليونير وكيبيعو نصائح على كيفاش تربح الفلوس. بحال هادوك اللي كانو كيبيعو الأدوات فالماضي، حتى هاد الناس كيربحو من هاد الشي، حيت كياخدو الفلوس من الناس اللي كيشريو منهم النصائح ديالهم.
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_id = "MBZUAI-Paris/Atlas-Chat-9B"
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=quantization_config,
)
text = f"""ترجم للدارجة:
Atlas Chat is the first open source large language model that talks in Darija.
"""
messages = [
{"role": "user", "content": text},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True, add_generation_prompt=True).to(model.device)
outputs = model.generate(**input_ids, max_new_tokens=256, temperature=0.0)
print(tokenizer.decode(outputs[0]).split("<start_of_turn>model")[-1])
```
- Response:
>أطلّاس شات هو أول نموذج لغوي كبير مفتوح المصدر كايهضر بالدارجة.
</details>
### Chat Template
The models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "MBZUAI-Paris/Atlas-Chat-9B"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "أشنو كايمييز المملكة المغربية." },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
أشنو كايمييز المملكة المغربية.<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
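For example, a minimal helper that reproduces this layout by hand might look like the sketch below (assistant turns are labelled with the `model` role, as in the template above); in practice, `apply_chat_template` remains the safer option:

```python
def build_prompt(messages, add_generation_prompt=True):
    """Build an Atlas-Chat prompt string manually, mirroring the chat template shown above."""
    prompt = "<bos>"
    for message in messages:
        # Assistant turns must use the "model" role, per the template format above.
        role = "model" if message["role"] == "assistant" else message["role"]
        prompt += f"<start_of_turn>{role}\n{message['content']}<end_of_turn>\n"
    if add_generation_prompt:
        prompt += "<start_of_turn>model\n"
    return prompt

chat = [{"role": "user", "content": "أشنو كايمييز المملكة المغربية."}]
print(build_prompt(chat))  # produces the same text as the rendered template above
```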
After the prompt is ready, generation can be performed like this:
```python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
```
- Response:
>المغرب كايميزو بزاف ديال الحوايج، منهم:
>
>1. التنوع الثقافي: المغرب بلاد فيها بزاف ديال الثقافات، كل وحدة فيهم عندها التقاليد ديالها واللغة ديالها والماكلة ديالها. هاد التنوع كايبان فالموسيقى والرقص والفنون التقليدية.
>
>2. التراث التاريخي: المغرب عندو تاريخ غني كايمتد لآلاف السنين، فيه حضارات قديمة بحال مملكة موريطانيا، والرومان، والبيزنطيين، والفتوحات الإسلامية. هاد التراث كايبان فالمعالم التاريخية بحال مدينة فاس، والمدينة القديمة ديال مراكش، والمدينة القديمة ديال شفشاون.
>
>3. المناظر الطبيعية: المغرب بلاد فيها مناظر طبيعية متنوعة، من السواحل الزرقة والصحاري الكبيرة، للجبال العالية والوديان الخضراء. هاد التنوع كايمكنك من ممارسة أنشطة خارجية بحال المشي لمسافات طويلة، والتخييم، والرياضات المائية.
>
>4. الماكلة: الماكلة المغربية معروفة بالتنوع ديالها والطعم ديالها. من بين الأطباق الأكثر شعبية كاين الطاجين، والكسكس، والبريوات، والكوكتيل ديال الفواكه.
>
>5. الناس: المغاربة معروفين بالضيافة ديالهم والترحاب ديالهم. كايكونو فرحانين باش يشاركو الثقافة والتقاليد ديالهم مع الزوار.
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated Darija text in response to the input, such
as an answer to a question, or a summary of a document.
### Chatbot interface using Ollama
You can also use Ollama and chatbot-ollama to create a chatbot user-interface to better test the model.
First you need to install Ollama on your machine from [here](https://github.com/ollama/ollama) and have node.js installed as well. Then, download and prepare the model as follows:
```bash
huggingface-cli download MBZUAI-Paris/Atlas-Chat-9B --local-dir Atlas-Chat-9B/
ollama create Atlas-Chat-9B -f Atlas-Chat-9B/modelfile
ollama serve
```
Finally, in a new terminal clone chatbot-ollama repository from Github and run it:
```bash
git clone https://github.com/ivanfioravanti/chatbot-ollama.git
cd chatbot-ollama
npm ci
npm run dev
```
You can start chatting with the model by visiting http://localhost:3000.
### Citation
If you use Atlas-Chat in your research, please cite our paper:
```none
@article{shang2024atlaschatadaptinglargelanguage,
title={Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect},
author={Guokan Shang and Hadi Abdine and Yousef Khoubrane and Amr Mohamed and Yassine Abbahaddou and Sofiane Ennadir and Imane Momayiz and Xuguang Ren and Eric Moulines and Preslav Nakov and Michalis Vazirgiannis and Eric Xing},
year={2024},
eprint={2409.17912},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.17912},
}
```
## Training Data
The model was trained on diverse datasets focusing on Darija, consisting of approximately 450k instructions with a maximum length of 2048 tokens, including:
* Synthetic instructions created to guide the model in processing various types of language tasks tailored towards Moroccan culture.
* Instruction samples created from publicly available Moroccan Arabic datasets including translation, summarization and sentiment analysis.
* Translated English and multi-lingual instruction-tuning datasets.
Our training dataset [Darija-SFT-Mixture](https://huggingface.co/datasets/MBZUAI-Paris/Darija-SFT-Mixture) is publicly available.
## Implementation Information
Atlas-Chat models are based on Gemma 2 models. The Atlas-Chat models were trained on 8 Nvidia A100 80 GB GPUs in parallel using FSDP on AWS SageMaker. The models were trained using HuggingFace Transformers and parameter-efficient fine-tuning with a LoRA rank of 256.
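The full fine-tuning configuration is not released, but a minimal sketch of a rank-256 LoRA setup with the `peft` library is shown below; the target modules, alpha, and dropout values are illustrative assumptions, not the actual training settings:

```python
# Sketch only: r=256 comes from the card; lora_alpha, lora_dropout and
# target_modules are illustrative assumptions, not the released configuration.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b-it")
lora_config = LoraConfig(
    r=256,                         # LoRA rank reported above
    lora_alpha=512,                # assumption
    lora_dropout=0.05,             # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```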
## Evaluation
The Atlas-Chat models were evaluated on a comprehensive suite of tasks using various datasets and benchmarks to assess their performance across multiple dimensions. These included tasks such as:
* **DarijaMMLU:** A Darija version of ArabicMMLU and MMLU benchmarks translated from MSA and English respectively.
* **DarijaHellaSwag:** A Darija version of HellaSwag.
* **Belebele Ary_Arab:** Belebele is a multiple-choice machine reading comprehension dataset published by Facebook spanning 122 language variants. The evaluation is done on the Ary_Arab part of Belebele, which corresponds to Darija.
* **Sentiment Analysis.**
* **Translation:** Including six directions and four languages: Darija, MSA, English and French.
* **Summarization.**
The models were compared against a collection of existing open-source Arabic models to gauge their effectiveness, with a particular focus on performance in Darija. All scores are based on zero-shot performance. The prompts are written mainly in Darija. The metric used for DarijaMMLU, DarijaHellaSwag, Belebele Ary and Sentiment Analysis is the normalized accuracy. We used [Language Model Evaluation Harness](https://github.com/MBZUAI-Paris/lm-evaluation-harness-atlas-chat) to conduct these evaluations.
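For reference, a zero-shot run with the harness's Python API could be launched roughly as sketched below; the task identifiers and model arguments are illustrative assumptions, since the exact task names are defined in the MBZUAI-Paris fork:

```python
# Illustrative sketch: task names here are assumptions; the exact identifiers
# are defined in the MBZUAI-Paris fork of lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MBZUAI-Paris/Atlas-Chat-9B,dtype=bfloat16",
    tasks=["darijammlu", "darijahellaswag"],  # hypothetical task names
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```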
<table>
<tr>
<td rowspan="2">Model</td>
<td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaMMLU" target="_blank">DarijaMMLU</a></td>
<td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaHellaSwag" target="_blank">DarijaHellaSwag</a></td>
<td rowspan="2"><a href="https://huggingface.co/datasets/facebook/belebele/viewer/ary_Arab" target="_blank">Belebele Ary</a></td>
<td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">Sentiment Analysis</a></td>
<td colspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">DoDa-10k (Translation)</a></td>
<td rowspan="2"><a href="https://huggingface.co/datasets/MBZUAI-Paris/DarijaBench" target="_blank">MArSum (Summarization)</a><br/>(LLM as a judge)</td>
</tr>
<tr>
<td>BLEU</td>
<td>chrF</td>
</tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-family-1p3b-chat" target="_blank">jais-family-1p3b-chat</a></td>
<td>35.39</td>
<td>32.51</td>
<td>38.33</td>
<td>45.29</td>
<td>00.13</td>
<td>06.18</td>
<td>00.50</td>
</tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-family-2p7b-chat" target="_blank">jais-family-2p7b-chat</a></td>
<td>37.44</td>
<td>34.49</td>
<td>44.11</td>
<td>51.56</td>
<td>00.25</td>
<td>07.46</td>
<td>00.90</td>
</tr>
<tr>
<td><a href="https://huggingface.co/google/gemma-2-2b-it" target="_blank">gemma-2-2b-it</a></td>
<td>28.58</td>
<td>32.42</td>
<td>25.22</td>
<td>53.36</td>
<td>00.10</td>
<td>04.96</td>
<td>06.80</td>
</tr>
<tr>
<td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-2B" target="_blank">Atlas-Chat-2B</a></strong></td>
<td><b>44.97</b></td>
<td><b>41.48</b></td>
<td><b>53.89</b></td>
<td><b>73.99</b></td>
<td><b>22.76</b></td>
<td><b>44.86</b></td>
<td><b>55.22</b></td>
</tr>
<tr style="border-top: 4px solid;"></tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-family-6p7b-chat" target="_blank">jais-family-6p7b-chat</a></td>
<td>39.96</td>
<td>41.57</td>
<td>51.22</td>
<td>56.78</td>
<td>00.73</td>
<td>11.85</td>
<td>03.02</td>
</tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-adapted-7b-chat" target="_blank">jais-adapted-7b-chat</a></td>
<td>39.30</td>
<td>35.19</td>
<td>43.67</td>
<td>52.72</td>
<td>00.60</td>
<td>09.43</td>
<td>02.82</td>
</tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-family-13b-chat" target="_blank">jais-family-13b-chat</a></td>
<td>45.11</td>
<td>43.90</td>
<td>58.67</td>
<td>41.73</td>
<td>00.92</td>
<td>11.71</td>
<td>01.77</td>
</tr>
<tr>
<td><a href="https://huggingface.co/inceptionai/jais-adapted-13b-chat" target="_blank">jais-adapted-13b-chat</a></td>
<td>45.20</td>
<td>40.65</td>
<td>49.67</td>
<td>66.68</td>
<td>00.87</td>
<td>10.52</td>
<td>01.92</td>
</tr>
<tr>
<td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-7B-chat" target="_blank">AceGPT-7b-chat</a></td>
<td>35.98</td>
<td>36.57</td>
<td>30.11</td>
<td>40.23</td>
<td>00.44</td>
<td>11.33</td>
<td>02.28</td>
</tr>
<tr>
<td><a href="https://huggingface.co/FreedomIntelligence/AceGPT-13B-chat" target="_blank">AceGPT-13b-chat</a></td>
<td>41.09</td>
<td>38.35</td>
<td>33.11</td>
<td>59.58</td>
<td>00.98</td>
<td>16.70</td>
<td>02.80</td>
</tr>
<tr>
<td><a href="https://huggingface.co/google/gemma-2-9b-it" target="_blank">gemma-2-9b-it</a></td>
<td>35.91</td>
<td>42.43</td>
<td>31.00</td>
<td>59.87</td>
<td>03.10</td>
<td>19.16</td>
<td>13.81</td>
</tr>
<tr>
<td><a href="https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct" target="_blank">Llama-3.1-8B-Instruct</a></td>
<td>44.13</td>
<td>38.24</td>
<td>47.00</td>
<td>44.08</td>
<td>00.92</td>
<td>14.19</td>
<td>01.28</td>
</tr>
<tr>
<td><strong><a href="https://huggingface.co/MBZUAI-Paris/Atlas-Chat-9B" target="_blank">Atlas-Chat-9B</a></strong></td>
<td><b>58.23</b></td>
<td><b>57.75</b></td>
<td><b>74.56</b></td>
<td><b>81.89</b></td>
<td><b>28.08</b></td>
<td><b>50.48</b></td>
<td><b>59.76</b></td>
</tr>
</table>
## Usage and Limitations
These models have certain limitations that users should be aware of.
<details>
<summary>Intended Usage</summary>
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
</details>
<details>
<summary>Limitations</summary>
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
</details>
<details>
<summary> Ethical Considerations and Risks</summary>
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
Responsible Generative AI Toolkit.
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
</details>
## Acknowledgement
We would like to express our gratitude to the following institutions for their contributions to this work: École Polytechnique, LINAGORA and KTH Royal Institute of Technology. Additionally, we extend our thanks to the AtlasIA community.
|
{"base_model": ["google/gemma-2-2b-it"], "datasets": ["MBZUAI-Paris/Darija-SFT-Mixture"], "language": ["ar"], "library_name": "transformers", "license": "gemma", "pipeline_tag": "text-generation", "tags": ["conversational"], "extra_gated_button_content": "Acknowledge license"}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | 43,514 |
adeep028/bert-fine-tuned-cola
|
adeep028
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-11T17:44:14Z |
2023-09-11T18:10:54+00:00
| 11 | 0 |
---
base_model: bert-base-cased
datasets:
- glue
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: bert-fine-tuned-cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- type: matthews_correlation
value: 0.6118771035334829
name: Matthews Correlation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7565
- Matthews Correlation: 0.6119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
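The training script itself is not included in this card, but these hyperparameters map onto `transformers.TrainingArguments` roughly as in the sketch below (the output directory and any unlisted settings are assumptions):

```python
from transformers import TrainingArguments

# Sketch: values mirror the hyperparameters listed above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="bert-fine-tuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
)
```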
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4374 | 1.0 | 1069 | 0.4163 | 0.5558 |
| 0.3114 | 2.0 | 2138 | 0.6548 | 0.6006 |
| 0.1875 | 3.0 | 3207 | 0.7565 | 0.6119 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7565
- Matthews Correlation: 0.6119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4374 | 1.0 | 1069 | 0.4163 | 0.5558 |
| 0.3114 | 2.0 | 2138 | 0.6548 | 0.6006 |
| 0.1875 | 3.0 | 3207 | 0.7565 | 0.6119 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
{"base_model": "bert-base-cased", "datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-fine-tuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.6118771035334829, "name": "Matthews Correlation"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,515 |
bullerwins/Hermes-2-Theta-Llama-3-70B-exl2_4.0bpw
|
bullerwins
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"distillation",
"synthetic data",
"function calling",
"structured outputs",
"json mode",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | 2024-06-21T13:45:27Z |
2024-06-21T13:58:53+00:00
| 8 | 2 |
---
language:
- en
license: llama3
pipeline_tag: text-generation
tags:
- distillation
- synthetic data
- function calling
- structured outputs
- json mode
---
Quantized to exl2 using [Exllamav2 0.1.5](https://github.com/turboderp/exllamav2)
Original model [NousResearch/Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B)
# Hermes 2 Theta Llama-3 70B Model Card

## Model Description
Hermes-2 Θ (Theta) 70B is the continuation of our experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard and [Arcee AI](https://www.arcee.ai/), the team behind [MergeKit](https://github.com/arcee-ai/mergekit).
Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, combining the best of both worlds of each model.
# Comparison to Llama-3 70B Instruct Model:

# Example Outputs
## System Prompt: Roleplay as an Anime Catgirl who's good at programming and a hacker.

## Providing Structured Outputs for Annotating LLM Training Data

## System Prompt: Roleplay as a bombastic alchemist from the 17th century in France. You are on a journey with the user to find the philosopher's stone.

# Prompt Format & Capabilities
Hermes 2 Θ uses ChatML as the prompt format, opening up a very structured and steerable, multiturn system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, in which special tokens are added to denote the beginning and end of any turn, along with roles for the turns.
The model is also specifically trained for Function Calling, Structured Outputs with JSON, and Feature Extraction from RAG Documents (see below).
## Conversational Chats
Here is an example prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
## Function Calling Format
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
<tools>
{"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}}
</tools>
Use the following pydantic model json schema for each tool call you will make:
{"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"}
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function.
See example inference code here: https://github.com/NousResearch/Hermes-Function-Calling:
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
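A minimal way to handle this in your own inference loop is to pull the JSON out of the `<tool_call>` tags and dispatch it to the matching Python function; the sketch below uses a placeholder implementation of `get_stock_fundamentals` and a hard-coded `response_text`, both of which stand in for your real tool and the raw model output:

```python
import json
import re

def parse_tool_calls(response_text):
    """Extract the JSON payloads inside <tool_call>...</tool_call> tags."""
    payloads = re.findall(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", response_text, re.DOTALL)
    return [json.loads(p) for p in payloads]

def get_stock_fundamentals(symbol):
    # Placeholder for the real yfinance-backed implementation from the system prompt.
    return {"symbol": symbol}

tools = {"get_stock_fundamentals": get_stock_fundamentals}

# response_text would be the raw generation from the model (see the example above).
response_text = '<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call>'
for call in parse_tool_calls(response_text):
    result = tools[call["name"]](**call["arguments"])
    print(result)
```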
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, under which it responds with **only** a json object, following a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:
<schema>
{schema}
</schema><|im_end|>
```
Given the {schema} that you provide, it will follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
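As a concrete illustration, the sketch below builds such a system prompt from a pydantic model with `model_json_schema()` (pydantic v2); the helpers in the Hermes-Function-Calling repo may build the prompt differently, so treat the exact wiring here as an assumption:

```python
import json
from pydantic import BaseModel

class Character(BaseModel):
    name: str
    species: str
    abilities: list[str]

schema = json.dumps(Character.model_json_schema(), indent=2)

system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    "Here's the json schema you must adhere to:\n<schema>\n"
    f"{schema}\n</schema>"
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Describe a video game character as JSON."},
]
```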
# Benchmark Details
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6638|_ |0.0138|
| | |acc_norm|0.6903|_ |0.0135|
|arc_easy | 0|acc |0.8851|_ |0.0065|
| | |acc_norm|0.8712|_ |0.0069|
|boolq | 1|acc |0.8820|_ |0.0056|
|hellaswag | 0|acc |0.6579|_ |0.0047|
| | |acc_norm|0.8432|_ |0.0036|
|openbookqa | 0|acc |0.3920|_ |0.0219|
| | |acc_norm|0.4740|_ |0.0224|
|piqa | 0|acc |0.8286|_ |0.0088|
| | |acc_norm|0.8351|_ |0.0087|
|winogrande | 0|acc |0.7893|_ |0.0115|
```
Average: 76.93
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.4055|_ |0.0309|
| | |acc_norm|0.4094|_ |0.0309|
|agieval_logiqa_en | 0|acc |0.5100|_ |0.0196|
| | |acc_norm|0.5023|_ |0.0196|
|agieval_lsat_ar | 0|acc |0.2783|_ |0.0296|
| | |acc_norm|0.2957|_ |0.0302|
|agieval_lsat_lr | 0|acc |0.7451|_ |0.0193|
| | |acc_norm|0.7333|_ |0.0196|
|agieval_lsat_rc | 0|acc |0.8290|_ |0.0230|
| | |acc_norm|0.8104|_ |0.0239|
|agieval_sat_en | 0|acc |0.9029|_ |0.0207|
| | |acc_norm|0.9029|_ |0.0207|
|agieval_sat_en_without_passage| 0|acc |0.5825|_ |0.0344|
| | |acc_norm|0.5631|_ |0.0346|
|agieval_sat_math | 0|acc |0.6318|_ |0.0326|
| | |acc_norm|0.6227|_ |0.0328|
```
Average: 60.50
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6737|_ |0.0341|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7724|_ |0.0219|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3256|_ |0.0292|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4763|_ |0.0264|
| | |exact_str_match |0.0000|_ |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.4720|_ |0.0223|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.3486|_ |0.0180|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.6367|_ |0.0278|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.5220|_ |0.0224|
|bigbench_navigate | 0|multiple_choice_grade|0.5930|_ |0.0155|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.8600|_ |0.0078|
|bigbench_ruin_names | 0|multiple_choice_grade|0.7411|_ |0.0207|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.5281|_ |0.0158|
|bigbench_snarks | 0|multiple_choice_grade|0.6961|_ |0.0343|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5751|_ |0.0158|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.9880|_ |0.0034|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2296|_ |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1691|_ |0.0090|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.6367|_ |0.0278|
```
Average: 56.91
## TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4565|_ |0.0174|
| | |mc2 |0.6288|_ |0.0151|
```
62.88
## IFEval:
**87.99**
## MTBench:
First Turn - **9.1625**
Second Turn - **8.925**
Average - **9.04375**
# Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: even in 4-bit, a 70B model requires roughly 40GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-70B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-70B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Theta-Llama-3-70B,
      url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B},
title={Hermes-2-Theta-Llama-3-70B},
author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
| null |
Non_BioNLP
|
Quantized to exl2 using [Exllamav2 0.1.5](https://github.com/turboderp/exllamav2)
Original model [NousResearch/Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B)
# Hermes 2 Theta Llama-3 70B Model Card

## Model Description
Hermes-2 Θ (Theta) 70B is the continuation of our experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard and [Arcee AI](https://www.arcee.ai/), the team behind [MergeKit](https://github.com/arcee-ai/mergekit).
Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, combining the best of both worlds of each model.
# Comparison to Llama-3 70B Instruct Model:

# Example Outputs
## System Prompt: Roleplay as an Anime Catgirl who's good at programming and a hacker.

## Providing Structured Outputs for Annotating LLM Training Data

## System Prompt: Roleplay as a bombastic alchemist from the 17th century in France. You are on a journey with the user to find the philosopher's stone.

# Prompt Format & Capabilities
Hermes 2 Θ uses ChatML as the prompt format, opening up a very structured and steerable, multiturn system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, in which special tokens are added to denote the beginning and end of any turn, along with roles for the turns.
The model is also specifically trained for Function Calling, Structured Outputs with JSON, and Feature Extraction from RAG Documents (see below).
## Conversational Chats
Here is an example prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
## Function Calling Format
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
<tools>
{"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}}
</tools>
Use the following pydantic model json schema for each tool call you will make:
{"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"}
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function.
See example inference code here: https://github.com/NousResearch/Hermes-Function-Calling:
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, under which it responds with **only** a json object, following a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:
<schema>
{schema}
</schema><|im_end|>
```
Given the {schema} that you provide, it will follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
# Benchmark Details
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6638|_ |0.0138|
| | |acc_norm|0.6903|_ |0.0135|
|arc_easy | 0|acc |0.8851|_ |0.0065|
| | |acc_norm|0.8712|_ |0.0069|
|boolq | 1|acc |0.8820|_ |0.0056|
|hellaswag | 0|acc |0.6579|_ |0.0047|
| | |acc_norm|0.8432|_ |0.0036|
|openbookqa | 0|acc |0.3920|_ |0.0219|
| | |acc_norm|0.4740|_ |0.0224|
|piqa | 0|acc |0.8286|_ |0.0088|
| | |acc_norm|0.8351|_ |0.0087|
|winogrande | 0|acc |0.7893|_ |0.0115|
```
Average: 76.93
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.4055|_ |0.0309|
| | |acc_norm|0.4094|_ |0.0309|
|agieval_logiqa_en | 0|acc |0.5100|_ |0.0196|
| | |acc_norm|0.5023|_ |0.0196|
|agieval_lsat_ar | 0|acc |0.2783|_ |0.0296|
| | |acc_norm|0.2957|_ |0.0302|
|agieval_lsat_lr | 0|acc |0.7451|_ |0.0193|
| | |acc_norm|0.7333|_ |0.0196|
|agieval_lsat_rc | 0|acc |0.8290|_ |0.0230|
| | |acc_norm|0.8104|_ |0.0239|
|agieval_sat_en | 0|acc |0.9029|_ |0.0207|
| | |acc_norm|0.9029|_ |0.0207|
|agieval_sat_en_without_passage| 0|acc |0.5825|_ |0.0344|
| | |acc_norm|0.5631|_ |0.0346|
|agieval_sat_math | 0|acc |0.6318|_ |0.0326|
| | |acc_norm|0.6227|_ |0.0328|
```
Average: 60.50
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6737|_ |0.0341|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7724|_ |0.0219|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3256|_ |0.0292|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4763|_ |0.0264|
| | |exact_str_match |0.0000|_ |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.4720|_ |0.0223|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.3486|_ |0.0180|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.6367|_ |0.0278|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.5220|_ |0.0224|
|bigbench_navigate | 0|multiple_choice_grade|0.5930|_ |0.0155|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.8600|_ |0.0078|
|bigbench_ruin_names | 0|multiple_choice_grade|0.7411|_ |0.0207|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.5281|_ |0.0158|
|bigbench_snarks | 0|multiple_choice_grade|0.6961|_ |0.0343|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5751|_ |0.0158|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.9880|_ |0.0034|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2296|_ |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1691|_ |0.0090|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.6367|_ |0.0278|
```
Average: 56.91
## TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4565|_ |0.0174|
| | |mc2 |0.6288|_ |0.0151|
```
62.88
## IFEval:
**87.99**
## MTBench:
First Turn - **9.1625**
Second Turn - **8.925**
Average - **9.04375**
# Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: in 4-bit, a 70B model will require roughly 40GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-70B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-70B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Theta. It does not support function calling - for that, use our GitHub repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Theta-Llama-3-70B,
url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B},
title={Hermes-2-Theta-Llama-3-70B},
author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
|
{"language": ["en"], "license": "llama3", "pipeline_tag": "text-generation", "tags": ["distillation", "synthetic data", "function calling", "structured outputs", "json mode"]}
|
task
|
[
"TRANSLATION"
] | 43,516 |
Supabase/bge-small-en
|
Supabase
|
feature-extraction
|
[
"transformers.js",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"en",
"license:mit",
"region:us"
] | 2023-08-08T20:56:28Z |
2024-03-18T18:03:59+00:00
| 154 | 1 |
---
language:
- en
library_name: transformers.js
license: mit
pipeline_tag: feature-extraction
---
_Fork of https://huggingface.co/BAAI/bge-small-en with ONNX weights to be compatible with Transformers.js. See [JavaScript usage](#javascript)._
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#license">License</a>
<p>
</h4>
For more details please refer to our GitHub repo: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector which can be used for tasks like retrieval, classification, clustering, or semantic search.
And it also can be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | Description | query instruction for retrieval |
|:-------------------------------|:--------:| :--------:| :--------:|
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | rank **2nd** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | Chinese | This model is trained without instruction, and ranks **2nd** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | a base-scale model but has similar ability with `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
## Usage
This model can be used with both [Python](#python) and [JavaScript](#javascript).
### Python
#### Use with [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md)
```
pip install -U FlagEmbedding
```
See [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences = ["样例数据-1", "样例数据-2"]
model = FlagModel('Supabase/bge-small-en', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
embeddings = model.encode(sentences)
print(embeddings)
# for retrieval task, please use encode_queries() which will automatically add the instruction to each query
# corpus in retrieval task can still use encode() or encode_corpus()
queries = ['query_1', 'query_2']
passages = ["样例段落-1", "样例段落-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
FlagModel will use all available GPUs when encoding; please set `os.environ["CUDA_VISIBLE_DEVICES"]` to choose which GPU to use.
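For example, a minimal way to restrict encoding to a single GPU (illustrative only; the GPU index is an example, and the variable must be set before the model is created):
```python
# Illustrative only: pin encoding to GPU 0 by setting the variable
# before the model is created.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from FlagEmbedding import FlagModel
model = FlagModel('Supabase/bge-small-en',
                  query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
```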
#### Use with [sentence-transformers](https://www.sbert.net/)
Using this model is also easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences = ["样例数据-1", "样例数据-2"]
model = SentenceTransformer('Supabase/bge-small-en')
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings)
```
For retrieval tasks,
each query should start with an instruction (for the instructions, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)).
```python
from sentence_transformers import SentenceTransformer
queries = ["手机开不了机怎么办?"]
passages = ["样例段落-1", "样例段落-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('Supabase/bge-small-en')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Use with [Transformers](https://huggingface.co/docs/transformers/index) and [PyTorch](https://pytorch.org/docs/stable/index.html)
With the transformers package, you can use the model like this: first, pass your input through the transformer model, then select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Supabase/bge-small-en')
model = AutoModel.from_pretrained('Supabase/bge-small-en')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for retrieval task, add an instruction to query
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
### JavaScript
This model can be used with JavaScript via [Transformers.js](https://huggingface.co/docs/transformers.js/index).
#### Use with [Deno](https://deno.land/manual/introduction) or [Supabase Edge Functions](https://supabase.com/docs/guides/functions)
```ts
import { serve } from 'https://deno.land/[email protected]/http/server.ts'
import { env, pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/[email protected]'
// Configuration for Deno runtime
env.useBrowserCache = false;
env.allowLocalModels = false;
const pipe = await pipeline(
'feature-extraction',
'Supabase/bge-small-en',
);
serve(async (req) => {
// Extract input string from JSON body
const { input } = await req.json();
// Generate the embedding from the user input
const output = await pipe(input, {
pooling: 'mean',
normalize: true,
});
// Extract the embedding output
const embedding = Array.from(output.data);
// Return the embedding
return new Response(
JSON.stringify({ embedding }),
{ headers: { 'Content-Type': 'application/json' } }
);
});
```
#### Use within the browser ([JavaScript Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules))
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/[email protected]';
const pipe = await pipeline(
'feature-extraction',
'Supabase/bge-small-en',
);
// Generate the embedding from text
const output = await pipe('Hello world', {
pooling: 'mean',
normalize: true,
});
// Extract the embedding output
const embedding = Array.from(output.data);
console.log(embedding);
</script>
```
#### Use within [Node.js](https://nodejs.org/en/docs) or a web bundler ([Webpack](https://webpack.js.org/concepts/), etc)
```js
import { pipeline } from '@xenova/transformers';
const pipe = await pipeline(
'feature-extraction',
'Supabase/bge-small-en',
);
// Generate the embedding from text
const output = await pipe('Hello world', {
pooling: 'mean',
normalize: true,
});
// Extract the embedding output
const embedding = Array.from(output.data);
console.log(embedding);
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**bge-large-en**](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | **63.98** | **53.9** | **46.98** | 85.8 | **59.48** | 81.56 | 32.06 | **76.21** |
| [**bge-base-en**](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [**bge-small-en**](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [bge-small-en](https://huggingface.co/thenlper/bge-small-en) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 384 | 512 | 56.53 | 42.69 | 41.81 | 82.41 | 58.44 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 384 | 512 | 56.26 | 41.95 | 42.35 | 82.37 | 58.04 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 768 | 512 | 56.00 | 41.88 | 41.1 | 82.54 | 53.14 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 768 | 512 | 55.27 | 33.63 | 40.21 | 85.18 | 53.09 | 81.14 | 31.39 | 69.81 |
- **C-MTEB**:
We create a benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**bge-large-zh**](https://huggingface.co/BAAI/bge-large-zh) | 1024 | **64.20** | **71.53** | **53.23** | **78.94** | 72.26 | **65.11** | 48.39 |
| [**bge-large-zh-noinstruct**](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 50.98 | 76.77 | **72.49** | 64.91 | **50.01** |
| [**BAAI/bge-base-zh**](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 52.05 | 77.5 | 70.98 | 64.91 | 47.63 |
| [**BAAI/bge-small-zh**](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 46.87 | 70.35 | 67.78 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 |56.91 | 48.15 | 63.99 | 70.28 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 |54.75 | 48.64 | 64.3 | 71.22 | 59.66 | 48.88 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 40.61 | 69.56 | 67.38 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 39.41 | 66.62 | 65.29 | 49.25 | 44.39 |
| [text2vec](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 41.71 | 67.41 | 65.18 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 41.98 | 70.86 | 63.42 | 49.16 | 30.02 |
## Train
This section will introduce the way we used to train the general embedding.
The training scripts are in [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md),
and we provide some examples to do [pre-train](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/pretrain/README.md) and [fine-tune](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/finetune/README.md).
**1. RetroMAE Pre-train**
We pre-train the model following the method [retromae](https://github.com/staoxiao/RetroMAE),
which shows promising improvement in retrieval task ([paper](https://aclanthology.org/2022.emnlp-main.35.pdf)).
The pre-training was conducted on 24 A100 (40G) GPUs with a batch size of 720.
In RetroMAE, the mask ratios of the encoder and decoder are 0.3 and 0.5, respectively.
We used the AdamW optimizer with a learning rate of 2e-5.
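As a rough sketch of the asymmetric masking described above (illustrative only; the actual implementation is in the [RetroMAE](https://github.com/staoxiao/RetroMAE) repository):
```python
# Illustrative sketch of the asymmetric masking described above; the real
# RetroMAE implementation is at https://github.com/staoxiao/RetroMAE.
import torch

def mask_tokens(input_ids: torch.Tensor, mask_ratio: float, mask_token_id: int) -> torch.Tensor:
    """Randomly replace a fraction of tokens with the [MASK] id."""
    masked = input_ids.clone()
    masked[torch.rand(input_ids.shape) < mask_ratio] = mask_token_id
    return masked

input_ids = torch.randint(5, 30000, (1, 128))      # a dummy tokenized sequence
encoder_input = mask_tokens(input_ids, 0.3, 103)   # encoder sees 30% masked tokens
decoder_input = mask_tokens(input_ids, 0.5, 103)   # decoder sees 50% masked tokens
```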
**Pre-training data**:
- English:
- [Pile](https://pile.eleuther.ai/)
- [wikipedia](https://huggingface.co/datasets/wikipedia)
- [msmarco](https://huggingface.co/datasets/Tevatron/msmarco-passage-corpus)
- Chinese:
- Subset of [wudao](https://github.com/BAAI-WuDao/Data)
- [baidu-baike](https://baike.baidu.com/)
**2. Finetune**
We fine-tune the model using a contrastive objective.
The format of the input data is a triple `(query, positive, negative)`.
Besides the negative in the triple, we also adopt an in-batch negatives strategy.
We employ the cross-device negatives sharing method to share negatives among different GPUs,
which can dramatically **increase the number of negatives**.
We trained our model on 48 A100 (40G) GPUs with a large batch size of 32,768 (so there are **65,535** negatives for each query in a batch).
We used the AdamW optimizer with a learning rate of 1e-5.
The temperature for the contrastive loss is 0.01.
For the versions with `*-instruction`, we add the instruction to the query for the retrieval task during training.
For English, the instruction is `Represent this sentence for searching relevant passages: `;
for Chinese, the instruction is `为这个句子生成表示以用于检索相关文章:`.
During evaluation, the instruction should be added for sentence-to-passage retrieval tasks, but not for other tasks.
The finetune script is accessible in this repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
You can easily finetune your model with it.
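To make the objective concrete, here is a rough sketch of an InfoNCE-style contrastive loss with in-batch negatives and a temperature of 0.01 (illustrative only; the actual training code is in the FlagEmbedding repository):
```python
# Illustrative sketch of the contrastive objective (in-batch negatives,
# temperature 0.01); the real training code is in the FlagEmbedding repository.
import torch
import torch.nn.functional as F

def contrastive_loss(q_emb, p_emb, n_emb, temperature=0.01):
    """q_emb: (B, d) queries; p_emb: (B, d) positives; n_emb: (B, d) hard negatives."""
    q = F.normalize(q_emb, dim=-1)
    candidates = F.normalize(torch.cat([p_emb, n_emb], dim=0), dim=-1)  # (2B, d)
    logits = q @ candidates.T / temperature                             # (B, 2B)
    # the positive for query i is candidate i; every other passage in the
    # batch (including the hard negatives) serves as a negative
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```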
**Training data**:
- For English, we collect 230M text pairs from [wikipedia](https://huggingface.co/datasets/wikipedia), [cc-net](https://github.com/facebookresearch/cc_net), and so on.
- For Chinese, we collect 120M text pairs from [wudao](https://github.com/BAAI-WuDao/Data), [simclue](https://github.com/CLUEbenchmark/SimCLUE), and so on.
**The data collection is to be released in the future.**
We will continually update the embedding models and training codes,
hoping to promote the development of the embedding model community.
## License
FlagEmbedding is licensed under [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
| null |
Non_BioNLP
|
_Fork of https://huggingface.co/BAAI/bge-small-en with ONNX weights to be compatible with Transformers.js. See [JavaScript usage](#javascript)._
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#license">License</a>
<p>
</h4>
For more details please refer to our GitHub repo: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector which can be used for tasks like retrieval, classification, clustering, or semantic search.
And it also can be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | Description | query instruction for retrieval |
|:-------------------------------|:--------:| :--------:| :--------:|
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | rank **2nd** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | Chinese | This model is trained without instruction, and ranks **2nd** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | a base-scale model but has similar ability with `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
## Usage
This model can be used with both [Python](#python) and [JavaScript](#javascript).
### Python
#### Use with [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md)
```
pip install -U FlagEmbedding
```
See [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences = ["样例数据-1", "样例数据-2"]
model = FlagModel('Supabase/bge-small-en', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
embeddings = model.encode(sentences)
print(embeddings)
# for retrieval task, please use encode_queries() which will automatically add the instruction to each query
# corpus in retrieval task can still use encode() or encode_corpus()
queries = ['query_1', 'query_2']
passages = ["样例段落-1", "样例段落-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
FlagModel will use all available GPUs when encoding; please set `os.environ["CUDA_VISIBLE_DEVICES"]` to choose which GPU to use.
#### Use with [sentence-transformers](https://www.sbert.net/)
Using this model is also easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences = ["样例数据-1", "样例数据-2"]
model = SentenceTransformer('Supabase/bge-small-en')
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings)
```
For retrieval tasks,
each query should start with an instruction (for the instructions, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)).
```python
from sentence_transformers import SentenceTransformer
queries = ["手机开不了机怎么办?"]
passages = ["样例段落-1", "样例段落-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('Supabase/bge-small-en')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Use with [Transformers](https://huggingface.co/docs/transformers/index) and [PyTorch](https://pytorch.org/docs/stable/index.html)
With the transformers package, you can use the model like this: first, pass your input through the transformer model, then select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Supabase/bge-small-en')
model = AutoModel.from_pretrained('Supabase/bge-small-en')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for retrieval task, add an instruction to query
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
### JavaScript
This model can be used with JavaScript via [Transformers.js](https://huggingface.co/docs/transformers.js/index).
#### Use with [Deno](https://deno.land/manual/introduction) or [Supabase Edge Functions](https://supabase.com/docs/guides/functions)
```ts
import { serve } from 'https://deno.land/[email protected]/http/server.ts'
import { env, pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/[email protected]'
// Configuration for Deno runtime
env.useBrowserCache = false;
env.allowLocalModels = false;
const pipe = await pipeline(
'feature-extraction',
'Supabase/bge-small-en',
);
serve(async (req) => {
// Extract input string from JSON body
const { input } = await req.json();
// Generate the embedding from the user input
const output = await pipe(input, {
pooling: 'mean',
normalize: true,
});
// Extract the embedding output
const embedding = Array.from(output.data);
// Return the embedding
return new Response(
JSON.stringify({ embedding }),
{ headers: { 'Content-Type': 'application/json' } }
);
});
```
#### Use within the browser ([JavaScript Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules))
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/[email protected]';
const pipe = await pipeline(
'feature-extraction',
'Supabase/bge-small-en',
);
// Generate the embedding from text
const output = await pipe('Hello world', {
pooling: 'mean',
normalize: true,
});
// Extract the embedding output
const embedding = Array.from(output.data);
console.log(embedding);
</script>
```
#### Use within [Node.js](https://nodejs.org/en/docs) or a web bundler ([Webpack](https://webpack.js.org/concepts/), etc)
```js
import { pipeline } from '@xenova/transformers';
const pipe = await pipeline(
'feature-extraction',
'Supabase/bge-small-en',
);
// Generate the embedding from text
const output = await pipe('Hello world', {
pooling: 'mean',
normalize: true,
});
// Extract the embedding output
const embedding = Array.from(output.data);
console.log(embedding);
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**bge-large-en**](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | **63.98** | **53.9** | **46.98** | 85.8 | **59.48** | 81.56 | 32.06 | **76.21** |
| [**bge-base-en**](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [**bge-small-en**](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [bge-small-en](https://huggingface.co/thenlper/bge-small-en) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 384 | 512 | 56.53 | 42.69 | 41.81 | 82.41 | 58.44 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 384 | 512 | 56.26 | 41.95 | 42.35 | 82.37 | 58.04 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 768 | 512 | 56.00 | 41.88 | 41.1 | 82.54 | 53.14 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 768 | 512 | 55.27 | 33.63 | 40.21 | 85.18 | 53.09 | 81.14 | 31.39 | 69.81 |
- **C-MTEB**:
We create a benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**bge-large-zh**](https://huggingface.co/BAAI/bge-large-zh) | 1024 | **64.20** | **71.53** | **53.23** | **78.94** | 72.26 | **65.11** | 48.39 |
| [**bge-large-zh-noinstruct**](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 50.98 | 76.77 | **72.49** | 64.91 | **50.01** |
| [**BAAI/bge-base-zh**](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 52.05 | 77.5 | 70.98 | 64.91 | 47.63 |
| [**BAAI/bge-small-zh**](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 46.87 | 70.35 | 67.78 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 |56.91 | 48.15 | 63.99 | 70.28 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 |54.75 | 48.64 | 64.3 | 71.22 | 59.66 | 48.88 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 40.61 | 69.56 | 67.38 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 39.41 | 66.62 | 65.29 | 49.25 | 44.39 |
| [text2vec](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 41.71 | 67.41 | 65.18 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 41.98 | 70.86 | 63.42 | 49.16 | 30.02 |
## Train
This section will introduce the way we used to train the general embedding.
The training scripts are in [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md),
and we provide some examples to do [pre-train](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/pretrain/README.md) and [fine-tune](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/finetune/README.md).
**1. RetroMAE Pre-train**
We pre-train the model following the method [retromae](https://github.com/staoxiao/RetroMAE),
which shows promising improvement in retrieval task ([paper](https://aclanthology.org/2022.emnlp-main.35.pdf)).
The pre-training was conducted on 24 A100 (40G) GPUs with a batch size of 720.
In RetroMAE, the mask ratios of the encoder and decoder are 0.3 and 0.5, respectively.
We used the AdamW optimizer with a learning rate of 2e-5.
**Pre-training data**:
- English:
- [Pile](https://pile.eleuther.ai/)
- [wikipedia](https://huggingface.co/datasets/wikipedia)
- [msmarco](https://huggingface.co/datasets/Tevatron/msmarco-passage-corpus)
- Chinese:
- Subset of [wudao](https://github.com/BAAI-WuDao/Data)
- [baidu-baike](https://baike.baidu.com/)
**2. Finetune**
We fine-tune the model using a contrastive objective.
The format of the input data is a triple `(query, positive, negative)`.
Besides the negative in the triple, we also adopt an in-batch negatives strategy.
We employ the cross-device negatives sharing method to share negatives among different GPUs,
which can dramatically **increase the number of negatives**.
We trained our model on 48 A100 (40G) GPUs with a large batch size of 32,768 (so there are **65,535** negatives for each query in a batch).
We used the AdamW optimizer with a learning rate of 1e-5.
The temperature for the contrastive loss is 0.01.
For the versions with `*-instruction`, we add the instruction to the query for the retrieval task during training.
For English, the instruction is `Represent this sentence for searching relevant passages: `;
for Chinese, the instruction is `为这个句子生成表示以用于检索相关文章:`.
During evaluation, the instruction should be added for sentence-to-passage retrieval tasks, but not for other tasks.
The finetune script is accessible in this repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
You can easily finetune your model with it.
**Training data**:
- For English, we collect 230M text pairs from [wikipedia](https://huggingface.co/datasets/wikipedia), [cc-net](https://github.com/facebookresearch/cc_net), and so on.
- For Chinese, we collect 120M text pairs from [wudao](https://github.com/BAAI-WuDao/Data), [simclue](https://github.com/CLUEbenchmark/SimCLUE), and so on.
**The data collection is to be released in the future.**
We will continually update the embedding models and training codes,
hoping to promote the development of the embedding model community.
## License
FlagEmbedding is licensed under [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
|
{"language": ["en"], "library_name": "transformers.js", "license": "mit", "pipeline_tag": "feature-extraction"}
|
task
|
[
"SUMMARIZATION"
] | 43,517 |
heegyu/1223-Synatra-Yi-Ko-6B-mt-en2ko-2e-5
|
heegyu
|
text-generation
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"dataset:squarelike/sharegpt_deepl_ko_translation",
"dataset:heegyu/glaive-function-calling-v2-ko-mt",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-12-25T09:52:13Z |
2023-12-29T09:39:30+00:00
| 11 | 1 |
---
datasets:
- squarelike/sharegpt_deepl_ko_translation
- heegyu/glaive-function-calling-v2-ko-mt
license: cc-by-sa-4.0
---
- A model created by fine-tuning [maywell/Synatra-Yi-Ko-6B](https://huggingface.co/maywell/Synatra-Yi-Ko-6B) on translation data with LoRA and then merging the adapter weights into the base model.
## Prompt and Generation Example
````
<|im_start|>system
사용자가 입력한 마크다운 영어 대화를 한국어로 번역하세요. 프로그램 코드는 번역하면 안됩니다. <|im_end|>
<|im_start|>user
### User:
Can you write a python Hello World example?
### Assistant:
Certainly! Here's a simple "Hello, World!" example in Python:
```python
print("Hello, World!")
```
To run this code, you can follow these steps:
1. Open a text editor or integrated development environment (IDE) on your computer.
2. Type the code above into a new file.
3. Save the file with a ".py" extension, for example, "hello.py".
4. Open your command prompt or terminal.
5. Navigate to the directory where you saved the "hello.py" file.
6. Run the Python script by entering the following command:
```bash
python hello.py
```
You should see the output "Hello, World!" displayed in the terminal. This is a basic example to get you started with Python programming.
<|im_start|>assistant
### User:
````
Generation result:
````
파이썬 Hello World 예제를 작성해 주세요?
### Assistant:
물론이죠! 여기 파이썬의 간단한 "Hello, World!" 예제가 있습니다:
```python
print("Hello, World!")
```
이 코드를 실행하려면 다음 단계를 따르세요:
1. 컴퓨터에서 텍스트 편집기 또는 통합 개발 환경(IDE)을 엽니다.
2. 위의 코드를 새 파일에 입력합니다.
3. 예를 들어 "hello.py"인 파일 이름 끝에 ".py" 확장자를 지정하여 파일을 저장합니다.
4. 명령 프롬프트 또는 터미널을 엽니다.
5. "hello.py" 파일이 저장된 디렉토리 마우스 오른쪽 상단 모서리로 이동합니다.
6. 다음 명령을 입력하여 Python 스크립트를 실행합니다:
```bash
python hello.py
```
터미널에 "Hello, World!"라는 출력이 표시됩니다. 이것은 파이썬 프로그래밍을 시작하는 데 도움이 되는 기본적인 예제입니다.<|endoftext|>
````
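A minimal sketch of running the model with Hugging Face Transformers follows (the loading and generation parameters are illustrative assumptions, not taken from the model card; the prompt must follow the format shown above):
```python
# Illustrative sketch; generation settings are assumptions, not official values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heegyu/1223-Synatra-Yi-Ko-6B-mt-en2ko-2e-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

english_markdown = "### User:\nCan you write a python Hello World example?\n"
prompt = (
    "<|im_start|>system\n"
    "사용자가 입력한 마크다운 영어 대화를 한국어로 번역하세요. 프로그램 코드는 번역하면 안됩니다. <|im_end|>\n"
    "<|im_start|>user\n"
    f"{english_markdown}"
    "<|im_start|>assistant\n"
    "### User:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```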
| null |
Non_BioNLP
|
- A model created by fine-tuning [maywell/Synatra-Yi-Ko-6B](https://huggingface.co/maywell/Synatra-Yi-Ko-6B) on translation data with LoRA and then merging the adapter weights into the base model.
## Prompt and Generation Example
````
<|im_start|>system
사용자가 입력한 마크다운 영어 대화를 한국어로 번역하세요. 프로그램 코드는 번역하면 안됩니다. <|im_end|>
<|im_start|>user
### User:
Can you write a python Hello World example?
### Assistant:
Certainly! Here's a simple "Hello, World!" example in Python:
```python
print("Hello, World!")
```
To run this code, you can follow these steps:
1. Open a text editor or integrated development environment (IDE) on your computer.
2. Type the code above into a new file.
3. Save the file with a ".py" extension, for example, "hello.py".
4. Open your command prompt or terminal.
5. Navigate to the directory where you saved the "hello.py" file.
6. Run the Python script by entering the following command:
```bash
python hello.py
```
You should see the output "Hello, World!" displayed in the terminal. This is a basic example to get you started with Python programming.
<|im_start|>assistant
### User:
````
Generation result:
````
파이썬 Hello World 예제를 작성해 주세요?
### Assistant:
물론이죠! 여기 파이썬의 간단한 "Hello, World!" 예제가 있습니다:
```python
print("Hello, World!")
```
이 코드를 실행하려면 다음 단계를 따르세요:
1. 컴퓨터에서 텍스트 편집기 또는 통합 개발 환경(IDE)을 엽니다.
2. 위의 코드를 새 파일에 입력합니다.
3. 예를 들어 "hello.py"인 파일 이름 끝에 ".py" 확장자를 지정하여 파일을 저장합니다.
4. 명령 프롬프트 또는 터미널을 엽니다.
5. "hello.py" 파일이 저장된 디렉토리 마우스 오른쪽 상단 모서리로 이동합니다.
6. 다음 명령을 입력하여 Python 스크립트를 실행합니다:
```bash
python hello.py
```
터미널에 "Hello, World!"라는 출력이 표시됩니다. 이것은 파이썬 프로그래밍을 시작하는 데 도움이 되는 기본적인 예제입니다.<|endoftext|>
````
|
{"datasets": ["squarelike/sharegpt_deepl_ko_translation", "heegyu/glaive-function-calling-v2-ko-mt"], "license": "cc-by-sa-4.0"}
|
task
|
[
"TRANSLATION"
] | 43,518 |
gaudi/opus-mt-en-crs-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-18T14:57:53Z |
2024-10-19T00:07:42+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-crs)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs; see the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-crs).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-crs --output_dir ./ctranslate2/opus-mt-en-crs-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-crs-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-crs-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-crs-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-crs) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-crs)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs; see the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-crs).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-crs --output_dir ./ctranslate2/opus-mt-en-crs-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-crs-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-crs-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-crs-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-crs) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 43,519 |
aiola/whisper-ner-v1
|
aiola
|
automatic-speech-recognition
|
[
"safetensors",
"whisper",
"asr",
"Automatic Speech Recognition",
"Whisper",
"Named entity recognition",
"automatic-speech-recognition",
"en",
"dataset:numind/NuNER",
"arxiv:2409.08107",
"license:mit",
"region:us"
] | 2024-09-23T06:38:18Z |
2024-11-21T19:08:38+00:00
| 154 | 22 |
---
datasets:
- numind/NuNER
language:
- en
license: mit
pipeline_tag: automatic-speech-recognition
tags:
- asr
- Automatic Speech Recognition
- Whisper
- Named entity recognition
---
# Whisper-NER
- Demo: https://huggingface.co/spaces/aiola/whisper-ner-v1
- Paper: [_WhisperNER: Unified Open Named Entity and Speech Recognition_](https://arxiv.org/abs/2409.08107).
- Code: https://github.com/aiola-lab/whisper-ner
We introduce WhisperNER, a novel model that allows joint speech transcription and entity recognition.
WhisperNER supports open-type NER, enabling recognition of diverse and evolving entities at inference.
The WhisperNER model is designed as a strong base model for the downstream task of ASR with NER, and can be fine-tuned on specific datasets for improved performance.
---------
## Training Details
`aiola/whisper-ner-v1` was trained on the NuNER dataset to perform joint audio transcription and NER tagging.
The model was trained and evaluated only on English data. Check out the [paper](https://arxiv.org/abs/2409.08107) for full details.
---------
## Usage
Inference can be done using the following code (for additional inference code and more details, check out the [whisper-ner repo](https://github.com/aiola-lab/whisper-ner)):
```python
import torch
import torchaudio
from transformers import WhisperProcessor, WhisperForConditionalGeneration
model_path = "aiola/whisper-ner-v1"
audio_file_path = "path/to/audio/file"
prompt = "person, company, location" # comma separated entity tags
# load model and processor from pre-trained
processor = WhisperProcessor.from_pretrained(model_path)
model = WhisperForConditionalGeneration.from_pretrained(model_path)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
# load audio file: user is responsible for loading the audio files themselves
target_sample_rate = 16000
signal, sampling_rate = torchaudio.load(audio_file_path)
resampler = torchaudio.transforms.Resample(sampling_rate, target_sample_rate)
signal = resampler(signal)
# convert to mono or remove first dim if needed
if signal.ndim == 2:
signal = torch.mean(signal, dim=0)
# pre-process to get the input features
input_features = processor(
signal, sampling_rate=target_sample_rate, return_tensors="pt"
).input_features
input_features = input_features.to(device)
prompt_ids = processor.get_prompt_ids(prompt.lower(), return_tensors="pt")
prompt_ids = prompt_ids.to(device)
# generate token ids by running model forward sequentially
with torch.no_grad():
predicted_ids = model.generate(
input_features,
prompt_ids=prompt_ids,
generation_config=model.generation_config,
language="en",
)
# post-process token ids to text, remove prompt
transcription = processor.batch_decode(
predicted_ids, skip_special_tokens=True
)[0]
print(transcription)
```
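Because WhisperNER supports open-type NER, the prompt is just a comma-separated list of entity tags and can be swapped freely at inference time; the tags below are arbitrary examples, not taken from the original card:
```python
# Any comma-separated entity tags can be supplied; these are arbitrary examples.
prompt = "date, medication, organization"
prompt_ids = processor.get_prompt_ids(prompt.lower(), return_tensors="pt").to(device)
```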
| null |
Non_BioNLP
|
# Whisper-NER
- Demo: https://huggingface.co/spaces/aiola/whisper-ner-v1
- Paper: [_WhisperNER: Unified Open Named Entity and Speech Recognition_](https://arxiv.org/abs/2409.08107).
- Code: https://github.com/aiola-lab/whisper-ner
We introduce WhisperNER, a novel model that allows joint speech transcription and entity recognition.
WhisperNER supports open-type NER, enabling recognition of diverse and evolving entities at inference.
The WhisperNER model is designed as a strong base model for the downstream task of ASR with NER, and can be fine-tuned on specific datasets for improved performance.
---------
## Training Details
`aiola/whisper-ner-v1` was trained on the NuNER dataset to perform joint audio transcription and NER tagging.
The model was trained and evaluated only on English data. Check out the [paper](https://arxiv.org/abs/2409.08107) for full details.
---------
## Usage
Inference can be done using the following code (for additional inference code and more details, check out the [whisper-ner repo](https://github.com/aiola-lab/whisper-ner)):
```python
import torch
import torchaudio
from transformers import WhisperProcessor, WhisperForConditionalGeneration
model_path = "aiola/whisper-ner-v1"
audio_file_path = "path/to/audio/file"
prompt = "person, company, location" # comma separated entity tags
# load model and processor from pre-trained
processor = WhisperProcessor.from_pretrained(model_path)
model = WhisperForConditionalGeneration.from_pretrained(model_path)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
# load audio file: user is responsible for loading the audio files themselves
target_sample_rate = 16000
signal, sampling_rate = torchaudio.load(audio_file_path)
resampler = torchaudio.transforms.Resample(sampling_rate, target_sample_rate)
signal = resampler(signal)
# convert to mono or remove first dim if needed
if signal.ndim == 2:
signal = torch.mean(signal, dim=0)
# pre-process to get the input features
input_features = processor(
signal, sampling_rate=target_sample_rate, return_tensors="pt"
).input_features
input_features = input_features.to(device)
prompt_ids = processor.get_prompt_ids(prompt.lower(), return_tensors="pt")
prompt_ids = prompt_ids.to(device)
# generate token ids by running model forward sequentially
with torch.no_grad():
predicted_ids = model.generate(
input_features,
prompt_ids=prompt_ids,
generation_config=model.generation_config,
language="en",
)
# post-process token ids to text, remove prompt
transcription = processor.batch_decode(
predicted_ids, skip_special_tokens=True
)[0]
print(transcription)
```
|
{"datasets": ["numind/NuNER"], "language": ["en"], "license": "mit", "pipeline_tag": "automatic-speech-recognition", "tags": ["asr", "Automatic Speech Recognition", "Whisper", "Named entity recognition"]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 43,520 |
xusenlin/duee-gplinker
|
xusenlin
| null |
[
"transformers",
"pytorch",
"bert",
"event extraction",
"zh",
"dataset:DuEE",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-06-19T14:22:12Z |
2023-06-19T14:53:10+00:00
| 28 | 0 |
---
datasets:
- DuEE
language:
- zh
license: apache-2.0
metrics:
- f1
tags:
- event extraction
---
# GPLinker Event Extraction Model
## Model Introduction
+ Dataset: Baidu `DUEE` information extraction
+ Method: [GPLinker: Joint Event Extraction Based on GlobalPointer](https://spaces.ac.cn/archives/8926)
## Usage
```commandline
pip install litie
```
```python
from pprint import pprint
from litie.pipelines import EventExtractionPipeline
pipeline = EventExtractionPipeline("gplinker", model_name_or_path="xusenlin/duee-gplinker", model_type="bert")
text = "油服巨头哈里伯顿裁员650人 因美国油气开采活动放缓。"
pprint(pipeline(text))
# Output
[
[
{
"event_type": "组织关系-裁员",
"arguments": [
{
"role": "裁员人数",
"argument": "650人"
},
{
"role": "裁员方",
"argument": "油服巨头哈里伯顿"
}
]
}
]
]
```
Detailed code for model training and inference can be found in [litie](https://github.com/xusenlinzy/lit-ie).
| null |
Non_BioNLP
|
# GPLinker Event Extraction Model
## Model Introduction
+ Dataset: Baidu `DUEE` information extraction
+ Method: [GPLinker: Joint Event Extraction Based on GlobalPointer](https://spaces.ac.cn/archives/8926)
## Usage
```commandline
pip install litie
```
```python
from pprint import pprint
from litie.pipelines import EventExtractionPipeline
pipeline = EventExtractionPipeline("gplinker", model_name_or_path="xusenlin/duee-gplinker", model_type="bert")
text = "油服巨头哈里伯顿裁员650人 因美国油气开采活动放缓。"
pprint(pipeline(text))
# Output
[
[
{
"event_type": "组织关系-裁员",
"arguments": [
{
"role": "裁员人数",
"argument": "650人"
},
{
"role": "裁员方",
"argument": "油服巨头哈里伯顿"
}
]
}
]
]
```
Detailed code for model training and inference can be found in [litie](https://github.com/xusenlinzy/lit-ie).
|
{"datasets": ["DuEE"], "language": ["zh"], "license": "apache-2.0", "metrics": ["f1"], "tags": ["event extraction"]}
|
task
|
[
"EVENT_EXTRACTION"
] | 43,521 |
wellmadenametag/results
|
wellmadenametag
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-19T19:41:10Z |
2023-06-19T19:51:28+00:00
| 8 | 0 |
---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
- precision
- recall
tags:
- generated_from_trainer
model-index:
- name: results
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train[:5000]
args: plain_text
metrics:
- type: accuracy
value: 1.0
name: Accuracy
- type: f1
value: 0.0
name: F1
- type: precision
value: 0.0
name: Precision
- type: recall
value: 0.0
name: Recall
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 0.0
- Precision: 0.0
- Recall: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
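Expressed with the `transformers` API, the settings above map roughly onto the sketch below (the output directory is an assumption, and the Adam betas/epsilon listed above correspond to the Trainer's defaults):
```python
from transformers import TrainingArguments

# Rough mapping of the hyperparameters listed above; output_dir is an assumed name,
# and the Adam betas/epsilon match the Trainer's default optimizer settings.
training_args = TrainingArguments(
    output_dir="results",              # assumed
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
)
```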
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:|
| 0.0091 | 0.43 | 100 | 0.0066 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0012 | 0.85 | 200 | 0.0009 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0004 | 1.28 | 300 | 0.0002 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 1.7 | 400 | 0.0001 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 2.13 | 500 | 0.0000 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 2.55 | 600 | 0.0000 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 2.98 | 700 | 0.0000 | 1.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 0.0
- Precision: 0.0
- Recall: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:|
| 0.0091 | 0.43 | 100 | 0.0066 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0012 | 0.85 | 200 | 0.0009 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0004 | 1.28 | 300 | 0.0002 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 1.7 | 400 | 0.0001 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 2.13 | 500 | 0.0000 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 2.55 | 600 | 0.0000 | 1.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 2.98 | 700 | 0.0000 | 1.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1", "precision", "recall"], "tags": ["generated_from_trainer"], "model-index": [{"name": "results", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "train[:5000]", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}, {"type": "f1", "value": 0.0, "name": "F1"}, {"type": "precision", "value": 0.0, "name": "Precision"}, {"type": "recall", "value": 0.0, "name": "Recall"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,522 |
NikiPshg/g2p_with_stress
|
NikiPshg
| null |
[
"g2p",
"grapheme",
"phoneme",
"text2text",
"text-generation-inference",
"en",
"license:mit",
"region:us"
] | 2024-08-05T19:13:23Z |
2024-11-01T22:38:16+00:00
| 0 | 0 |
---
language:
- en
license: mit
metrics:
- wer
tags:
- g2p
- grapheme
- phoneme
- text2text
- text-generation-inference
---
# Grapheme to Phoneme (G2P) with Stress
This project provides a Grapheme to Phoneme (G2P) conversion tool that first checks the CMU Pronouncing Dictionary for phoneme translations. If a word is not found in the dictionary, it utilizes two Transformer-based models to generate phoneme translations and add stress markers. The output is in ARPAbet format, and the model can also convert graphemes into phoneme integer indices.
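As a rough illustration of that lookup-then-fallback flow (a simplified sketch only, not the package's actual code; the toy dictionary and the placeholder model call below are made up for illustration):
```python
# Simplified sketch of the dictionary-first flow described above.
# CMU_DICT and model_g2p are illustrative stand-ins, not the project's real components.
CMU_DICT = {"text": ["T", "EH1", "K", "S", "T"]}  # toy excerpt of a pronouncing dictionary

def model_g2p(word):
    # Placeholder for the Transformer-based phoneme + stress models.
    return [f"<predicted phonemes for {word}>"]

def to_phonemes(word):
    word = word.lower()
    if word in CMU_DICT:           # 1) dictionary lookup first
        return CMU_DICT[word]
    return model_g2p(word)         # 2) fall back to the neural models

print(to_phonemes("text"))        # -> ['T', 'EH1', 'K', 'S', 'T']
print(to_phonemes("frobulate"))   # -> placeholder model output
```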
## Features
1. **CMU Pronouncing Dictionary Integration**: First checks the CMU dictionary for phoneme translations.
2. **Transformer-Based Conversion**:
- **Phoneme Generation**: The first Transformer model converts graphemes into phonemes.
- **Stress Addition**: The second Transformer model adds stress markers to the phonemes.
3. **ARPAbet Output**: Outputs phonemes in ARPAbet format.
4. **Phoneme Integer Indices**: Converts graphemes to phoneme integer indices.
5. A BPE tokenizer was used, which led to better translation quality.
## Installation
1. Clone the repository:
```sh
git clone https://github.com/NikiPshg/Grapheme-to-Phoneme-G2P-with-Stress.git
cd Grapheme-to-Phoneme-G2P-with-Stress
```
2. Install the required dependencies:
```sh
pip install -r requirements.txt
```
### Example
```python
from G2P_lexicon import g2p_en_lexicon
# Initialize the G2P converter
g2p = g2p_en_lexicon()
# Convert a word to phonemes
text = "text, numbers, and some strange symbols !№;% 21"
phonemes = g2p(text, with_stress=False)
# ['T', 'EH', 'K', 'S', 'T', ' ', ',', ' ',
#  'N', 'AH', 'M', 'B', 'ER', 'Z', ' ', ',', ' ',
#  'AE', 'N', 'D', ' ', 'S', 'AH', 'M', ' ',
#  'S', 'T', 'R', 'EY', 'N', 'JH', ' ',
#  'S', 'IH', 'M', 'B', 'AH', 'L', 'Z', ' ',
#  'T', 'W', 'EH', 'N', 'IY', ' ', 'W', 'AH', 'N']
```
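Continuing the example above, stress markers can be requested with `with_stress=True`; the exact output is not reproduced here, but stressed ARPAbet adds stress digits to the vowel symbols:
```python
# Same text as above, but with stress markers enabled.
phonemes_stressed = g2p(text, with_stress=True)
print(phonemes_stressed)
```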
| null |
Non_BioNLP
|
# Grapheme to Phoneme (G2P) with Stress
This project provides a Grapheme to Phoneme (G2P) conversion tool that first checks the CMU Pronouncing Dictionary for phoneme translations. If a word is not found in the dictionary, it utilizes two Transformer-based models to generate phoneme translations and add stress markers. The output is in ARPAbet format, and the model can also convert graphemes into phoneme integer indices.
## Features
1. **CMU Pronouncing Dictionary Integration**: First checks the CMU dictionary for phoneme translations.
2. **Transformer-Based Conversion**:
- **Phoneme Generation**: The first Transformer model converts graphemes into phonemes.
- **Stress Addition**: The second Transformer model adds stress markers to the phonemes.
3. **ARPAbet Output**: Outputs phonemes in ARPAbet format.
4. **Phoneme Integer Indices**: Converts graphemes to phoneme integer indices.
5. A BPE tokenizer was used, which led to better translation quality.
## Installation
1. Clone the repository:
```sh
git clone https://github.com/NikiPshg/Grapheme-to-Phoneme-G2P-with-Stress.git
cd Grapheme-to-Phoneme-G2P-with-Stress
```
2. Install the required dependencies:
```sh
pip install -r requirements.txt
```
### Example
```python
from G2P_lexicon import g2p_en_lexicon
# Initialize the G2P converter
g2p = g2p_en_lexicon()
# Convert a word to phonemes
text = "text, numbers, and some strange symbols !№;% 21"
phonemes = g2p(text, with_stress=False)
# ['T', 'EH', 'K', 'S', 'T', ' ', ',', ' ',
#  'N', 'AH', 'M', 'B', 'ER', 'Z', ' ', ',', ' ',
#  'AE', 'N', 'D', ' ', 'S', 'AH', 'M', ' ',
#  'S', 'T', 'R', 'EY', 'N', 'JH', ' ',
#  'S', 'IH', 'M', 'B', 'AH', 'L', 'Z', ' ',
#  'T', 'W', 'EH', 'N', 'IY', ' ', 'W', 'AH', 'N']
```
|
{"language": ["en"], "license": "mit", "metrics": ["wer"], "tags": ["g2p", "grapheme", "phoneme", "text2text", "text-generation-inference"]}
|
task
|
[
"TRANSLATION"
] | 43,523 |
phospho-app/phospho-small-4e0ec73
|
phospho-app
|
feature-extraction
|
[
"transformers",
"safetensors",
"mpnet",
"feature-extraction",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-04-30T11:03:16Z |
2024-04-30T11:04:45+00:00
| 6 | 0 |
---
language: en
license: apache-2.0
---
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("phospho-small-4e0ec73")
outputs = model.predict(["This is a sentence to classify", "Another sentence"])
# tensor([1, 0])
```
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
| null |
Non_BioNLP
|
# phospho-small
This is a SetFit model that can be used for Text Classification on CPU.
The model has been trained using an efficient few-shot learning technique.
## Usage
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("phospho-small-4e0ec73")
outputs = model.predict(["This is a sentence to classify", "Another sentence"])
# tensor([1, 0])
```
## References
This work was possible thanks to the SetFit library and the work of:
Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.
ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
|
{"language": "en", "license": "apache-2.0"}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,524 |
sgadagin/fine_tuned_sbert
|
sgadagin
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:3742",
"loss:SoftmaxLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-03-14T00:25:19Z |
2025-03-14T00:25:25+00:00
| 4 | 0 |
---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:3742
- loss:SoftmaxLoss
widget:
- source_sentence: 'As the year draws to a close, we have seen the number of emerging
threats like advance phishing attacks from the Syrian Electronic Army, financial
malware and exploit kits, Cryptolocker ransomware infections, massive Bitcoin
theft, extensive privacy breach from NSA and many more.
The financial malware''s were the most popular threat this year. Money is always
a perfect motivation for attackers and cyber criminals who are continually targeting
financial institutions.
On Tuesday, Antivirus firm Symantec has released a Threat report, called "The
State of Financial Trojans: 2013", which revealed that over 1,400 financial institutions
have been targeted and compromised millions of computers around the globe and
the most targeted banks are in the US with 71.5% of all analyzed Trojans.
Financial institutions have been fighting against malware for the last ten years
to protect their customers and online transactions from threat. Over the time
the attackers adapted to these countermeasures and sophisticated banking Trojans
began to emerge.
According to the report, the number of infections of the most common financial
Trojans grew to 337 percent in the first nine months of 2013. Nearly 1,500 institutions
in 88 countries were potential targets during 2013.
The financial fraud marketplace is also increasingly organized and Cyber criminals
are using advanced Trojans to commit large scale attacks.
Attackers of all skill levels can enter the arena of financial fraud, as the underground
marketplace is a service industry that provides an abundance of resources. Those
who lack expertise can simply purchase what they need. For as little as $100,
an attacker can avail of a leaked Zeus or Spyeye equipped with Web-injects.
The modern financial Trojan is extremely flexible, supporting a range of functionality
designed to facilitate fraudulent transactions across a variety of services.
Two dominant attack strategies are:
Focused attack: This approach suits attackers with limited resources but also
scales well to larger operations. If the distribution is accurate and the target
institution has a sizeable client base, a focused attack can provide an adequate
supply of targets. Shylock, Bebloh and Tilon all use this approach exclusively.
Broad strokes: In this attack strategy, Trojans are set to target large numbers
of institutions. Tilon, Cridex, and Gameover adopt these tactics and Zeus also
uses this approach in its default configuration.
According to Symantec, the main reason for the surge is weak authentication practices:
Unfortunately, in many situations, security implementations adopted by financial
institutions are inadequate to defend against the modern financial Trojan. Institutions
are starting to adopt strong security measures like chipTAN, but the adoption
rate is slow. Institutions that persist with weaker security measures will continue
to be exploited by attackers.
They need to maintain constant vigilance, apply software updates, maintain an
awareness of new threats and deploy complementary security solutions that can
defend against evolving malware attacks.
'
sentences:
- 'As the year draws to a close, we have seen the number of emerging threats like
advance phishing attacks from the Syrian Electronic Army, financial malware and
exploit kits, Cryptolocker ransomware infections, massive Bitcoin theft, extensive
privacy breach from NSA and many more.
The financial malware''s were the most popular threat this year. Money is always
a perfect motivation for attackers and cyber criminals who are continually targeting
financial institutions.
On Tuesday, Antivirus firm Symantec has released a Threat report, called "The
State of Financial Trojans: 2013", which revealed that over 1,400 financial institutions
have been targeted and compromised millions of computers around the globe and
the most targeted banks are in the US with 71.5% of all analyzed Trojans.
Financial institutions have been fighting against malware for the last ten years
to protect their customers and online transactions from threat. Over the time
the attackers adapted to these countermeasures and sophisticated banking Trojans
began to emerge.
According to the report, the number of infections of the most common financial
Trojans grew to 337 percent in the first nine months of 2013. Nearly 1,500 institutions
in 88 countries were potential targets during 2013.
The financial fraud marketplace is also increasingly organized and Cyber criminals
are using advanced Trojans to commit large scale attacks.
Attackers of all skill levels can enter the arena of financial fraud, as the underground
marketplace is a service industry that provides an abundance of resources. Those
who lack expertise can simply purchase what they need. For as little as $100,
an attacker can avail of a leaked Zeus or Spyeye equipped with Web-injects.
The modern financial Trojan is extremely flexible, supporting a range of functionality
designed to facilitate fraudulent transactions across a variety of services.
Two dominant attack strategies are:
Focused attack: This approach suits attackers with limited resources but also
scales well to larger operations. If the distribution is accurate and the target
institution has a sizeable client base, a focused attack can provide an adequate
supply of targets. Shylock, Bebloh and Tilon all use this approach exclusively.
Broad strokes: In this attack strategy, Trojans are set to target large numbers
of institutions. Tilon, Cridex, and Gameover adopt these tactics and Zeus also
uses this approach in its default configuration.
According to Symantec, the main reason for the surge is weak authentication practices:
Unfortunately, in many situations, security implementations adopted by financial
institutions are inadequate to defend against the modern financial Trojan. Institutions
are starting to adopt strong security measures like chipTAN, but the adoption
rate is slow. Institutions that persist with weaker security measures will continue
to be exploited by attackers.
They need to maintain constant vigilance, apply software updates, maintain an
awareness of new threats and deploy complementary security solutions that can
defend against evolving malware attacks.
'
- 'While Windows users are currently in fear of getting their systems hijacked by
the WannaCry ransomware outbreak, Apple users are sitting relaxed, thinking that
malware attacks are something that happens to Windows users, and not Apple.
But you are mistaken – Apple products are also not immune to the hack attacks
and malware infections, as an ebook can hack your Mac, iPhone, and iPad.
Apple on Monday pushed out software updates for iOS, macOS, Safari, tvOS, iCloud,
iTunes, and watchOS to fix a total of 67 unique security vulnerabilities, many
of which allows attackers to perform remote code execution on an affected system.
iOS is 10.3.2 for iPhone, iPad, and iPod
Apple''s mobile operating system iOS 10.3.2 for the iPhone, iPad and iPod touch
addresses 41 security flaws, 23 of which resides in WebKit, including 17 remote
code execution and 5 cross-site scripting (XSS) vulnerabilities.
Besides this, iOS 10.3.2 also addresses a pair of flaws in iBooks for iOS (CVE-2017-2497,
CVE-2017-6981) that could allow e-books to open arbitrary websites and execute
malicious code with root privileges.
Other flaws addressed in iOS 10.3.2 include a memory corruption issue in AVE Video
Encoder that could allow a malicious application to gain kernel-level privileges,
and a certificate validation issue in the certificate trust policy for handling
of untrusted certificates.
Apple users can install iOS 10.3.2 by connecting their iOS devices to iTunes or
downloading it directly by going to the Settings → General → Software Update.
macOS Sierra 10.12.5 for El Capitan and Yosemite
Apple''s Mac operating system macOS Sierra 10.12.5 addresses a total of 37 vulnerabilities,
including a pair of bugs in iBook that allow the execution of arbitrary code with
root privileges, and a separate bug in iBook that allows an application to escape
its secure sandbox.
Other flaws addressed in macOS Sierra 10.12.5 include a Wi-Fi networking issue
that allows the theft of network credentials, elevation of privilege bugs in both
the Intel and Nvidia graphics drivers, and four different arbitrary code execution
flaws in SQLite.
Mac users can download the update through the App Store → Updates. Alternatively,
macOS Sierra users can be download Sierra 10.12.5 as a stand-alone update, OS
X El Capitan users can download the update here, and OS X Yosemite users can get
the security update here.
Safari 10.1.1 for Apple Browser
Safari 10.1.1 addresses a total of 26 security issues, 23 of which resides in
WebKit, many of which are also patched in the iOS 10.3.2.
Rest three vulnerabilities are patched in the Safari browser itself.
The Safari 10.1.1 update can be downloaded by going to the App Store → Updates
on El Capitan and Yosemite systems.
watchOS 3.2.2 for Apple Watch
Apple Watch users should install watchOS 3.2.2 that patches a total of 12 security
vulnerabilities, four of which could be used by attackers to execute remote code
execution on the affected device.
Users of Apple Watch can download watchOS 3.2.2 by connecting their watch to its
charger, and opening the Apple Watch app → My Watch tab → General → Software Update
on their iPhone.
tvOS 10.2.1 for Apple TV
Apple has also released tvOS 10.2.1 to patch a total of 23 vulnerabilities, 12
of which resides in WebKit engine that could allow an attacker to perform cross-site
scripting and remote code execution attacks on a target device.
The tvOS 10.2.1 update can be downloaded directly from the Apple TV by going to
Settings → System → Update Software.
iTunes 12.6.1 for Windows and iCloud for Windows 6.2.1
Meanwhile, Apple also released patches for Windows users using iTunes and iCloud.
Both iTunes 12.6.1 and iCloud 6.2.1 patches a single remote code execution bug
in WebKit for Windows 7 and later.
Apple users are recommended to update all their operating systems for Apple products
and Safari as soon as possible before cyber criminals exploited them. Patches
are available through automatic updates.
'
- 'A really bad year for the world''s second-largest email service provider, Yahoo
Mail! The company announced today, ''we identified a coordinated effort to gain
unauthorized access to Yahoo Mail accounts'', user names and passwords of its
email customers have been stolen and are used to access multiple accounts.
Yahoo did not say how many accounts have been affected, and neither they are sure
about the source of the leaked users'' credentials. It appears to have come from
a third party database being compromised, and not an infiltration of Yahoo''s
own servers.
"We have no evidence that they were obtained directly from Yahoo''s systems. Our
ongoing investigation shows that malicious computer software used the list of
usernames and passwords to access Yahoo Mail accounts. The information sought
in the attack seems to be names and email addresses from the affected accounts''
most recent sent emails."
For now, Yahoo is taking proactive actions to protect their affected users, "We
are resetting passwords on impacted accounts and we are using second sign-in verification
to allow users to re-secure their accounts. Impacted users will be prompted (if
not, already) to change their password and may receive an email notification or
an SMS text if they have added a mobile number to their account."
People frequently use the same passwords on multiple accounts, so possibly hackers
are brute-forcing Yahoo accounts with the user credentials stolen from other data
breaches.
Yahoo users can prevent account hijacks by using a strong and unique password.
You can use ''Random strong password generator'' feature of DuckDuckGo search
engine to get a unique & strong password.
Users are also recommended to enable two-factor authentication, which requires
a code texted to the legitimate user''s mobile phone whenever a login attempt
is made from a new computer.
Yahoo! was hacked in July 2012, with attackers stealing 450,000 email addresses
and passwords from a Yahoo! contributor network.
Readers can also download two free Whitepaper related to the Email and account
security:
Cloud-Based Email Archiving
Email Data Loss Prevention
Well, Yahoo is now working with federal law enforcement as a part of its investigation.
'
- source_sentence: 'Security researchers have spotted a new malware campaign in the
wild that spreads an advanced botnet malware by leveraging at least three recently
disclosed vulnerabilities in Microsoft Office.
Dubbed Zyklon, the fully-featured malware has resurfaced after almost two years
and primarily found targeting telecommunications, insurance and financial services.
Active since early 2016, Zyklon is an HTTP botnet malware that communicates with
its command-and-control servers over Tor anonymising network and allows attackers
to remotely steal keylogs, sensitive data, like passwords stored in web browsers
and email clients.
Zyklon malware is also capable of executing additional plugins, including secretly
using infected systems for DDoS attacks and cryptocurrency mining.
Different versions of the Zyklon malware has previously been found being advertised
on a popular underground marketplace for $75 (normal build) and $125 ( Tor-enabled
build).
According to a recently published report by FireEye, the attackers behind the
campaign are leveraging three following vulnerabilities in Microsoft Office that
execute a PowerShell script on the targeted computers to download the final payload
from its C&C server.
1) .NET Framework RCE Vulnerability (CVE-2017-8759)—this remote code execution
vulnerability exists when Microsoft .NET Framework processes untrusted input,
allowing an attacker to take control of an affected system by tricking victims
into opening a specially crafted malicious document file sent over an email. Microsoft
already released a security patch for this flaw in September updates.
2) Microsoft Office RCE Vulnerability (CVE-2017-11882)—it''s a 17-year-old memory
corruption flaw that Microsoft patched in November patch update allows a remote
attacker to execute malicious code on the targeted systems without requiring any
user interaction after opening a malicious document.
3) Dynamic Data Exchange Protocol (DDE Exploit)—this technique allows attackers
to leverage a built-in feature of Microsoft Office, called DDE, to perform code
execution on the targeted device without requiring Macros to be enabled or memory
corruption.
As explained by the researchers, attackers are actively exploiting these three
vulnerabilities to deliver Zyklon malware using spear phishing emails, which typically
arrives with an attached ZIP file containing a malicious Office doc file.
Once opened, the malicious doc file equipped with one of these vulnerabilities
immediately runs a PowerShell script, which eventually downloads the final payload,
i.e., Zyklon HTTP malware, onto the infected computer.
"In all these techniques, the same domain is used to download the next level payload
(Pause.ps1), which is another PowerShell script that is Base64 encoded," the FireEye
researchers said.
"The Pause.ps1 script is responsible for resolving the APIs required for code
injection. It also contains the injectable shellcode."
"The injected code is responsible for downloading the final payload from the server.
The final stage payload is a PE executable compiled with .Net framework."
Interestingly, the PowerShell script connects to a dotless IP address (example:
https://3627732942) to download the final payload.
What is Dotless IP Address? If you are unaware, dotless IP addresses, sometimes
referred as ''Decimal Address,'' are decimal values of IPv4 addresses (represented
as dotted-quad notation). Almost all modern web browsers resolve decimal IP address
to its equivalent IPV4 address when opened with "https://" following the decimal
value.
For example, Google''s IP address 216.58.207.206 can also be represented as https://3627732942
in decimal values (Try this online converter).
The best way to protect yourself and your organisation from such malware attacks
are always to be suspicious of any uninvited document sent via an email and never
click on links inside those documents unless adequately verifying the source.
Most importantly, always keep your software and systems up-to-date, as threat
actors incorporate recently discovered, but patched, vulnerabilities in popular
software—Microsoft Office, in this case—to increase the potential for successful
infections.
'
sentences:
- 'India-linked highly targeted mobile malware campaign, first unveiled two weeks
ago, has been found to be part of a broader campaign targeting multiple platforms,
including windows devices and possibly Android as well.
As reported in our previous article, earlier this month researchers at Talos threat
intelligence unit discovered a group of Indian hackers abusing mobile device management
(MDM) service to hijack and spy on a few targeted iPhone users in India.
Operating since August 2015, the attackers have been found abusing MDM service
to remotely install malicious versions of legitimate apps, including Telegram,
WhatsApp, and PrayTime, onto targeted iPhones.
These modified apps have been designed to secretly spy on iOS users, and steal
their real-time location, SMS, contacts, photos and private messages from third-party
chatting applications.
During their ongoing investigation, Talos researchers identified a new MDM infrastructure
and several malicious binaries – designed to target victims running Microsoft
Windows operating systems – hosted on the same infrastructure used in previous
campaigns.
Ios-update-whatsapp[.]com (new)
Wpitcher[.]com
Ios-certificate-update.com
"We know that the MDM and the Windows services were up and running on the same
C2 server in May 2018," researchers said in a blog post published today.
"Some of the C2 servers are still up and running at this time. The Apache setup
is very specific, and perfectly matched the Apache setup of the malicious IPA
apps."
Possible Connections with "Bahamut Hacking Group"
Besides this, researchers also found some potential similarities that link this
campaign with an old hacking group, dubbed "Bahamut," an advanced threat actor
who was previously targeting Android devices using similar MDM technique as used
in the latest iOS malware campaign.
The newly identified MDM infrastructure, which was created in January 2018, and
used from January to March of this year, targeted two Indian devices and one located
in Qatar with a British phone number.
According to the researchers, Bahamut also targeted similar Qatar-based individuals
during their Android malware campaign, as detailed by Bellingcat in a blog post.
"Bahamut shared a domain name with one of the malicious iOS applications mentioned
in our previous post," researchers said.
"The new MDM platform we identified has similar victimology with Middle Eastern
targets, namely Qatar, using a U.K. mobile number issued from LycaMobile. Bahamut
targeted similar Qatar-based individuals during their campaign."
Apart from distributing modified Telegram and WhatsApp apps with malicious functionalities,
the newly-identified server also distributes modified versions of Safari browser
and IMO video chatting app to steal more personal information on victims.
Attackers Using Malicious Safari Browser to Steal Login Credentials
According to the researchers, the malicious Safari browser has been pre-configured
to automatically exfiltrate the username and the password of the users for a variety
of other web services, Yahoo, Rediff, Amazon, Google, Reddit, Baidu, ProtonMail,
Zoho, Tutanota and more.
"The malware continuously monitors a web page, seeking out the HTML form fields
that hold the username and password as the user types them in to steal credentials.
The names of the inspected HTML fields are embedded into the app alongside the
domain names," the researchers said.
The malicious browser contains three malicious plugins—Add Bookmark, Add To Favourites,
and Add to Reading List—that just like the other apps, send stolen data to a remote
attacker-controlled server.
At this time, it''s unclear who is behind the campaign, who was targeted in the
campaign, and what were the motives behind the attack, but the technical elements
suggest the attackers are operating from India, and are well-funded.
Researchers said that those infected with this kind of malware need to enroll
their devices, which means "they should be on the lookout at all times to avoid
accidental enrollment."
The best way to avoid being a victim to such attacks is to always download apps
from official app store.
'
- 'Security researchers have spotted a new malware campaign in the wild that spreads
an advanced botnet malware by leveraging at least three recently disclosed vulnerabilities
in Microsoft Office.
Dubbed Zyklon, the fully-featured malware has resurfaced after almost two years
and primarily found targeting telecommunications, insurance and financial services.
Active since early 2016, Zyklon is an HTTP botnet malware that communicates with
its command-and-control servers over Tor anonymising network and allows attackers
to remotely steal keylogs, sensitive data, like passwords stored in web browsers
and email clients.
Zyklon malware is also capable of executing additional plugins, including secretly
using infected systems for DDoS attacks and cryptocurrency mining.
Different versions of the Zyklon malware has previously been found being advertised
on a popular underground marketplace for $75 (normal build) and $125 ( Tor-enabled
build).
According to a recently published report by FireEye, the attackers behind the
campaign are leveraging three following vulnerabilities in Microsoft Office that
execute a PowerShell script on the targeted computers to download the final payload
from its C&C server.
1) .NET Framework RCE Vulnerability (CVE-2017-8759)—this remote code execution
vulnerability exists when Microsoft .NET Framework processes untrusted input,
allowing an attacker to take control of an affected system by tricking victims
into opening a specially crafted malicious document file sent over an email. Microsoft
already released a security patch for this flaw in September updates.
2) Microsoft Office RCE Vulnerability (CVE-2017-11882)—it''s a 17-year-old memory
corruption flaw that Microsoft patched in November patch update allows a remote
attacker to execute malicious code on the targeted systems without requiring any
user interaction after opening a malicious document.
3) Dynamic Data Exchange Protocol (DDE Exploit)—this technique allows attackers
to leverage a built-in feature of Microsoft Office, called DDE, to perform code
execution on the targeted device without requiring Macros to be enabled or memory
corruption.
As explained by the researchers, attackers are actively exploiting these three
vulnerabilities to deliver Zyklon malware using spear phishing emails, which typically
arrives with an attached ZIP file containing a malicious Office doc file.
Once opened, the malicious doc file equipped with one of these vulnerabilities
immediately runs a PowerShell script, which eventually downloads the final payload,
i.e., Zyklon HTTP malware, onto the infected computer.
"In all these techniques, the same domain is used to download the next level payload
(Pause.ps1), which is another PowerShell script that is Base64 encoded," the FireEye
researchers said.
"The Pause.ps1 script is responsible for resolving the APIs required for code
injection. It also contains the injectable shellcode."
"The injected code is responsible for downloading the final payload from the server.
The final stage payload is a PE executable compiled with .Net framework."
Interestingly, the PowerShell script connects to a dotless IP address (example:
https://3627732942) to download the final payload.
What is Dotless IP Address? If you are unaware, dotless IP addresses, sometimes
referred as ''Decimal Address,'' are decimal values of IPv4 addresses (represented
as dotted-quad notation). Almost all modern web browsers resolve decimal IP address
to its equivalent IPV4 address when opened with "https://" following the decimal
value.
For example, Google''s IP address 216.58.207.206 can also be represented as https://3627732942
in decimal values (Try this online converter).
The best way to protect yourself and your organisation from such malware attacks
are always to be suspicious of any uninvited document sent via an email and never
click on links inside those documents unless adequately verifying the source.
Most importantly, always keep your software and systems up-to-date, as threat
actors incorporate recently discovered, but patched, vulnerabilities in popular
software—Microsoft Office, in this case—to increase the potential for successful
infections.
'
- 'Attention WordPress users!
Your website could easily get hacked if you are using "Ultimate Addons for Beaver
Builder," or "Ultimate Addons for Elementor" and haven''t recently updated them
to the latest available versions.
Security researchers have discovered a critical yet easy-to-exploit authentication
bypass vulnerability in both widely-used premium WordPress plugins that could
allow remote attackers to gain administrative access to sites without requiring
any password.
What''s more worrisome is that opportunistic attackers have already started exploiting
this vulnerability in the wild within 2 days of its discovery in order to compromise
vulnerable WordPress websites and install a malicious backdoor for later access.
Both vulnerable plugins, made by software development company Brainstorm Force,
are currently powering over hundreds of thousands of WordPress websites using
Elementor and Beaver Builder frameworks, helping website admins and designers
extend the functionality of their websites with more widgets, modules, page templates.
Discovered by researchers at web security service MalCare, the vulnerability resides
in the way both plugins let WordPress account holders, including administrators,
authenticate via Facebook and Google login mechanisms.
Image credit: WebARX
According to the vulnerability''s advisory, due to lack of checks in the authentication
method when a user login via Facebook or Google, vulnerable plugins can be tricked
into allowing malicious users to login as any other targeted user without requiring
any password.
"However, the Facebook and Google authentication methods did not verify the token
returned by Facebook and Google, and since they don''t require a password, there
was no password check," explained WebARX researchers, who also analysed the flaw
and confirmed its active exploitation.
"To exploit the vulnerability, the hacker needs to use the email ID of an admin
user of the site. In most cases, this information can be retrieved fairly easily,"
MalCare said.
In an email to The Hacker News, WebARX confirmed that attackers are abusing this
flaw to install a fake SEO stats plugin after uploading a tmp.zip file on the
targeted WordPress server, which eventually drops a wp-xmlrpc.php backdoor file
to the root directory of the vulnerable site.
MalCare discovered this vulnerability on Wednesday that affects below-listed versions
of the plugins and reported it to the developers on the same day, who then quickly
addressed the issue and released patched versions of both within just 7 hours.
Ultimate Addons for Elementor <= 1.20.0
Ultimate Addons for Beaver Builder <= 1.24.0
The authentication bypass vulnerability has been patched with the release of "Ultimate
Addons for Elementor version 1.20.1" and "Ultimate Addons for Beaver Builder version
1.24.1," which affected websites are highly recommended to install as soon as
possible.
'
- source_sentence: 'Exclusive — If you have not updated your website to the latest
WordPress version 5.0.3, it''s a brilliant idea to upgrade the content management
software of your site now. From now, I mean immediately.
Cybersecurity researchers at RIPS Technologies GmbH today shared their latest
research with The Hacker News, revealing the existence of a critical remote code
execution vulnerability that affects all previous versions of WordPress content
management software released in the past 6 years.
The remote code execution attack, discovered and reported to the WordPress security
team late last year, can be exploited by a low privileged attacker with at least
an "author" account using a combination of two separate vulnerabilities—Path Traversal
and Local File Inclusion—that reside in the WordPress core.
The requirement of at least an author account reduces the severity of this vulnerability
to some extent, which could be exploited by a rogue content contributor or an
attacker who somehow manages to gain author''s credential using phishing, password
reuse or other attacks.
"An attacker who gains access to an account with at least author privileges on
a target WordPress site can execute arbitrary PHP code on the underlying server,
leading to a full remote takeover," Scannell says.
Video Demonstration — Here''s How the Attack Works
According to Simon Scannell, a researcher at RIPS Technologies GmbH, the attack
takes advantage of the way WordPress image management system handles Post Meta
entries used to store description, size, creator, and other meta information of
uploaded images.
Scannell found that a rogue or compromised author account can modify any entries
associated with an image and set them to arbitrary values, leading to the Path
Traversal vulnerability.
"The idea is to set _wp_attached_file to evil.jpg?shell.php, which would lead
to an HTTP request being made to the following URL: https://targetserver.com/wp-content/uploads/evil.jpg?shell.php,"
Scannell explains.
And, "it is still possible to plant the resulting image into any directory by
using a payload such as evil.jpg?/../../evil.jpg."
The Path Traversal flaw in combination with a local file inclusion flaw in theme
directory could then allow the attacker to execute arbitrary code on the targeted
server.
The attack, as shown in the proof-of-concept video shared by the researcher, can
be executed within seconds to gain complete control over a vulnerable WordPress
blog.
According to Scannell, the code execution attack became non-exploitable in WordPress
versions 5.0.1 and 4.9.9 after patch for another vulnerability was introduced
which prevented unauthorized users from setting arbitrary Post Meta entries.
However, the Path Traversal flaw is still unpatched even in the latest WordPress
version and can be exploited if any installed 3rd-party plugin incorrectly handles
Post Meta entries.
Scannell confirmed that the next release of WordPress would include a fix to completely
address the issue demonstrated by the researcher.
'
sentences:
- 'Exclusive — If you have not updated your website to the latest WordPress version
5.0.3, it''s a brilliant idea to upgrade the content management software of your
site now. From now, I mean immediately.
Cybersecurity researchers at RIPS Technologies GmbH today shared their latest
research with The Hacker News, revealing the existence of a critical remote code
execution vulnerability that affects all previous versions of WordPress content
management software released in the past 6 years.
The remote code execution attack, discovered and reported to the WordPress security
team late last year, can be exploited by a low privileged attacker with at least
an "author" account using a combination of two separate vulnerabilities—Path Traversal
and Local File Inclusion—that reside in the WordPress core.
The requirement of at least an author account reduces the severity of this vulnerability
to some extent, which could be exploited by a rogue content contributor or an
attacker who somehow manages to gain author''s credential using phishing, password
reuse or other attacks.
"An attacker who gains access to an account with at least author privileges on
a target WordPress site can execute arbitrary PHP code on the underlying server,
leading to a full remote takeover," Scannell says.
Video Demonstration — Here''s How the Attack Works
According to Simon Scannell, a researcher at RIPS Technologies GmbH, the attack
takes advantage of the way WordPress image management system handles Post Meta
entries used to store description, size, creator, and other meta information of
uploaded images.
Scannell found that a rogue or compromised author account can modify any entries
associated with an image and set them to arbitrary values, leading to the Path
Traversal vulnerability.
"The idea is to set _wp_attached_file to evil.jpg?shell.php, which would lead
to an HTTP request being made to the following URL: https://targetserver.com/wp-content/uploads/evil.jpg?shell.php,"
Scannell explains.
And, "it is still possible to plant the resulting image into any directory by
using a payload such as evil.jpg?/../../evil.jpg."
The Path Traversal flaw in combination with a local file inclusion flaw in theme
directory could then allow the attacker to execute arbitrary code on the targeted
server.
The attack, as shown in the proof-of-concept video shared by the researcher, can
be executed within seconds to gain complete control over a vulnerable WordPress
blog.
According to Scannell, the code execution attack became non-exploitable in WordPress
versions 5.0.1 and 4.9.9 after patch for another vulnerability was introduced
which prevented unauthorized users from setting arbitrary Post Meta entries.
However, the Path Traversal flaw is still unpatched even in the latest WordPress
version and can be exploited if any installed 3rd-party plugin incorrectly handles
Post Meta entries.
Scannell confirmed that the next release of WordPress would include a fix to completely
address the issue demonstrated by the researcher.
'
- 'Android Security Squad, the China-based group that uncovered a second Android
master key vulnerability that might be abused to modify smartphone apps without
breaking their digital signatures.
The whole point of digitally signing a document or file is to prove the file hasn''t
been modified. The process uses a form of public-key cryptography. In Chinese
version of hacking attack, malicious code can be added into the file headers,
but the method is limited because targeted files need to be smaller than 64K in
size.
APK files are packed using a version of the widespread ZIP archiving algorithm.
Most ZIP implementations won''t permit two same-named files in one archive, but
the algorithm itself doesn''t forbid that possibility. So basically, two versions
of the classes.dex file are placed inside of the package, the original and a hacked
alternative.
When checking an app''s digital signature, the Android OS looks at the first matching
file, but when actually executing and launching the file, it grabs the last one.
To Trojanize an app, then, all you need to do is shoehorn your malicious code
into it using a name that already exists within the app.
The flaw is very similar to the first master key vulnerability recently announced
by researchers from mobile security firm Bluebox Security. According to BlueBox,
99% of Android devices are vulnerable to this attack. Google has already patched
the flaw and posted it to the Android Open Source Project (AOSP).
You can use ReKey, a free mobile app that''s designed to patch the Android master
key vulnerability that''s present in an estimated 900 million devices that run
Android and that could be exploited by attackers to take full control of a device.
Always get your apps from legitimate sources, always check to make sure the developer
name is valid, and configure your phone so it doesn''t permit installing apps
from unknown sources.
'
- 'Cyber criminals are using popular note-taking app Evernote as Command-and-Control
Server to give commands to the malware installed on infected PCs using botnets.
TrendMicro uncovered a malware detected as "BKDR_VERNOT.A" tried to communicate
with Command-and-Control Server using Evernote.
Malware delivered via an executable file that installs the malware as a dynamic-link
library. The installer then ties the DLL into a legitimate running process, hiding
it from casual detection. Once installed, BKDR_VERNOT.A can perform several backdoor
commands such as downloading, executing, and renaming files. It then gathers information
from the infected system, including details about its OS, timezone, user name,
computer name, registered owner and organization.
Researchers also pointed out that the backdoor may have also used Evernote as
a location to upload stolen data. "Unfortunately, during our testing, it was not
able to login using the credentials embedded in the malware. This is possibly
a security measure imposed by Evernote following its recent hacking issue."
"Though this is a clever maneuver to avoid detection, this is not the first time
that a legitimate service like Evernote was used as a method of evasion."
Like Evernote, Google Docs, Twitter and others have been misused in the past.
'
- source_sentence: 'U.S. has the top Security Agencies like NSA, FBI to tackle cyber
crime and terrorism with their high profile surveillance technologies, but even
after that U.S is proudly hosting 44% of the entire cloud based malware distribution.
With the enhancement in Internet technology, Cloud computing has shown the possibility
of existence and now has become an essential gradient for any Internet Identity.
Cloud services are designed in such a way that it is easy to maintain, use, configure
and can be scaled depending upon the requirement of the service being provided
using the CLOUD technology with cost effective manner.
Due to the Easy and Cost effective alternative of traditional computing, Malware
writers are using the big cloud hosting platforms to quickly and effectively serve
malware to Internet users, allowing them to bypass detection and geographic blacklisting
by serving from a trusted provider.
Hiding behind trusted domains and names is not something new. According to recently
published SERT Q4 2013 Threat Intelligence Report, the malware distributors are
using Cloud Services from Amazon, GoDaddy and Google like a legitimate customer,
allowing them to infect millions of computers and vast numbers of enterprise systems.
The Cloud-based hosting services let malware distributors to avoid the detection
because repeatedly changes IP addresses and domain names to avoid detection. Amazon
and GoDaddy were identified as the top malware-hosting providers, with a 16 percent
and a 14 percent share, respectively.
Major Additional findings include:
United States hosts 4.6 times more malware than the next leading country.
58% of malicious files obtained were identified as HTML files, 26% were directly
executable.
Many malware developers and distributors are utilizing social engineering tactics,
including the use of trusted keywords and services, to evade detection and increase
potential infection counts.
A single malicious domain was spread across 20 countries, 67 providers and 199
unique IPs evade detection.
The SERT Research team collected a large number of samples from more than 12,000
Registrars, 22,000 ISPs (Internet Service Providers) and tested all malicious
packages with more than 40 antivirus engines, output of which is concluded below:
The majority of the top malware sites is domains commonly associated with the
Potentially Unwanted Applications (PUA), more commonly known as adware, type of
malware distributions.
"Researchers found that a significant portion of the malware sampled consisted
of Microsoft Windows 32-bit Portable Executable (PE32) files being used to distribute
pay-per-install applications known as potentially unwanted applications (PUAs)."
The report claimed that these malware is undetectable from over 40 anti-virus
engines, that can act as a gateway for exploits and more than half of malware
found being distributed by HTML web pages.
'
sentences:
- 'U.S. has the top Security Agencies like NSA, FBI to tackle cyber crime and terrorism
with their high profile surveillance technologies, but even after that U.S is
proudly hosting 44% of the entire cloud based malware distribution.
With the enhancement in Internet technology, Cloud computing has shown the possibility
of existence and now has become an essential gradient for any Internet Identity.
Cloud services are designed in such a way that it is easy to maintain, use, configure
and can be scaled depending upon the requirement of the service being provided
using the CLOUD technology with cost effective manner.
Due to the Easy and Cost effective alternative of traditional computing, Malware
writers are using the big cloud hosting platforms to quickly and effectively serve
malware to Internet users, allowing them to bypass detection and geographic blacklisting
by serving from a trusted provider.
Hiding behind trusted domains and names is not something new. According to recently
published SERT Q4 2013 Threat Intelligence Report, the malware distributors are
using Cloud Services from Amazon, GoDaddy and Google like a legitimate customer,
allowing them to infect millions of computers and vast numbers of enterprise systems.
The Cloud-based hosting services let malware distributors to avoid the detection
because repeatedly changes IP addresses and domain names to avoid detection. Amazon
and GoDaddy were identified as the top malware-hosting providers, with a 16 percent
and a 14 percent share, respectively.
Major Additional findings include:
United States hosts 4.6 times more malware than the next leading country.
58% of malicious files obtained were identified as HTML files, 26% were directly
executable.
Many malware developers and distributors are utilizing social engineering tactics,
including the use of trusted keywords and services, to evade detection and increase
potential infection counts.
A single malicious domain was spread across 20 countries, 67 providers and 199
unique IPs evade detection.
The SERT Research team collected a large number of samples from more than 12,000
Registrars, 22,000 ISPs (Internet Service Providers) and tested all malicious
packages with more than 40 antivirus engines, output of which is concluded below:
The majority of the top malware sites is domains commonly associated with the
Potentially Unwanted Applications (PUA), more commonly known as adware, type of
malware distributions.
"Researchers found that a significant portion of the malware sampled consisted
of Microsoft Windows 32-bit Portable Executable (PE32) files being used to distribute
pay-per-install applications known as potentially unwanted applications (PUAs)."
The report claimed that these malware is undetectable from over 40 anti-virus
engines, that can act as a gateway for exploits and more than half of malware
found being distributed by HTML web pages.
'
- 'Windows 8 will be challenge for Malware writers
Microsoft™s security researcher believe that upcoming operating system, Windows
8 is a step forward in security and Windows 8 will be far better at protecting
against malware than it''s predecessors.
Chris Valasek, a senior security research scientist at development testing firm
Coverity, began examining the security features of Windows 8 last autumn, before
the consumer previews of the upcoming revamp of the new Microsoft OS came out.
"There are always going to be vulnerabilities but you can make it difficult to
leverage vulnerabilities to write exploits." One major change between Windows
7 and 8 is the addition of more exploit-mitigation technologies, however. Windows
Memory Managers (specifically the Windows Heap Manager and Windows Kernel Pool
Allocator) are designed to make it far harder for attackers to exploit buffer-overflow
vulnerabilities and the like to push malware onto vulnerable systems.
The "security sandbox" for applications for Windows 8 will also be a great step
forward. "These new Windows 8 Apps will be contained by a much more restrictive
security sandbox, which is a mechanism to prevent programs from performing certain
actions," Valasek explains.
"This new App Container provides the operating system with a way to make more
fine-grained decisions on what actions certain applications can perform, instead
of relying on the more broad ''Integrity Levels'' that debuted in Windows Vista/7.
Windows 8 also comes with a new version of Internet Explorer, Microsoft''s browser
software. Internet Explorer 10 will come with a mode that disables support for
third-party plug-ins such as Flash and Java.
'
- 'Ransomware, a threat to internet users that continues to grow in popularity with
cyber criminals due to its success and monetary potential. This is nothing new
and to be expected. I have noticed many discussions on underground hacking forums
about "How to create Ransomware like Cryptolocker malware" or "Malware - hacking
tool-kit with ransomware features".
Security intelligence provider, IntelCrawler has discovered a new ransomware variant
called Locker that demands $150 (£92) to restore files that it has encrypted.
Like Cryptolocker, this new ransomware is also nasty because infected users are
in danger of losing their personal files forever.
Locker mainly spreads by drive-by downloads from compromised websites, disguised
itself as MP3 files and use system software vulnerabilities to infect the end
user.
Once it has infected a system, malware first checks the infected machine has an
internet connection or not. Then it deletes any original files from the victim''s
computer after using AES-CTR for encrypting the files on infected devices and
add ". perfect" extension to them.
Locker''s encryption is based on an open source tool called ''TurboPower LockBox''
library. After encrypting all files, the malware place a "CONTACT.TXT" file in
each directory, which provides contact details of the author to buy the decryption
key and once the ransom is paid, each victim gets a key to unscramble files.
The good news is that the researchers are working on the universal decryption
software in order to help the victims. "It appears that the hackers are simply
comparing the list of infected IP addresses of users, along with their host names,"
according IntelCrawler.
IntelCrawler had discovered 50 different builds of the malware, which are being
sold in underground markets for pay-per install programs. One builds had just
under 6,000 infected machines. ZdNet reported.
Malware will encrypt all drives visible on an infected system, so you must be
sure that your backups are stored remotely or in a location that is not simply
another drive partition or mapping to another location.
The malware infects users from the United States, Turkey, Russia, Germany and
the Netherlands. Users should remain vigilant about their security. Please double
check the legitimacy of links received in emails and ensure you have your antivirus
up to date to help protect against such threats.
'
- source_sentence: 'Security Event : Hack In Paris (16-17 June, 2011)
Hack In Paris is an international and corporate security event that will take
place in Disneyland Paris® fromJune 16th to 17th of 2011. Please refer to the
homepage to get up-to-date information about the event.
Topics
The following list contains major topics the conference will cover. Please consider
submitting even if the subject of your research is not listed here.
Advances in reverse engineering
Vulnerability research and exploitation
Penetration testing and security assessment
Malware analysis and new trends in malicous codes
Forensics, IT crime & law enforcement
Privacy issues: LOPPSI, HADOPI, …
Low-level hacking (console security & mobile devices)
Risk management and ISO 27001
Dates
January 20: CFP announced
March 30: Submission deadline
April 15: Notification sent to authors
April 17: Program announcement
June 16-17: Hack In Paris
June 18: Nuit du Hack
More Information: https://hackinparis.com
'
sentences:
- 'It''s just two weeks into the Trump presidency, but his decisions have caused
utter chaos around the country.
One such order signed by the president was banning both refugees and visa holders
from seven Muslim-majority countries (Iraq, Iran, Libya, Yemen, Somalia, Syria,
and Sudan) from entering the United States, resulting in unexpectedly arrest of
some travelers at airports.
Now, it seems like some anti-Trump protesters have publically declared their fight
against the president by exploiting a known flaw in low power FM (LPFM) radio
transmitters to play a song the radio stations didn''t intend to broadcast.
Radio stations in South Carolina, Indiana, Texas, Tennessee and Kentucky, were
hacked recently to broadcast the Bompton-based rapper YG and Nipsey Hussle''s
anti-Trump song "Fuck Donald Trump," which was already a radio hit in some parts
of the country last year, several sources report.
The song was repeatedly played on Monday night, according to the RadioInsight,
and the news of the incident began emerging shortly after Trump''s inauguration
on January 20, eight days before hackers hacked 70 percent of the police CCTV
cameras in Washington DC.
Hackers gained access to the radio stations by exploiting known vulnerabilities
in Barix Exstreamer devices which can decode audio file formats and send them
along for LPFM transmission.
Over a dozen radio stations experienced the hack in recent weeks, though some
of them shut down their airwaves as quickly as possible in an attempt to avoid
playing the inflammatory "FDT (Fuck Donald Trump)" song on loop.
The hackers or group of hackers behind the cyber attack is still unknown. The
affected stations so far include:
105.9 WFBS-LP Salem, S.C.
Radio 810 WMGC/96.7 W244CW Murfreesboro TN
101.9 Pirate Seattle
100.9 WCHQ-LP Louisville
100.5 KCGF-LP San Angelo TX
However, there are unconfirmed reports from radio stations in California, Indiana,
and Washington State that are believed to be affected as well.
Has any of the radio stations you listen to been hit by the hackers? Let us know
in the comments!
'
- 'Google is going to shut down its social media network Google+ after the company
suffered a massive data breach that exposed the private data of hundreds of thousands
of Google Plus users to third-party developers.
According to the tech giant, a security vulnerability in one of Google+''s People
APIs allowed third-party developers to access data for more than 500,000 users,
including their usernames, email addresses, occupation, date of birth, profile
photos, and gender-related information.
Since Google+ servers do not keep API logs for more than two weeks, the company
cannot confirm the number of users impacted by the vulnerability.
However, Google assured its users that the company found no evidence that any
developer was aware of this bug, or that the profile data was misused by any of
the 438 developers that could have had access.
"However, we ran a detailed analysis over the two weeks prior to patching the
bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were
potentially affected. Our analysis showed that up to 438 applications may have
used this API," Google said in blog post published today.
The vulnerability was open since 2015 and fixed after Google discovered it in
March 2018, but the company chose not to disclose the breach to the public—at
the time when Facebook was being roasted for Cambridge Analytica scandal.
Though Google has not revealed the technical details of the security vulnerability,
the nature of the flaw seems to be something very similar to Facebook API flaw
that recently allowed unauthorized developers to access private data from Facebook
users.
Besides admitting the security breach, Google also announced that the company
is shutting down its social media network, acknowledging that Google+ failed to
gain broad adoption or significant traction with consumers.
"The consumer version of Google+ currently has low usage and engagement: 90 percent
of Google+ user sessions are less than five seconds," Google said.
In response, the company has decided to shut down Google+ for consumers by the
end of August 2019. However, Google+ will continue as a product for Enterprise
users.
Google Introduces New Privacy Controls Over Third-Party App Permissions
As part of its "Project Strobe," Google engineers also reviewed third-party developer
access to Google account and Android device data; and has accordingly now introduced
some new privacy controls.
When a third-party app prompts users for access to their Google account data,
clicking "Allow" button approves all requested permissions at once, leaving an
opportunity for malicious apps to trick users into giving away powerful permissions.
But now Google has updated its Account Permissions system that asks for each requested
permission individually rather than all at once, giving users more control over
what type of account data they choose to share with each app.
Since APIs can also allow developers to access users'' extremely sensitive data,
like that of Gmail account, Google has limited access to Gmail API only for apps
that directly enhance email functionality—such as email clients, email backup
services and productivity services.
Google shares fell over 2 percent to $1134.23 after the data breach reports.
'
- 'Security Event : Hack In Paris (16-17 June, 2011)
Hack In Paris is an international and corporate security event that will take
place in Disneyland Paris® fromJune 16th to 17th of 2011. Please refer to the
homepage to get up-to-date information about the event.
Topics
The following list contains major topics the conference will cover. Please consider
submitting even if the subject of your research is not listed here.
Advances in reverse engineering
Vulnerability research and exploitation
Penetration testing and security assessment
Malware analysis and new trends in malicous codes
Forensics, IT crime & law enforcement
Privacy issues: LOPPSI, HADOPI, …
Low-level hacking (console security & mobile devices)
Risk management and ISO 27001
Dates
January 20: CFP announced
March 30: Submission deadline
April 15: Notification sent to authors
April 17: Program announcement
June 16-17: Hack In Paris
June 18: Nuit du Hack
More Information: https://hackinparis.com
'
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
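Both the sequence limit and the embedding size can be read off the loaded model directly; a quick sanity check (the repository id matches the usage example further down):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sgadagin/fine_tuned_sbert")
print(model.max_seq_length)                      # 256 (longer inputs are truncated)
print(model.get_sentence_embedding_dimension())  # 384
```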
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
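The three modules amount to: encode with the MiniLM transformer, mean-pool the token embeddings while ignoring padding, and L2-normalize the result, so cosine similarity reduces to a dot product. The sketch below reproduces that pipeline with the plain `transformers` API; it loads the base checkpoint purely for illustration and is not a replacement for the loading code shown in the Usage section.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
encoder = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

texts = ["A short example sentence.", "Another example sentence."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state   # (batch, tokens, 384)

# (1) Mean pooling over real tokens only
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize, so cosine similarity becomes a plain dot product
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings @ embeddings.T)   # pairwise cosine similarities
```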
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sgadagin/fine_tuned_sbert")
# Run inference
sentences = [
'Security Event : Hack In Paris (16-17 June, 2011)\n\n\nHack In Paris is an international and corporate security event that will take place in Disneyland Paris® fromJune 16th to 17th of 2011. Please refer to the homepage to get up-to-date information about the event.\n\nTopics\nThe following list contains major topics the conference will cover. Please consider submitting even if the subject of your research is not listed here.\nAdvances in reverse engineering\nVulnerability research and exploitation\nPenetration testing and security assessment\nMalware analysis and new trends in malicous codes\nForensics, IT crime & law enforcement\nPrivacy issues: LOPPSI, HADOPI, …\nLow-level hacking (console security & mobile devices)\nRisk management and ISO 27001\nDates\nJanuary 20: CFP announced\nMarch 30: Submission deadline\nApril 15: Notification sent to authors\nApril 17: Program announcement\nJune 16-17: Hack In Paris\nJune 18: Nuit du Hack\nMore Information: https://hackinparis.com\n\n',
'Security Event : Hack In Paris (16-17 June, 2011)\n\n\nHack In Paris is an international and corporate security event that will take place in Disneyland Paris® fromJune 16th to 17th of 2011. Please refer to the homepage to get up-to-date information about the event.\n\nTopics\nThe following list contains major topics the conference will cover. Please consider submitting even if the subject of your research is not listed here.\nAdvances in reverse engineering\nVulnerability research and exploitation\nPenetration testing and security assessment\nMalware analysis and new trends in malicous codes\nForensics, IT crime & law enforcement\nPrivacy issues: LOPPSI, HADOPI, …\nLow-level hacking (console security & mobile devices)\nRisk management and ISO 27001\nDates\nJanuary 20: CFP announced\nMarch 30: Submission deadline\nApril 15: Notification sent to authors\nApril 17: Program announcement\nJune 16-17: Hack In Paris\nJune 18: Nuit du Hack\nMore Information: https://hackinparis.com\n\n',
'Google is going to shut down its social media network Google+ after the company suffered a massive data breach that exposed the private data of hundreds of thousands of Google Plus users to third-party developers.\n\nAccording to the tech giant, a security vulnerability in one of Google+\'s People APIs allowed third-party developers to access data for more than 500,000 users, including their usernames, email addresses, occupation, date of birth, profile photos, and gender-related information.\n\nSince Google+ servers do not keep API logs for more than two weeks, the company cannot confirm the number of users impacted by the vulnerability.\n\nHowever, Google assured its users that the company found no evidence that any developer was aware of this bug, or that the profile data was misused by any of the 438 developers that could have had access.\n"However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API," Google said in blog post published today.\nThe vulnerability was open since 2015 and fixed after Google discovered it in March 2018, but the company chose not to disclose the breach to the public—at the time when Facebook was being roasted for Cambridge Analytica scandal.\n\nThough Google has not revealed the technical details of the security vulnerability, the nature of the flaw seems to be something very similar to Facebook API flaw that recently allowed unauthorized developers to access private data from Facebook users.\n\nBesides admitting the security breach, Google also announced that the company is shutting down its social media network, acknowledging that Google+ failed to gain broad adoption or significant traction with consumers.\n"The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds," Google said.\nIn response, the company has decided to shut down Google+ for consumers by the end of August 2019. However, Google+ will continue as a product for Enterprise users.\n\nGoogle Introduces New Privacy Controls Over Third-Party App Permissions\n\nAs part of its "Project Strobe," Google engineers also reviewed third-party developer access to Google account and Android device data; and has accordingly now introduced some new privacy controls.\n\nWhen a third-party app prompts users for access to their Google account data, clicking "Allow" button approves all requested permissions at once, leaving an opportunity for malicious apps to trick users into giving away powerful permissions.\nBut now Google has updated its Account Permissions system that asks for each requested permission individually rather than all at once, giving users more control over what type of account data they choose to share with each app.\n\nSince APIs can also allow developers to access users\' extremely sensitive data, like that of Gmail account, Google has limited access to Gmail API only for apps that directly enhance email functionality—such as email clients, email backup services and productivity services.\n\nGoogle shares fell over 2 percent to $1134.23 after the data breach reports.\n\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
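The same embeddings also support a small retrieval setup. The corpus and query below are made-up snippets, included only to show the shape of a semantic-search call:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sgadagin/fine_tuned_sbert")

# Hypothetical corpus of short snippets (illustrative only)
corpus = [
    "A new ransomware strain encrypts files and demands payment in Bitcoin.",
    "Researchers disclosed a critical vulnerability in a popular web browser.",
    "The conference call for papers closes at the end of March.",
]
query = "Which entry talks about file-encrypting malware?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Top-1 nearest corpus entry by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
best = hits[0][0]
print(corpus[best["corpus_id"]], best["score"])
```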
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 3,742 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 37 tokens</li><li>mean: 252.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 252.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>0: ~35.20%</li><li>1: ~10.30%</li><li>2: ~17.90%</li><li>3: ~36.60%</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>U.S. online fashion retailer SHEIN has admitted that the company has suffered a significant data breach after unknown hackers stole personally identifiable information (PII) of almost 6.5 million customers.<br><br>Based in North Brunswick and founded in 2008, SHEIN has become one of the largest online fashion retailers that ships to more than 80 countries worldwide. The site has been initially designed to produce "affordable" and trendy fashion clothing for women.<br><br>SHEIN revealed last weekend that its servers had been targeted by a "concerted criminal cyber-attack" that began in June this year and lasted until August 22, when the company was finally made aware of the potential theft.<br><br>Soon after that, the company scanned its servers to remove all possible backdoored entry points, leveraging which hackers could again infiltrate the servers. SHEIN assured its customers that the website is now safe to visit.<br><br>Hackers Stole Over 6.42 Million SHEIN Customers' Data<br><br>Although details about the inci...</code> | <code>U.S. online fashion retailer SHEIN has admitted that the company has suffered a significant data breach after unknown hackers stole personally identifiable information (PII) of almost 6.5 million customers.<br><br>Based in North Brunswick and founded in 2008, SHEIN has become one of the largest online fashion retailers that ships to more than 80 countries worldwide. The site has been initially designed to produce "affordable" and trendy fashion clothing for women.<br><br>SHEIN revealed last weekend that its servers had been targeted by a "concerted criminal cyber-attack" that began in June this year and lasted until August 22, when the company was finally made aware of the potential theft.<br><br>Soon after that, the company scanned its servers to remove all possible backdoored entry points, leveraging which hackers could again infiltrate the servers. SHEIN assured its customers that the website is now safe to visit.<br><br>Hackers Stole Over 6.42 Million SHEIN Customers' Data<br><br>Although details about the inci...</code> | <code>1</code> |
| <code>A location based Social Networking platform with 45 million users,'Foursquare' was vulnerable to the primary email address disclosed.<br><br>Foursquare is a Smartphone application that gives you details of nearby cafes, bars, shops, parks using GPS location and also tells about your friends nearby.<br><br>According to a Penetration tester and hacker 'Jamal Eddine', an attacker can extract email addresses of all 45 million users just by using a few lines of scripting tool.<br><br>Basically the flaw exists in the Invitation system of the Foursquare app. While testing the app, he found that invitation received on the recipient's end actually disclosing the sender's email address, as shown above.<br><br>Invitation URL:<br>https://foursquare.com/mehdi?action=acceptFriendship&expires=1378920415&src=wtbfe&uid=64761059&sig=mmlx96RwGrQ2fJAg4OWZhAWnDvc%3D<br>Where 'uid' parameter represents the sender's profile ID.<br><br>Hacker noticed that the parameter in the Invitation URL can be modified in order to spoof the sender profile i...</code> | <code>A location based Social Networking platform with 45 million users,'Foursquare' was vulnerable to the primary email address disclosed.<br><br>Foursquare is a Smartphone application that gives you details of nearby cafes, bars, shops, parks using GPS location and also tells about your friends nearby.<br><br>According to a Penetration tester and hacker 'Jamal Eddine', an attacker can extract email addresses of all 45 million users just by using a few lines of scripting tool.<br><br>Basically the flaw exists in the Invitation system of the Foursquare app. While testing the app, he found that invitation received on the recipient's end actually disclosing the sender's email address, as shown above.<br><br>Invitation URL:<br>https://foursquare.com/mehdi?action=acceptFriendship&expires=1378920415&src=wtbfe&uid=64761059&sig=mmlx96RwGrQ2fJAg4OWZhAWnDvc%3D<br>Where 'uid' parameter represents the sender's profile ID.<br><br>Hacker noticed that the parameter in the Invitation URL can be modified in order to spoof the sender profile i...</code> | <code>1</code> |
| <code>Earlier this week Dropbox team unveiled details of three critical vulnerabilities in Apple macOS operating system, which altogether could allow a remote attacker to execute malicious code on a targeted Mac computer just by convincing a victim into visiting a malicious web page.<br><br>The reported vulnerabilities were originally discovered by Syndis, a cybersecurity firm hired by Dropbox to conduct simulated penetration testing attacks as Red Team on the company's IT infrastructure, including Apple software used by Dropbox.<br><br>The vulnerabilities were discovered and disclosed to Apple security team in February this year, which were then patched by Apple just over one month later with the release of its March security updates. DropBox applauded Apple for its quick response to its bug report.<br><br>According to DropBox, the vulnerabilities discovered by Syndis didn't just affect its macOS fleet, but also affected all Safari users running the latest version of the web browser and operating system at t...</code> | <code>Earlier this week Dropbox team unveiled details of three critical vulnerabilities in Apple macOS operating system, which altogether could allow a remote attacker to execute malicious code on a targeted Mac computer just by convincing a victim into visiting a malicious web page.<br><br>The reported vulnerabilities were originally discovered by Syndis, a cybersecurity firm hired by Dropbox to conduct simulated penetration testing attacks as Red Team on the company's IT infrastructure, including Apple software used by Dropbox.<br><br>The vulnerabilities were discovered and disclosed to Apple security team in February this year, which were then patched by Apple just over one month later with the release of its March security updates. DropBox applauded Apple for its quick response to its bug report.<br><br>According to DropBox, the vulnerabilities discovered by Syndis didn't just affect its macOS fleet, but also affected all Safari users running the latest version of the web browser and operating system at t...</code> | <code>3</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
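The training script itself is not included in this card, but a dataset with this column layout (two text columns plus an integer label over four classes) can be combined with `SoftmaxLoss` through the v3 trainer API roughly as follows; the two placeholder rows are made up:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Tiny placeholder dataset with the same columns as above
train_dataset = Dataset.from_dict({
    "sentence_0": ["first article text ...", "second article text ..."],
    "sentence_1": ["first article text ...", "a different article ..."],
    "label": [1, 3],
})

# SoftmaxLoss puts a classification head on top of the paired sentence embeddings
loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=4,
)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```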
### Training Hyperparameters
#### Non-Default Hyperparameters
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
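Taken together with the full listing above (3 epochs, per-device batch size 8, learning rate 5e-05, round-robin multi-dataset sampling), the setup corresponds roughly to training arguments like the following; the output directory name is hypothetical:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="fine_tuned_sbert",               # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    multi_dataset_batch_sampler="round_robin",   # the only non-default value
)
```

These arguments would then be passed to the `SentenceTransformerTrainer` sketched in the Training Dataset section above.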
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 1.0684 | 500 | 1.2186 |
| 2.1368 | 1000 | 1.145 |
### Framework Versions
- Python: 3.12.9
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
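The run above used the standard `SentenceTransformerTrainer`. As a rough, hypothetical sketch (the dataset contents, label count, and output directory below are placeholders, not the original training script), an equivalent setup with `SoftmaxLoss` and the non-default hyperparameters listed above could look like this:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Base model from this card's metadata.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder dataset: two text columns plus an integer class label,
# mirroring the pair/label rows shown in the training-dataset table above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["first article text ...", "second article text ..."],
    "sentence_1": ["paired article text ...", "another paired article text ..."],
    "label": [1, 3],
})

# SoftmaxLoss trains a linear classifier on top of the pair embeddings.
# num_labels=4 is an assumed placeholder; the card does not state the label count.
loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=4,
)

# Non-default hyperparameters listed above: 3 epochs, batch size 8, lr 5e-5.
args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-5,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```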
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 1.0684 | 500 | 1.2186 |
| 2.1368 | 1000 | 1.145 |
### Framework Versions
- Python: 3.12.9
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/all-MiniLM-L6-v2", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:3742", "loss:SoftmaxLoss"], "widget": [{"source_sentence": "As the year draws to a close, we have seen the number of emerging threats like advance phishing attacks from the Syrian Electronic Army, financial malware and exploit kits, Cryptolocker ransomware infections, massive Bitcoin theft, extensive privacy breach from NSA and many more.\n\nThe financial malware's were the most popular threat this year. Money is always a perfect motivation for attackers and cyber criminals who are continually targeting financial institutions.\nOn Tuesday, Antivirus firm Symantec has released a Threat report, called \"The State of Financial Trojans: 2013\", which revealed that over 1,400 financial institutions have been targeted and compromised millions of computers around the globe and the most targeted banks are in the US with 71.5% of all analyzed Trojans.\n\nFinancial institutions have been fighting against malware for the last ten years to protect their customers and online transactions from threat. Over the time the attackers adapted to these countermeasures and sophisticated banking Trojans began to emerge.\n\nAccording to the report, the number of infections of the most common financial Trojans grew to 337 percent in the first nine months of 2013. Nearly 1,500 institutions in 88 countries were potential targets during 2013.\nThe financial fraud marketplace is also increasingly organized and Cyber criminals are using advanced Trojans to commit large scale attacks.\nAttackers of all skill levels can enter the arena of financial fraud, as the underground marketplace is a service industry that provides an abundance of resources. Those who lack expertise can simply purchase what they need. For as little as $100, an attacker can avail of a leaked Zeus or Spyeye equipped with Web-injects.\nThe modern financial Trojan is extremely flexible, supporting a range of functionality designed to facilitate fraudulent transactions across a variety of services.\n\nTwo dominant attack strategies are:\nFocused attack: This approach suits attackers with limited resources but also scales well to larger operations. If the distribution is accurate and the target institution has a sizeable client base, a focused attack can provide an adequate supply of targets. Shylock, Bebloh and Tilon all use this approach exclusively.\nBroad strokes: In this attack strategy, Trojans are set to target large numbers of institutions. Tilon, Cridex, and Gameover adopt these tactics and Zeus also uses this approach in its default configuration.\nAccording to Symantec, the main reason for the surge is weak authentication practices:\n\nUnfortunately, in many situations, security implementations adopted by financial institutions are inadequate to defend against the modern financial Trojan. Institutions are starting to adopt strong security measures like chipTAN, but the adoption rate is slow. 
Institutions that persist with weaker security measures will continue to be exploited by attackers.\nThey need to maintain constant vigilance, apply software updates, maintain an awareness of new threats and deploy complementary security solutions that can defend against evolving malware attacks.\n\n", "sentences": ["As the year draws to a close, we have seen the number of emerging threats like advance phishing attacks from the Syrian Electronic Army, financial malware and exploit kits, Cryptolocker ransomware infections, massive Bitcoin theft, extensive privacy breach from NSA and many more.\n\nThe financial malware's were the most popular threat this year. Money is always a perfect motivation for attackers and cyber criminals who are continually targeting financial institutions.\nOn Tuesday, Antivirus firm Symantec has released a Threat report, called \"The State of Financial Trojans: 2013\", which revealed that over 1,400 financial institutions have been targeted and compromised millions of computers around the globe and the most targeted banks are in the US with 71.5% of all analyzed Trojans.\n\nFinancial institutions have been fighting against malware for the last ten years to protect their customers and online transactions from threat. Over the time the attackers adapted to these countermeasures and sophisticated banking Trojans began to emerge.\n\nAccording to the report, the number of infections of the most common financial Trojans grew to 337 percent in the first nine months of 2013. Nearly 1,500 institutions in 88 countries were potential targets during 2013.\nThe financial fraud marketplace is also increasingly organized and Cyber criminals are using advanced Trojans to commit large scale attacks.\nAttackers of all skill levels can enter the arena of financial fraud, as the underground marketplace is a service industry that provides an abundance of resources. Those who lack expertise can simply purchase what they need. For as little as $100, an attacker can avail of a leaked Zeus or Spyeye equipped with Web-injects.\nThe modern financial Trojan is extremely flexible, supporting a range of functionality designed to facilitate fraudulent transactions across a variety of services.\n\nTwo dominant attack strategies are:\nFocused attack: This approach suits attackers with limited resources but also scales well to larger operations. If the distribution is accurate and the target institution has a sizeable client base, a focused attack can provide an adequate supply of targets. Shylock, Bebloh and Tilon all use this approach exclusively.\nBroad strokes: In this attack strategy, Trojans are set to target large numbers of institutions. Tilon, Cridex, and Gameover adopt these tactics and Zeus also uses this approach in its default configuration.\nAccording to Symantec, the main reason for the surge is weak authentication practices:\n\nUnfortunately, in many situations, security implementations adopted by financial institutions are inadequate to defend against the modern financial Trojan. Institutions are starting to adopt strong security measures like chipTAN, but the adoption rate is slow. 
Institutions that persist with weaker security measures will continue to be exploited by attackers.\nThey need to maintain constant vigilance, apply software updates, maintain an awareness of new threats and deploy complementary security solutions that can defend against evolving malware attacks.\n\n", "While Windows users are currently in fear of getting their systems hijacked by the WannaCry ransomware outbreak, Apple users are sitting relaxed, thinking that malware attacks are something that happens to Windows users, and not Apple.\n\nBut you are mistaken – Apple products are also not immune to the hack attacks and malware infections, as an ebook can hack your Mac, iPhone, and iPad.\n\nApple on Monday pushed out software updates for iOS, macOS, Safari, tvOS, iCloud, iTunes, and watchOS to fix a total of 67 unique security vulnerabilities, many of which allows attackers to perform remote code execution on an affected system.\n\niOS is 10.3.2 for iPhone, iPad, and iPod\n\nApple's mobile operating system iOS 10.3.2 for the iPhone, iPad and iPod touch addresses 41 security flaws, 23 of which resides in WebKit, including 17 remote code execution and 5 cross-site scripting (XSS) vulnerabilities.\n\nBesides this, iOS 10.3.2 also addresses a pair of flaws in iBooks for iOS (CVE-2017-2497, CVE-2017-6981) that could allow e-books to open arbitrary websites and execute malicious code with root privileges.\n\nOther flaws addressed in iOS 10.3.2 include a memory corruption issue in AVE Video Encoder that could allow a malicious application to gain kernel-level privileges, and a certificate validation issue in the certificate trust policy for handling of untrusted certificates.\n\nApple users can install iOS 10.3.2 by connecting their iOS devices to iTunes or downloading it directly by going to the Settings → General → Software Update.\n\nmacOS Sierra 10.12.5 for El Capitan and Yosemite\n\nApple's Mac operating system macOS Sierra 10.12.5 addresses a total of 37 vulnerabilities, including a pair of bugs in iBook that allow the execution of arbitrary code with root privileges, and a separate bug in iBook that allows an application to escape its secure sandbox.\n\nOther flaws addressed in macOS Sierra 10.12.5 include a Wi-Fi networking issue that allows the theft of network credentials, elevation of privilege bugs in both the Intel and Nvidia graphics drivers, and four different arbitrary code execution flaws in SQLite.\n\nMac users can download the update through the App Store → Updates. 
Alternatively, macOS Sierra users can be download Sierra 10.12.5 as a stand-alone update, OS X El Capitan users can download the update here, and OS X Yosemite users can get the security update here.\n\nSafari 10.1.1 for Apple Browser\n\nSafari 10.1.1 addresses a total of 26 security issues, 23 of which resides in WebKit, many of which are also patched in the iOS 10.3.2.\n\nRest three vulnerabilities are patched in the Safari browser itself.\n\nThe Safari 10.1.1 update can be downloaded by going to the App Store → Updates on El Capitan and Yosemite systems.\n\nwatchOS 3.2.2 for Apple Watch\n\nApple Watch users should install watchOS 3.2.2 that patches a total of 12 security vulnerabilities, four of which could be used by attackers to execute remote code execution on the affected device.\n\nUsers of Apple Watch can download watchOS 3.2.2 by connecting their watch to its charger, and opening the Apple Watch app → My Watch tab → General → Software Update on their iPhone.\n\ntvOS 10.2.1 for Apple TV\n\nApple has also released tvOS 10.2.1 to patch a total of 23 vulnerabilities, 12 of which resides in WebKit engine that could allow an attacker to perform cross-site scripting and remote code execution attacks on a target device.\n\nThe tvOS 10.2.1 update can be downloaded directly from the Apple TV by going to Settings → System → Update Software.\n\niTunes 12.6.1 for Windows and iCloud for Windows 6.2.1\n\nMeanwhile, Apple also released patches for Windows users using iTunes and iCloud. Both iTunes 12.6.1 and iCloud 6.2.1 patches a single remote code execution bug in WebKit for Windows 7 and later.\n\nApple users are recommended to update all their operating systems for Apple products and Safari as soon as possible before cyber criminals exploited them. Patches are available through automatic updates.\n\n", "A really bad year for the world's second-largest email service provider, Yahoo Mail! The company announced today, 'we identified a coordinated effort to gain unauthorized access to Yahoo Mail accounts', user names and passwords of its email customers have been stolen and are used to access multiple accounts.\n\nYahoo did not say how many accounts have been affected, and neither they are sure about the source of the leaked users' credentials. It appears to have come from a third party database being compromised, and not an infiltration of Yahoo's own servers.\n\n\"We have no evidence that they were obtained directly from Yahoo's systems. Our ongoing investigation shows that malicious computer software used the list of usernames and passwords to access Yahoo Mail accounts. The information sought in the attack seems to be names and email addresses from the affected accounts' most recent sent emails.\"\n\nFor now, Yahoo is taking proactive actions to protect their affected users, \"We are resetting passwords on impacted accounts and we are using second sign-in verification to allow users to re-secure their accounts. Impacted users will be prompted (if not, already) to change their password and may receive an email notification or an SMS text if they have added a mobile number to their account.\"\n\nPeople frequently use the same passwords on multiple accounts, so possibly hackers are brute-forcing Yahoo accounts with the user credentials stolen from other data breaches.\n\nYahoo users can prevent account hijacks by using a strong and unique password. 
You can use 'Random strong password generator' feature of DuckDuckGo search engine to get a unique & strong password.\n\nUsers are also recommended to enable two-factor authentication, which requires a code texted to the legitimate user's mobile phone whenever a login attempt is made from a new computer.\n\nYahoo! was hacked in July 2012, with attackers stealing 450,000 email addresses and passwords from a Yahoo! contributor network.\n\nReaders can also download two free Whitepaper related to the Email and account security:\nCloud-Based Email Archiving\nEmail Data Loss Prevention\nWell, Yahoo is now working with federal law enforcement as a part of its investigation.\n\n"]}, {"source_sentence": "Security researchers have spotted a new malware campaign in the wild that spreads an advanced botnet malware by leveraging at least three recently disclosed vulnerabilities in Microsoft Office.\n\nDubbed Zyklon, the fully-featured malware has resurfaced after almost two years and primarily found targeting telecommunications, insurance and financial services.\n\nActive since early 2016, Zyklon is an HTTP botnet malware that communicates with its command-and-control servers over Tor anonymising network and allows attackers to remotely steal keylogs, sensitive data, like passwords stored in web browsers and email clients.\n\nZyklon malware is also capable of executing additional plugins, including secretly using infected systems for DDoS attacks and cryptocurrency mining.\n\nDifferent versions of the Zyklon malware has previously been found being advertised on a popular underground marketplace for $75 (normal build) and $125 ( Tor-enabled build).\n\nAccording to a recently published report by FireEye, the attackers behind the campaign are leveraging three following vulnerabilities in Microsoft Office that execute a PowerShell script on the targeted computers to download the final payload from its C&C server.\n\n1) .NET Framework RCE Vulnerability (CVE-2017-8759)—this remote code execution vulnerability exists when Microsoft .NET Framework processes untrusted input, allowing an attacker to take control of an affected system by tricking victims into opening a specially crafted malicious document file sent over an email. 
Microsoft already released a security patch for this flaw in September updates.\n\n2) Microsoft Office RCE Vulnerability (CVE-2017-11882)—it's a 17-year-old memory corruption flaw that Microsoft patched in November patch update allows a remote attacker to execute malicious code on the targeted systems without requiring any user interaction after opening a malicious document.\n\n3) Dynamic Data Exchange Protocol (DDE Exploit)—this technique allows attackers to leverage a built-in feature of Microsoft Office, called DDE, to perform code execution on the targeted device without requiring Macros to be enabled or memory corruption.\n\nAs explained by the researchers, attackers are actively exploiting these three vulnerabilities to deliver Zyklon malware using spear phishing emails, which typically arrives with an attached ZIP file containing a malicious Office doc file.\n\nOnce opened, the malicious doc file equipped with one of these vulnerabilities immediately runs a PowerShell script, which eventually downloads the final payload, i.e., Zyklon HTTP malware, onto the infected computer.\n\"In all these techniques, the same domain is used to download the next level payload (Pause.ps1), which is another PowerShell script that is Base64 encoded,\" the FireEye researchers said.\n\"The Pause.ps1 script is responsible for resolving the APIs required for code injection. It also contains the injectable shellcode.\"\n\"The injected code is responsible for downloading the final payload from the server. The final stage payload is a PE executable compiled with .Net framework.\"\nInterestingly, the PowerShell script connects to a dotless IP address (example: https://3627732942) to download the final payload.\n\nWhat is Dotless IP Address? If you are unaware, dotless IP addresses, sometimes referred as 'Decimal Address,' are decimal values of IPv4 addresses (represented as dotted-quad notation). 
Almost all modern web browsers resolve decimal IP address to its equivalent IPV4 address when opened with \"https://\" following the decimal value.\n\nFor example, Google's IP address 216.58.207.206 can also be represented as https://3627732942 in decimal values (Try this online converter).\n\nThe best way to protect yourself and your organisation from such malware attacks are always to be suspicious of any uninvited document sent via an email and never click on links inside those documents unless adequately verifying the source.\n\nMost importantly, always keep your software and systems up-to-date, as threat actors incorporate recently discovered, but patched, vulnerabilities in popular software—Microsoft Office, in this case—to increase the potential for successful infections.\n\n", "sentences": ["India-linked highly targeted mobile malware campaign, first unveiled two weeks ago, has been found to be part of a broader campaign targeting multiple platforms, including windows devices and possibly Android as well.\n\nAs reported in our previous article, earlier this month researchers at Talos threat intelligence unit discovered a group of Indian hackers abusing mobile device management (MDM) service to hijack and spy on a few targeted iPhone users in India.\n\nOperating since August 2015, the attackers have been found abusing MDM service to remotely install malicious versions of legitimate apps, including Telegram, WhatsApp, and PrayTime, onto targeted iPhones.\n\nThese modified apps have been designed to secretly spy on iOS users, and steal their real-time location, SMS, contacts, photos and private messages from third-party chatting applications.\n\nDuring their ongoing investigation, Talos researchers identified a new MDM infrastructure and several malicious binaries – designed to target victims running Microsoft Windows operating systems – hosted on the same infrastructure used in previous campaigns.\nIos-update-whatsapp[.]com (new)\nWpitcher[.]com\nIos-certificate-update.com\n\"We know that the MDM and the Windows services were up and running on the same C2 server in May 2018,\" researchers said in a blog post published today.\n\n\"Some of the C2 servers are still up and running at this time. The Apache setup is very specific, and perfectly matched the Apache setup of the malicious IPA apps.\"\n\nPossible Connections with \"Bahamut Hacking Group\"\nBesides this, researchers also found some potential similarities that link this campaign with an old hacking group, dubbed \"Bahamut,\" an advanced threat actor who was previously targeting Android devices using similar MDM technique as used in the latest iOS malware campaign.\n\nThe newly identified MDM infrastructure, which was created in January 2018, and used from January to March of this year, targeted two Indian devices and one located in Qatar with a British phone number.\n\nAccording to the researchers, Bahamut also targeted similar Qatar-based individuals during their Android malware campaign, as detailed by Bellingcat in a blog post.\n\"Bahamut shared a domain name with one of the malicious iOS applications mentioned in our previous post,\" researchers said.\n\n\"The new MDM platform we identified has similar victimology with Middle Eastern targets, namely Qatar, using a U.K. mobile number issued from LycaMobile. 
Bahamut targeted similar Qatar-based individuals during their campaign.\"\nApart from distributing modified Telegram and WhatsApp apps with malicious functionalities, the newly-identified server also distributes modified versions of Safari browser and IMO video chatting app to steal more personal information on victims.\n\nAttackers Using Malicious Safari Browser to Steal Login Credentials\nAccording to the researchers, the malicious Safari browser has been pre-configured to automatically exfiltrate the username and the password of the users for a variety of other web services, Yahoo, Rediff, Amazon, Google, Reddit, Baidu, ProtonMail, Zoho, Tutanota and more.\n\"The malware continuously monitors a web page, seeking out the HTML form fields that hold the username and password as the user types them in to steal credentials. The names of the inspected HTML fields are embedded into the app alongside the domain names,\" the researchers said.\nThe malicious browser contains three malicious plugins—Add Bookmark, Add To Favourites, and Add to Reading List—that just like the other apps, send stolen data to a remote attacker-controlled server.\n\nAt this time, it's unclear who is behind the campaign, who was targeted in the campaign, and what were the motives behind the attack, but the technical elements suggest the attackers are operating from India, and are well-funded.\n\nResearchers said that those infected with this kind of malware need to enroll their devices, which means \"they should be on the lookout at all times to avoid accidental enrollment.\"\n\nThe best way to avoid being a victim to such attacks is to always download apps from official app store.\n\n", "Security researchers have spotted a new malware campaign in the wild that spreads an advanced botnet malware by leveraging at least three recently disclosed vulnerabilities in Microsoft Office.\n\nDubbed Zyklon, the fully-featured malware has resurfaced after almost two years and primarily found targeting telecommunications, insurance and financial services.\n\nActive since early 2016, Zyklon is an HTTP botnet malware that communicates with its command-and-control servers over Tor anonymising network and allows attackers to remotely steal keylogs, sensitive data, like passwords stored in web browsers and email clients.\n\nZyklon malware is also capable of executing additional plugins, including secretly using infected systems for DDoS attacks and cryptocurrency mining.\n\nDifferent versions of the Zyklon malware has previously been found being advertised on a popular underground marketplace for $75 (normal build) and $125 ( Tor-enabled build).\n\nAccording to a recently published report by FireEye, the attackers behind the campaign are leveraging three following vulnerabilities in Microsoft Office that execute a PowerShell script on the targeted computers to download the final payload from its C&C server.\n\n1) .NET Framework RCE Vulnerability (CVE-2017-8759)—this remote code execution vulnerability exists when Microsoft .NET Framework processes untrusted input, allowing an attacker to take control of an affected system by tricking victims into opening a specially crafted malicious document file sent over an email. 
Microsoft already released a security patch for this flaw in September updates.\n\n2) Microsoft Office RCE Vulnerability (CVE-2017-11882)—it's a 17-year-old memory corruption flaw that Microsoft patched in November patch update allows a remote attacker to execute malicious code on the targeted systems without requiring any user interaction after opening a malicious document.\n\n3) Dynamic Data Exchange Protocol (DDE Exploit)—this technique allows attackers to leverage a built-in feature of Microsoft Office, called DDE, to perform code execution on the targeted device without requiring Macros to be enabled or memory corruption.\n\nAs explained by the researchers, attackers are actively exploiting these three vulnerabilities to deliver Zyklon malware using spear phishing emails, which typically arrives with an attached ZIP file containing a malicious Office doc file.\n\nOnce opened, the malicious doc file equipped with one of these vulnerabilities immediately runs a PowerShell script, which eventually downloads the final payload, i.e., Zyklon HTTP malware, onto the infected computer.\n\"In all these techniques, the same domain is used to download the next level payload (Pause.ps1), which is another PowerShell script that is Base64 encoded,\" the FireEye researchers said.\n\"The Pause.ps1 script is responsible for resolving the APIs required for code injection. It also contains the injectable shellcode.\"\n\"The injected code is responsible for downloading the final payload from the server. The final stage payload is a PE executable compiled with .Net framework.\"\nInterestingly, the PowerShell script connects to a dotless IP address (example: https://3627732942) to download the final payload.\n\nWhat is Dotless IP Address? If you are unaware, dotless IP addresses, sometimes referred as 'Decimal Address,' are decimal values of IPv4 addresses (represented as dotted-quad notation). 
Almost all modern web browsers resolve decimal IP address to its equivalent IPV4 address when opened with \"https://\" following the decimal value.\n\nFor example, Google's IP address 216.58.207.206 can also be represented as https://3627732942 in decimal values (Try this online converter).\n\nThe best way to protect yourself and your organisation from such malware attacks are always to be suspicious of any uninvited document sent via an email and never click on links inside those documents unless adequately verifying the source.\n\nMost importantly, always keep your software and systems up-to-date, as threat actors incorporate recently discovered, but patched, vulnerabilities in popular software—Microsoft Office, in this case—to increase the potential for successful infections.\n\n", "Attention WordPress users!\n\nYour website could easily get hacked if you are using \"Ultimate Addons for Beaver Builder,\" or \"Ultimate Addons for Elementor\" and haven't recently updated them to the latest available versions.\n\nSecurity researchers have discovered a critical yet easy-to-exploit authentication bypass vulnerability in both widely-used premium WordPress plugins that could allow remote attackers to gain administrative access to sites without requiring any password.\n\nWhat's more worrisome is that opportunistic attackers have already started exploiting this vulnerability in the wild within 2 days of its discovery in order to compromise vulnerable WordPress websites and install a malicious backdoor for later access.\n\nBoth vulnerable plugins, made by software development company Brainstorm Force, are currently powering over hundreds of thousands of WordPress websites using Elementor and Beaver Builder frameworks, helping website admins and designers extend the functionality of their websites with more widgets, modules, page templates.\n\nDiscovered by researchers at web security service MalCare, the vulnerability resides in the way both plugins let WordPress account holders, including administrators, authenticate via Facebook and Google login mechanisms.\n\nImage credit: WebARX\n\nAccording to the vulnerability's advisory, due to lack of checks in the authentication method when a user login via Facebook or Google, vulnerable plugins can be tricked into allowing malicious users to login as any other targeted user without requiring any password.\n\n\"However, the Facebook and Google authentication methods did not verify the token returned by Facebook and Google, and since they don't require a password, there was no password check,\" explained WebARX researchers, who also analysed the flaw and confirmed its active exploitation.\n\n\"To exploit the vulnerability, the hacker needs to use the email ID of an admin user of the site. 
In most cases, this information can be retrieved fairly easily,\" MalCare said.\n\nIn an email to The Hacker News, WebARX confirmed that attackers are abusing this flaw to install a fake SEO stats plugin after uploading a tmp.zip file on the targeted WordPress server, which eventually drops a wp-xmlrpc.php backdoor file to the root directory of the vulnerable site.\n\nMalCare discovered this vulnerability on Wednesday that affects below-listed versions of the plugins and reported it to the developers on the same day, who then quickly addressed the issue and released patched versions of both within just 7 hours.\n\n\nUltimate Addons for Elementor <= 1.20.0\nUltimate Addons for Beaver Builder <= 1.24.0\n\n\nThe authentication bypass vulnerability has been patched with the release of \"Ultimate Addons for Elementor version 1.20.1\" and \"Ultimate Addons for Beaver Builder version 1.24.1,\" which affected websites are highly recommended to install as soon as possible.\n\n"]}, {"source_sentence": "Exclusive — If you have not updated your website to the latest WordPress version 5.0.3, it's a brilliant idea to upgrade the content management software of your site now. From now, I mean immediately.\n\nCybersecurity researchers at RIPS Technologies GmbH today shared their latest research with The Hacker News, revealing the existence of a critical remote code execution vulnerability that affects all previous versions of WordPress content management software released in the past 6 years.\n\nThe remote code execution attack, discovered and reported to the WordPress security team late last year, can be exploited by a low privileged attacker with at least an \"author\" account using a combination of two separate vulnerabilities—Path Traversal and Local File Inclusion—that reside in the WordPress core.\n\nThe requirement of at least an author account reduces the severity of this vulnerability to some extent, which could be exploited by a rogue content contributor or an attacker who somehow manages to gain author's credential using phishing, password reuse or other attacks.\n\"An attacker who gains access to an account with at least author privileges on a target WordPress site can execute arbitrary PHP code on the underlying server, leading to a full remote takeover,\" Scannell says.\n\nVideo Demonstration — Here's How the Attack Works\nAccording to Simon Scannell, a researcher at RIPS Technologies GmbH, the attack takes advantage of the way WordPress image management system handles Post Meta entries used to store description, size, creator, and other meta information of uploaded images.\n\nScannell found that a rogue or compromised author account can modify any entries associated with an image and set them to arbitrary values, leading to the Path Traversal vulnerability.\n\"The idea is to set _wp_attached_file to evil.jpg?shell.php, which would lead to an HTTP request being made to the following URL: https://targetserver.com/wp-content/uploads/evil.jpg?shell.php,\" Scannell explains.\nAnd, \"it is still possible to plant the resulting image into any directory by using a payload such as evil.jpg?/../../evil.jpg.\"\nThe Path Traversal flaw in combination with a local file inclusion flaw in theme directory could then allow the attacker to execute arbitrary code on the targeted server.\n\nThe attack, as shown in the proof-of-concept video shared by the researcher, can be executed within seconds to gain complete control over a vulnerable WordPress blog.\n\nAccording to Scannell, the code execution attack 
became non-exploitable in WordPress versions 5.0.1 and 4.9.9 after patch for another vulnerability was introduced which prevented unauthorized users from setting arbitrary Post Meta entries.\n\nHowever, the Path Traversal flaw is still unpatched even in the latest WordPress version and can be exploited if any installed 3rd-party plugin incorrectly handles Post Meta entries.\n\nScannell confirmed that the next release of WordPress would include a fix to completely address the issue demonstrated by the researcher.\n\n", "sentences": ["Exclusive — If you have not updated your website to the latest WordPress version 5.0.3, it's a brilliant idea to upgrade the content management software of your site now. From now, I mean immediately.\n\nCybersecurity researchers at RIPS Technologies GmbH today shared their latest research with The Hacker News, revealing the existence of a critical remote code execution vulnerability that affects all previous versions of WordPress content management software released in the past 6 years.\n\nThe remote code execution attack, discovered and reported to the WordPress security team late last year, can be exploited by a low privileged attacker with at least an \"author\" account using a combination of two separate vulnerabilities—Path Traversal and Local File Inclusion—that reside in the WordPress core.\n\nThe requirement of at least an author account reduces the severity of this vulnerability to some extent, which could be exploited by a rogue content contributor or an attacker who somehow manages to gain author's credential using phishing, password reuse or other attacks.\n\"An attacker who gains access to an account with at least author privileges on a target WordPress site can execute arbitrary PHP code on the underlying server, leading to a full remote takeover,\" Scannell says.\n\nVideo Demonstration — Here's How the Attack Works\nAccording to Simon Scannell, a researcher at RIPS Technologies GmbH, the attack takes advantage of the way WordPress image management system handles Post Meta entries used to store description, size, creator, and other meta information of uploaded images.\n\nScannell found that a rogue or compromised author account can modify any entries associated with an image and set them to arbitrary values, leading to the Path Traversal vulnerability.\n\"The idea is to set _wp_attached_file to evil.jpg?shell.php, which would lead to an HTTP request being made to the following URL: https://targetserver.com/wp-content/uploads/evil.jpg?shell.php,\" Scannell explains.\nAnd, \"it is still possible to plant the resulting image into any directory by using a payload such as evil.jpg?/../../evil.jpg.\"\nThe Path Traversal flaw in combination with a local file inclusion flaw in theme directory could then allow the attacker to execute arbitrary code on the targeted server.\n\nThe attack, as shown in the proof-of-concept video shared by the researcher, can be executed within seconds to gain complete control over a vulnerable WordPress blog.\n\nAccording to Scannell, the code execution attack became non-exploitable in WordPress versions 5.0.1 and 4.9.9 after patch for another vulnerability was introduced which prevented unauthorized users from setting arbitrary Post Meta entries.\n\nHowever, the Path Traversal flaw is still unpatched even in the latest WordPress version and can be exploited if any installed 3rd-party plugin incorrectly handles Post Meta entries.\n\nScannell confirmed that the next release of WordPress would include a fix to completely 
address the issue demonstrated by the researcher.\n\n", "Android Security Squad, the China-based group that uncovered a second Android master key vulnerability that might be abused to modify smartphone apps without breaking their digital signatures.\n\nThe whole point of digitally signing a document or file is to prove the file hasn't been modified. The process uses a form of public-key cryptography. In Chinese version of hacking attack, malicious code can be added into the file headers, but the method is limited because targeted files need to be smaller than 64K in size.\n\n\nAPK files are packed using a version of the widespread ZIP archiving algorithm. Most ZIP implementations won't permit two same-named files in one archive, but the algorithm itself doesn't forbid that possibility. So basically, two versions of the classes.dex file are placed inside of the package, the original and a hacked alternative.\n\nWhen checking an app's digital signature, the Android OS looks at the first matching file, but when actually executing and launching the file, it grabs the last one. To Trojanize an app, then, all you need to do is shoehorn your malicious code into it using a name that already exists within the app.\n\nThe flaw is very similar to the first master key vulnerability recently announced by researchers from mobile security firm Bluebox Security. According to BlueBox, 99% of Android devices are vulnerable to this attack. Google has already patched the flaw and posted it to the Android Open Source Project (AOSP).\n\nYou can use ReKey, a free mobile app that's designed to patch the Android master key vulnerability that's present in an estimated 900 million devices that run Android and that could be exploited by attackers to take full control of a device.\n\nAlways get your apps from legitimate sources, always check to make sure the developer name is valid, and configure your phone so it doesn't permit installing apps from unknown sources.\n\n", "Cyber criminals are using popular note-taking app Evernote as Command-and-Control Server to give commands to the malware installed on infected PCs using botnets.\n\nTrendMicro uncovered a malware detected as \"BKDR_VERNOT.A\" tried to communicate with Command-and-Control Server using Evernote.\n\n\nMalware delivered via an executable file that installs the malware as a dynamic-link library. The installer then ties the DLL into a legitimate running process, hiding it from casual detection. Once installed, BKDR_VERNOT.A can perform several backdoor commands such as downloading, executing, and renaming files. It then gathers information from the infected system, including details about its OS, timezone, user name, computer name, registered owner and organization.\nResearchers also pointed out that the backdoor may have also used Evernote as a location to upload stolen data. \"Unfortunately, during our testing, it was not able to login using the credentials embedded in the malware. This is possibly a security measure imposed by Evernote following its recent hacking issue.\"\n\n\"Though this is a clever maneuver to avoid detection, this is not the first time that a legitimate service like Evernote was used as a method of evasion.\"\n\nLike Evernote, Google Docs, Twitter and others have been misused in the past.\n\n"]}, {"source_sentence": "U.S. 
has the top Security Agencies like NSA, FBI to tackle cyber crime and terrorism with their high profile surveillance technologies, but even after that U.S is proudly hosting 44% of the entire cloud based malware distribution.\n\nWith the enhancement in Internet technology, Cloud computing has shown the possibility of existence and now has become an essential gradient for any Internet Identity.\n\nCloud services are designed in such a way that it is easy to maintain, use, configure and can be scaled depending upon the requirement of the service being provided using the CLOUD technology with cost effective manner.\n\nDue to the Easy and Cost effective alternative of traditional computing, Malware writers are using the big cloud hosting platforms to quickly and effectively serve malware to Internet users, allowing them to bypass detection and geographic blacklisting by serving from a trusted provider.\n\nHiding behind trusted domains and names is not something new. According to recently published SERT Q4 2013 Threat Intelligence Report, the malware distributors are using Cloud Services from Amazon, GoDaddy and Google like a legitimate customer, allowing them to infect millions of computers and vast numbers of enterprise systems.\n\nThe Cloud-based hosting services let malware distributors to avoid the detection because repeatedly changes IP addresses and domain names to avoid detection. Amazon and GoDaddy were identified as the top malware-hosting providers, with a 16 percent and a 14 percent share, respectively.\nMajor Additional findings include:\nUnited States hosts 4.6 times more malware than the next leading country.\n58% of malicious files obtained were identified as HTML files, 26% were directly executable.\nMany malware developers and distributors are utilizing social engineering tactics, including the use of trusted keywords and services, to evade detection and increase potential infection counts.\nA single malicious domain was spread across 20 countries, 67 providers and 199 unique IPs evade detection.\nThe SERT Research team collected a large number of samples from more than 12,000 Registrars, 22,000 ISPs (Internet Service Providers) and tested all malicious packages with more than 40 antivirus engines, output of which is concluded below:\nThe majority of the top malware sites is domains commonly associated with the Potentially Unwanted Applications (PUA), more commonly known as adware, type of malware distributions.\n\"Researchers found that a significant portion of the malware sampled consisted of Microsoft Windows 32-bit Portable Executable (PE32) files being used to distribute pay-per-install applications known as potentially unwanted applications (PUAs).\"\nThe report claimed that these malware is undetectable from over 40 anti-virus engines, that can act as a gateway for exploits and more than half of malware found being distributed by HTML web pages.\n\n", "sentences": ["U.S. 
has the top Security Agencies like NSA, FBI to tackle cyber crime and terrorism with their high profile surveillance technologies, but even after that U.S is proudly hosting 44% of the entire cloud based malware distribution.\n\nWith the enhancement in Internet technology, Cloud computing has shown the possibility of existence and now has become an essential gradient for any Internet Identity.\n\nCloud services are designed in such a way that it is easy to maintain, use, configure and can be scaled depending upon the requirement of the service being provided using the CLOUD technology with cost effective manner.\n\nDue to the Easy and Cost effective alternative of traditional computing, Malware writers are using the big cloud hosting platforms to quickly and effectively serve malware to Internet users, allowing them to bypass detection and geographic blacklisting by serving from a trusted provider.\n\nHiding behind trusted domains and names is not something new. According to recently published SERT Q4 2013 Threat Intelligence Report, the malware distributors are using Cloud Services from Amazon, GoDaddy and Google like a legitimate customer, allowing them to infect millions of computers and vast numbers of enterprise systems.\n\nThe Cloud-based hosting services let malware distributors to avoid the detection because repeatedly changes IP addresses and domain names to avoid detection. Amazon and GoDaddy were identified as the top malware-hosting providers, with a 16 percent and a 14 percent share, respectively.\nMajor Additional findings include:\nUnited States hosts 4.6 times more malware than the next leading country.\n58% of malicious files obtained were identified as HTML files, 26% were directly executable.\nMany malware developers and distributors are utilizing social engineering tactics, including the use of trusted keywords and services, to evade detection and increase potential infection counts.\nA single malicious domain was spread across 20 countries, 67 providers and 199 unique IPs evade detection.\nThe SERT Research team collected a large number of samples from more than 12,000 Registrars, 22,000 ISPs (Internet Service Providers) and tested all malicious packages with more than 40 antivirus engines, output of which is concluded below:\nThe majority of the top malware sites is domains commonly associated with the Potentially Unwanted Applications (PUA), more commonly known as adware, type of malware distributions.\n\"Researchers found that a significant portion of the malware sampled consisted of Microsoft Windows 32-bit Portable Executable (PE32) files being used to distribute pay-per-install applications known as potentially unwanted applications (PUAs).\"\nThe report claimed that these malware is undetectable from over 40 anti-virus engines, that can act as a gateway for exploits and more than half of malware found being distributed by HTML web pages.\n\n", "Windows 8 will be challenge for Malware writers\nMicrosoft™s security researcher believe that upcoming operating system, Windows 8 is a step forward in security and Windows 8 will be far better at protecting against malware than it's predecessors.\n\nChris Valasek, a senior security research scientist at development testing firm Coverity, began examining the security features of Windows 8 last autumn, before the consumer previews of the upcoming revamp of the new Microsoft OS came out.\n\n\"There are always going to be vulnerabilities but you can make it difficult to leverage vulnerabilities to write exploits.\" One major 
change between Windows 7 and 8 is the addition of more exploit-mitigation technologies, however. Windows Memory Managers (specifically the Windows Heap Manager and Windows Kernel Pool Allocator) are designed to make it far harder for attackers to exploit buffer-overflow vulnerabilities and the like to push malware onto vulnerable systems.\n\nThe \"security sandbox\" for applications for Windows 8 will also be a great step forward. \"These new Windows 8 Apps will be contained by a much more restrictive security sandbox, which is a mechanism to prevent programs from performing certain actions,\" Valasek explains.\n\n\"This new App Container provides the operating system with a way to make more fine-grained decisions on what actions certain applications can perform, instead of relying on the more broad 'Integrity Levels' that debuted in Windows Vista/7.\n\nWindows 8 also comes with a new version of Internet Explorer, Microsoft's browser software. Internet Explorer 10 will come with a mode that disables support for third-party plug-ins such as Flash and Java.\n\n", "Ransomware, a threat to internet users that continues to grow in popularity with cyber criminals due to its success and monetary potential. This is nothing new and to be expected. I have noticed many discussions on underground hacking forums about \"How to create Ransomware like Cryptolocker malware\" or \"Malware - hacking tool-kit with ransomware features\".\nSecurity intelligence provider, IntelCrawler has discovered a new ransomware variant called Locker that demands $150 (£92) to restore files that it has encrypted.\n\nLike Cryptolocker, this new ransomware is also nasty because infected users are in danger of losing their personal files forever.\n\nLocker mainly spreads by drive-by downloads from compromised websites, disguised itself as MP3 files and use system software vulnerabilities to infect the end user.\n\nOnce it has infected a system, malware first checks the infected machine has an internet connection or not. Then it deletes any original files from the victim's computer after using AES-CTR for encrypting the files on infected devices and add \". perfect\" extension to them.\n\nLocker's encryption is based on an open source tool called 'TurboPower LockBox' library. After encrypting all files, the malware place a \"CONTACT.TXT\" file in each directory, which provides contact details of the author to buy the decryption key and once the ransom is paid, each victim gets a key to unscramble files.\n\nThe good news is that the researchers are working on the universal decryption software in order to help the victims. \"It appears that the hackers are simply comparing the list of infected IP addresses of users, along with their host names,\" according IntelCrawler.\n\nIntelCrawler had discovered 50 different builds of the malware, which are being sold in underground markets for pay-per install programs. One builds had just under 6,000 infected machines. ZdNet reported.\n\nMalware will encrypt all drives visible on an infected system, so you must be sure that your backups are stored remotely or in a location that is not simply another drive partition or mapping to another location.\n\nThe malware infects users from the United States, Turkey, Russia, Germany and the Netherlands. Users should remain vigilant about their security. 
Please double check the legitimacy of links received in emails and ensure you have your antivirus up to date to help protect against such threats.\n\n"]}, {"source_sentence": "Security Event : Hack In Paris (16-17 June, 2011)\n\n\nHack In Paris is an international and corporate security event that will take place in Disneyland Paris® fromJune 16th to 17th of 2011. Please refer to the homepage to get up-to-date information about the event.\n\nTopics\nThe following list contains major topics the conference will cover. Please consider submitting even if the subject of your research is not listed here.\nAdvances in reverse engineering\nVulnerability research and exploitation\nPenetration testing and security assessment\nMalware analysis and new trends in malicous codes\nForensics, IT crime & law enforcement\nPrivacy issues: LOPPSI, HADOPI, …\nLow-level hacking (console security & mobile devices)\nRisk management and ISO 27001\nDates\nJanuary 20: CFP announced\nMarch 30: Submission deadline\nApril 15: Notification sent to authors\nApril 17: Program announcement\nJune 16-17: Hack In Paris\nJune 18: Nuit du Hack\nMore Information: https://hackinparis.com\n\n", "sentences": ["It's just two weeks into the Trump presidency, but his decisions have caused utter chaos around the country.\n\nOne such order signed by the president was banning both refugees and visa holders from seven Muslim-majority countries (Iraq, Iran, Libya, Yemen, Somalia, Syria, and Sudan) from entering the United States, resulting in unexpectedly arrest of some travelers at airports.\n\nNow, it seems like some anti-Trump protesters have publically declared their fight against the president by exploiting a known flaw in low power FM (LPFM) radio transmitters to play a song the radio stations didn't intend to broadcast.\n\nRadio stations in South Carolina, Indiana, Texas, Tennessee and Kentucky, were hacked recently to broadcast the Bompton-based rapper YG and Nipsey Hussle's anti-Trump song \"Fuck Donald Trump,\" which was already a radio hit in some parts of the country last year, several sources report.\n\nThe song was repeatedly played on Monday night, according to the RadioInsight, and the news of the incident began emerging shortly after Trump's inauguration on January 20, eight days before hackers hacked 70 percent of the police CCTV cameras in Washington DC.\n\nHackers gained access to the radio stations by exploiting known vulnerabilities in Barix Exstreamer devices which can decode audio file formats and send them along for LPFM transmission.\n\nOver a dozen radio stations experienced the hack in recent weeks, though some of them shut down their airwaves as quickly as possible in an attempt to avoid playing the inflammatory \"FDT (Fuck Donald Trump)\" song on loop.\n\nThe hackers or group of hackers behind the cyber attack is still unknown. The affected stations so far include:\n105.9 WFBS-LP Salem, S.C.\nRadio 810 WMGC/96.7 W244CW Murfreesboro TN\n101.9 Pirate Seattle\n100.9 WCHQ-LP Louisville\n100.5 KCGF-LP San Angelo TX\nHowever, there are unconfirmed reports from radio stations in California, Indiana, and Washington State that are believed to be affected as well.\n\nHas any of the radio stations you listen to been hit by the hackers? 
Let us know in the comments!\n\n", "Google is going to shut down its social media network Google+ after the company suffered a massive data breach that exposed the private data of hundreds of thousands of Google Plus users to third-party developers.\n\nAccording to the tech giant, a security vulnerability in one of Google+'s People APIs allowed third-party developers to access data for more than 500,000 users, including their usernames, email addresses, occupation, date of birth, profile photos, and gender-related information.\n\nSince Google+ servers do not keep API logs for more than two weeks, the company cannot confirm the number of users impacted by the vulnerability.\n\nHowever, Google assured its users that the company found no evidence that any developer was aware of this bug, or that the profile data was misused by any of the 438 developers that could have had access.\n\"However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API,\" Google said in blog post published today.\nThe vulnerability was open since 2015 and fixed after Google discovered it in March 2018, but the company chose not to disclose the breach to the public—at the time when Facebook was being roasted for Cambridge Analytica scandal.\n\nThough Google has not revealed the technical details of the security vulnerability, the nature of the flaw seems to be something very similar to Facebook API flaw that recently allowed unauthorized developers to access private data from Facebook users.\n\nBesides admitting the security breach, Google also announced that the company is shutting down its social media network, acknowledging that Google+ failed to gain broad adoption or significant traction with consumers.\n\"The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds,\" Google said.\nIn response, the company has decided to shut down Google+ for consumers by the end of August 2019. However, Google+ will continue as a product for Enterprise users.\n\nGoogle Introduces New Privacy Controls Over Third-Party App Permissions\n\nAs part of its \"Project Strobe,\" Google engineers also reviewed third-party developer access to Google account and Android device data; and has accordingly now introduced some new privacy controls.\n\nWhen a third-party app prompts users for access to their Google account data, clicking \"Allow\" button approves all requested permissions at once, leaving an opportunity for malicious apps to trick users into giving away powerful permissions.\nBut now Google has updated its Account Permissions system that asks for each requested permission individually rather than all at once, giving users more control over what type of account data they choose to share with each app.\n\nSince APIs can also allow developers to access users' extremely sensitive data, like that of Gmail account, Google has limited access to Gmail API only for apps that directly enhance email functionality—such as email clients, email backup services and productivity services.\n\nGoogle shares fell over 2 percent to $1134.23 after the data breach reports.\n\n", "Security Event : Hack In Paris (16-17 June, 2011)\n\n\nHack In Paris is an international and corporate security event that will take place in Disneyland Paris® fromJune 16th to 17th of 2011. 
Please refer to the homepage to get up-to-date information about the event.\n\nTopics\nThe following list contains major topics the conference will cover. Please consider submitting even if the subject of your research is not listed here.\nAdvances in reverse engineering\nVulnerability research and exploitation\nPenetration testing and security assessment\nMalware analysis and new trends in malicous codes\nForensics, IT crime & law enforcement\nPrivacy issues: LOPPSI, HADOPI, …\nLow-level hacking (console security & mobile devices)\nRisk management and ISO 27001\nDates\nJanuary 20: CFP announced\nMarch 30: Submission deadline\nApril 15: Notification sent to authors\nApril 17: Program announcement\nJune 16-17: Hack In Paris\nJune 18: Nuit du Hack\nMore Information: https://hackinparis.com\n\n"]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,525 |
GPT4All-Community/gemma-2-9b-it-GGUF
|
GPT4All-Community
|
text-generation
|
[
"transformers",
"gguf",
"text-generation-inference",
"GGUF",
"GPT4All-community",
"GPT4All",
"conversational",
"text-generation",
"en",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"license:gemma",
"region:us"
] | 2024-08-04T15:25:23Z |
2024-08-20T14:49:53+00:00
| 1,027 | 0 |
---
base_model: google/gemma-2-9b-it
language:
- en
library_name: transformers
license: gemma
license_link: https://ai.google.dev/gemma/terms
model_name: Gemma-2-9b-it
pipeline_tag: text-generation
tags:
- text-generation-inference
- transformers
- GGUF
- GPT4All-community
- GPT4All
- conversational
inference: false
model_creator: Google
model_type: gemma2
quantized_by: ThiloteE
---
> [!NOTE]
> This is a model that is assumed to perform well, but may require more testing and user feedback. Be aware that only models featured within the GUI of GPT4All are curated and officially supported by Nomic. Use at your own risk.
# About
<!-- ### quantize_version: 3 -->
<!-- ### convert_type: hf -->
- Static quants of https://huggingface.co/google/gemma-2-9b-it at commit [1937c70](https://huggingface.co/google/gemma-2-9b-it/commit/1937c70277fcc5f7fb0fc772fc5bc69378996e71)
- Quantized by [ThiloteE](https://huggingface.co/ThiloteE) with llama.cpp commit [e09a800](https://github.com/ggerganov/llama.cpp/commit/e09a800f9a9b19c73aa78e03b4c4be8ed988f3e6)
These quants were created with a customized configuration that has been proven not to cause visible end-of-string (EOS) tokens during inference with [GPT4All](https://www.nomic.ai/gpt4all).
The config.json, generation_config.json and tokenizer_config.json differ from the original configuration found in the original model's repository at the time these quants were created.
# Prompt Template (for GPT4All)
This model does not have a system prompt by default.
Chat Template:
```
<start_of_turn>user
%1<end_of_turn>
<start_of_turn>model
%2<end_of_turn>
```
# Context Length
`8192`
Use a lower value during inference if you do not have enough RAM or VRAM.
# Provided Quants
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/GPT4All-Community/gemma-2-9b-it-GGUF/resolve/main/gemma-2-9b-it-Q4_0.gguf) | Q4_0 | 5.44 | fast, recommended |
| [GGUF](https://huggingface.co/GPT4All-Community/gemma-2-9b-it-GGUF/resolve/main/gemma-2-9b-it-f16.gguf) | f16 | 18.5 | 16 bpw, overkill |
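If you prefer to script the download instead of using the GPT4All GUI, the sketch below fetches the Q4_0 quant from this repository and loads it with a reduced context window, as suggested in the Context Length section. It assumes the `huggingface_hub` and `gpt4all` Python packages are installed and that your version of the bindings accepts the `n_ctx` keyword; the prompt and token limit are placeholders.
```python
# pip install huggingface_hub gpt4all  (package choice is an assumption, not a requirement of this repo)
from pathlib import Path

from gpt4all import GPT4All
from huggingface_hub import hf_hub_download

# Fetch the Q4_0 quant listed in the table above (~5.4 GB on disk).
gguf_file = Path(
    hf_hub_download(
        repo_id="GPT4All-Community/gemma-2-9b-it-GGUF",
        filename="gemma-2-9b-it-Q4_0.gguf",
    )
)

# Load with a context window smaller than the native 8192 to save RAM/VRAM.
model = GPT4All(
    model_name=gguf_file.name,
    model_path=str(gguf_file.parent),
    allow_download=False,
    n_ctx=4096,  # assumed to be supported by recent gpt4all bindings
)

with model.chat_session():
    print(model.generate("Write a short poem about quantization.", max_tokens=128))
```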
# About GGUF
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF) for
more details, including on how to concatenate multi-part files.
Here is a handy graph by ikawrakow comparing some quant types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
# Thanks
I thank Mradermacher and TheBloke for the inspiration for this model card and for their contributions to open source, and 3Simplex for lots of help along the way.
Shoutout to the GPT4All and llama.cpp communities :-)
<br>
<br>
<br>
<br>
------
<!-- footer end -->
<!-- original-model-card start -->
# Original Model card:
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment with `pip install flash-attn`.
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
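As an illustration, a small helper like the one below (purely a sketch, not part of the Transformers API) reproduces the same layout for an arbitrary list of turns:
```python
def build_gemma_prompt(turns):
    """Assemble a Gemma 2 chat prompt from (role, text) pairs.

    Roles are "user" or "model"; the trailing "<start_of_turn>model" line
    cues the model to produce the next reply.
    """
    prompt = "<bos>"
    for role, text in turns:
        prompt += f"<start_of_turn>{role}\n{text}<end_of_turn>\n"
    return prompt + "<start_of_turn>model\n"


prompt = build_gemma_prompt([("user", "Write a hello world program")])
```
The resulting string matches the `apply_chat_template` output shown above and can be tokenized with `add_special_tokens=False`, as in the next snippet.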
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[sustainability]: https://sustainability.google/operating-sustainably/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
<!-- original-model-card end -->
<!-- end -->
| null |
Non_BioNLP
|
> [!NOTE]
> This is a model that is assumed to perform well, but may require more testing and user feedback. Be aware that only models featured within the GUI of GPT4All are curated and officially supported by Nomic. Use at your own risk.
# About
<!-- ### quantize_version: 3 -->
<!-- ### convert_type: hf -->
- Static quants of https://huggingface.co/google/gemma-2-9b-it at commit [1937c70](https://huggingface.co/google/gemma-2-9b-it/commit/1937c70277fcc5f7fb0fc772fc5bc69378996e71)
- Quantized by [ThiloteE](https://huggingface.co/ThiloteE) with llama.cpp commit [e09a800](https://github.com/ggerganov/llama.cpp/commit/e09a800f9a9b19c73aa78e03b4c4be8ed988f3e6)
These quants were created with a customized configuration that has been proven not to cause visible end-of-string (EOS) tokens during inference with [GPT4All](https://www.nomic.ai/gpt4all).
The config.json, generation_config.json and tokenizer_config.json differ from the original configuration found in the original model's repository at the time these quants were created.
# Prompt Template (for GPT4All)
This model does not have a system prompt by default.
Chat Template:
```
<start_of_turn>user
%1<end_of_turn>
<start_of_turn>model
%2<end_of_turn>
```
# Context Length
`8192`
Use a lower value during inference if you do not have enough RAM or VRAM.
# Provided Quants
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/GPT4All-Community/gemma-2-9b-it-GGUF/resolve/main/gemma-2-9b-it-Q4_0.gguf) | Q4_0 | 5.44 | fast, recommended |
| [GGUF](https://huggingface.co/GPT4All-Community/gemma-2-9b-it-GGUF/resolve/main/gemma-2-9b-it-f16.gguf) | f16 | 18.5 | 16 bpw, overkill |
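If you prefer to script the download instead of using the GPT4All GUI, the sketch below fetches the Q4_0 quant from this repository and loads it with a reduced context window, as suggested in the Context Length section. It assumes the `huggingface_hub` and `gpt4all` Python packages are installed and that your version of the bindings accepts the `n_ctx` keyword; the prompt and token limit are placeholders.
```python
# pip install huggingface_hub gpt4all  (package choice is an assumption, not a requirement of this repo)
from pathlib import Path

from gpt4all import GPT4All
from huggingface_hub import hf_hub_download

# Fetch the Q4_0 quant listed in the table above (~5.4 GB on disk).
gguf_file = Path(
    hf_hub_download(
        repo_id="GPT4All-Community/gemma-2-9b-it-GGUF",
        filename="gemma-2-9b-it-Q4_0.gguf",
    )
)

# Load with a context window smaller than the native 8192 to save RAM/VRAM.
model = GPT4All(
    model_name=gguf_file.name,
    model_path=str(gguf_file.parent),
    allow_download=False,
    n_ctx=4096,  # assumed to be supported by recent gpt4all bindings
)

with model.chat_session():
    print(model.generate("Write a short poem about quantization.", max_tokens=128))
```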
# About GGUF
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF) for
more details, including on how to concatenate multi-part files.
Here is a handy graph by ikawrakow comparing some quant types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
# Thanks
I thank Mradermacher and TheBloke for the inspiration for this model card and for their contributions to open source, and 3Simplex for lots of help along the way.
Shoutout to the GPT4All and llama.cpp communities :-)
<br>
<br>
<br>
<br>
------
<!-- footer end -->
<!-- original-model-card start -->
# Original Model card:
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment with `pip install flash-attn`.
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
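As an illustration, a small helper like the one below (purely a sketch, not part of the Transformers API) reproduces the same layout for an arbitrary list of turns:
```python
def build_gemma_prompt(turns):
    """Assemble a Gemma 2 chat prompt from (role, text) pairs.

    Roles are "user" or "model"; the trailing "<start_of_turn>model" line
    cues the model to produce the next reply.
    """
    prompt = "<bos>"
    for role, text in turns:
        prompt += f"<start_of_turn>{role}\n{text}<end_of_turn>\n"
    return prompt + "<start_of_turn>model\n"


prompt = build_gemma_prompt([("user", "Write a hello world program")])
```
The resulting string matches the `apply_chat_template` output shown above and can be tokenized with `add_special_tokens=False`, as in the next snippet.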
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[sustainability]: https://sustainability.google/operating-sustainably/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
<!-- original-model-card end -->
<!-- end -->
|
{"base_model": "google/gemma-2-9b-it", "language": ["en"], "library_name": "transformers", "license": "gemma", "license_link": "https://ai.google.dev/gemma/terms", "model_name": "Gemma-2-9b-it", "pipeline_tag": "text-generation", "tags": ["text-generation-inference", "transformers", "GGUF", "GPT4All-community", "GPT4All", "conversational"], "inference": false, "model_creator": "Google", "model_type": "gemma2", "quantized_by": "ThiloteE"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 43,526 |
amorfati/custom-emb-model
|
amorfati
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:942",
"loss:CoSENTLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"base_model:microsoft/mpnet-base",
"base_model:finetune:microsoft/mpnet-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-15T12:54:28Z |
2024-07-15T12:54:47+00:00
| 57 | 0 |
---
base_model: microsoft/mpnet-base
datasets:
- sentence-transformers/all-nli
language:
- en
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:942
- loss:CoSENTLoss
widget:
- source_sentence: The entire city was surrounded by open countryside with a scattering
of small villages.
sentences:
- Let's leave it.
- It was proven that Mrs. Vandemeyer and the girl were hiding something.
- There is only one large village in the countryside.
- source_sentence: or just get out and walk uh or even jog a little although i don't
do that regularly but Washington's a great place to do that
sentences:
- '"Washington''s is a great place for a walk or a jog."'
- A man has some bananas.
- The sidewalk was deserted.
- source_sentence: A woman walks by a brick building that's covered with graffiti.
sentences:
- When I was in high school, my favorite author was Virginia Wolf.
- A woman is outside.
- A man in a photo booth at a carnival.
- source_sentence: A woman swinging a tennis racket on an outdoor court.
sentences:
- A woman walking on an old bridge near a mountain.
- A woman is playing basketball at the park.
- Yanomamo eats food.
- source_sentence: Several people with parachutes are overlooking a beautiful view
of fields and hills.
sentences:
- Your little girl wrote about how well your farewell activity went.
- The Crosethe Rue De Rivoli was built for Cardinal Richelieu to live in.
- Several people mow the grass.
---
# SentenceTransformer based on microsoft/mpnet-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
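For reference, a roughly equivalent way to assemble this module stack by hand with the sentence-transformers `models` API is sketched below; the settings simply mirror the configuration printed above and are not taken from this checkpoint's training code.
```python
from sentence_transformers import SentenceTransformer, models

# Transformer backbone with the max_seq_length shown above.
word_embedding = models.Transformer("microsoft/mpnet-base", max_seq_length=512)

# Mean pooling over token embeddings -> 768-dimensional sentence vectors.
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),
    pooling_mode="mean",
)

model = SentenceTransformer(modules=[word_embedding, pooling])
```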
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("amorfati/custom-emb-model")
# Run inference
sentences = [
'Several people with parachutes are overlooking a beautiful view of fields and hills.',
'Several people mow the grass.',
'Your little girl wrote about how well your farewell activity went.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### sentence-transformers/all-nli
* Dataset: [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 942 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 21.24 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.16 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.51</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------------------------|:---------------------------------------|:-----------------|
| <code>somehow, somewhere.</code> | <code>Someplace, in some way.</code> | <code>1.0</code> |
| <code>A boy is sitting on a boat with two flags.</code> | <code>A blonde person sitting.</code> | <code>0.5</code> |
| <code>A asian male suit clad, uses a umbrella to shield himself from the rain.</code> | <code>He is late for a meeting.</code> | <code>0.5</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Evaluation Dataset
#### sentence-transformers/all-nli
* Dataset: [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 120 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 21.39 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.47 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.58</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------|:-----------------|
| <code>When we leave the house we shall be followed again, but not molested, FOR IT IS Mr. BROWN'S PLAN THAT WE ARE TO LEAD HIM.</code> | <code>Mr. Brown has made a plan for us to lead him.</code> | <code>1.0</code> |
| <code>She hates me."</code> | <code>She loves me.</code> | <code>0.0</code> |
| <code>That, too, was locked or bolted on the inside.</code> | <code>She didn't want anyone to enter the room.</code> | <code>0.5</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
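A hedged sketch of passing these non-default values to the Sentence Transformers trainer (argument names follow the `SentenceTransformerTrainingArguments` API of the framework versions listed further below; the output directory is a placeholder):
```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Sketch only: mirrors the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
)
```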
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on microsoft/mpnet-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("amorfati/custom-emb-model")
# Run inference
sentences = [
'Several people with parachutes are overlooking a beautiful view of fields and hills.',
'Several people mow the grass.',
'Your little girl wrote about how well your farewell activity went.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### sentence-transformers/all-nli
* Dataset: [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 942 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 21.24 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.16 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.51</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------------------------|:---------------------------------------|:-----------------|
| <code>somehow, somewhere.</code> | <code>Someplace, in some way.</code> | <code>1.0</code> |
| <code>A boy is sitting on a boat with two flags.</code> | <code>A blonde person sitting.</code> | <code>0.5</code> |
| <code>A asian male suit clad, uses a umbrella to shield himself from the rain.</code> | <code>He is late for a meeting.</code> | <code>0.5</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Evaluation Dataset
#### sentence-transformers/all-nli
* Dataset: [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 120 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 21.39 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.47 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.58</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------|:-----------------|
| <code>When we leave the house we shall be followed again, but not molested, FOR IT IS Mr. BROWN'S PLAN THAT WE ARE TO LEAD HIM.</code> | <code>Mr. Brown has made a plan for us to lead him.</code> | <code>1.0</code> |
| <code>She hates me."</code> | <code>She loves me.</code> | <code>0.0</code> |
| <code>That, too, was locked or bolted on the inside.</code> | <code>She didn't want anyone to enter the room.</code> | <code>0.5</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "microsoft/mpnet-base", "datasets": ["sentence-transformers/all-nli"], "language": ["en"], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:942", "loss:CoSENTLoss"], "widget": [{"source_sentence": "The entire city was surrounded by open countryside with a scattering of small villages.", "sentences": ["Let's leave it.", "It was proven that Mrs. Vandemeyer and the girl were hiding something.", "There is only one large village in the countryside."]}, {"source_sentence": "or just get out and walk uh or even jog a little although i don't do that regularly but Washington's a great place to do that", "sentences": ["\"Washington's is a great place for a walk or a jog.\"", "A man has some bananas.", "The sidewalk was deserted."]}, {"source_sentence": "A woman walks by a brick building that's covered with graffiti.", "sentences": ["When I was in high school, my favorite author was Virginia Wolf.", "A woman is outside.", "A man in a photo booth at a carnival."]}, {"source_sentence": "A woman swinging a tennis racket on an outdoor court.", "sentences": ["A woman walking on an old bridge near a mountain.", "A woman is playing basketball at the park.", "Yanomamo eats food."]}, {"source_sentence": "Several people with parachutes are overlooking a beautiful view of fields and hills.", "sentences": ["Your little girl wrote about how well your farewell activity went.", "The Crosethe Rue De Rivoli was built for Cardinal Richelieu to live in.", "Several people mow the grass."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,527 |
VMware/tinyroberta-quantized-mrqa
|
VMware
|
question-answering
|
[
"transformers",
"onnx",
"roberta",
"question-answering",
"en",
"dataset:mrqa",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2023-02-17T21:42:03Z |
2023-02-24T19:05:58+00:00
| 458 | 1 |
---
datasets:
- mrqa
language:
- en
license: apache-2.0
metrics:
- exact_match
- f1
model-index:
- name: VMware/tinyroberta-mrqa
results:
- task:
type: Extractive Question-Answering
dataset:
name: mrqa
type: mrqa
metrics:
- type: exact_match
value: 69.21
name: Eval EM
- type: f1
value: 79.65
name: Eval F1
- type: exact_match
value: 52.8
name: Test EM
- type: f1
value: 63.4
name: Test F1
---
# VMware/TinyRoBERTa-quantized-mrqa
Int-8 dynamic quantized version of [VMware/tinyroberta-mrqa](https://huggingface.co/VMware/tinyroberta-mrqa).
## Overview
- **Model name:** tinyroberta-quantized-mrqa
- **Model type:** Extractive Question Answering
- **Teacher Model:** [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa)
- **Full Precision Model:** [VMware/tinyroberta-mrqa](https://huggingface.co/VMware/tinyroberta-mrqa)
- **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering)
- **Training data size:** 516,819 examples
- **Language:** English
- **Framework:** ONNX
- **Model version:** 1.0
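For context, a dynamic int-8 quantization of the full-precision checkpoint could be reproduced roughly as follows with Optimum (a sketch, not the exact export script used for this repository; the quantization config, output path, and `export` argument name are assumptions and may differ across Optimum versions):
```python
from optimum.onnxruntime import ORTModelForQuestionAnswering, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export the full-precision model to ONNX, then apply dynamic int-8 quantization.
onnx_model = ORTModelForQuestionAnswering.from_pretrained("VMware/tinyroberta-mrqa", export=True)
quantizer = ORTQuantizer.from_pretrained(onnx_model)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)  # assumed config
quantizer.quantize(save_dir="tinyroberta-quantized-mrqa", quantization_config=qconfig)
```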
## Usage
### In Transformers
```python
from optimum.onnxruntime import ORTModelForQuestionAnswering
from transformers import pipeline, AutoTokenizer
model_name = 'VMware/tinyroberta-quantized-mrqa'
tokenizer = AutoTokenizer.from_pretrained(model_name)
quantized_model = ORTModelForQuestionAnswering.from_pretrained(model_name, file_name="model_quantize.onnx")
qa_model = pipeline('question-answering', model=quantized_model, tokenizer=tokenizer)
# Keys must be "question" and "context" for the question-answering pipeline.
QA_input = {
    "context": "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT.",
    "question": "What is MRQA?"
}

qa_answers = qa_model(QA_input)
print(qa_answers["answer"])
```
# Limitations and Bias
The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include:
- Language: The model is designed to work with English text only and may not perform as well on other languages.
- Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge.
- Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets.
In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
| null |
Non_BioNLP
|
# VMware/TinyRoBERTa-quantized-mrqa
Int-8 dynamic quantized version of [VMware/tinyroberta-mrqa](https://huggingface.co/VMware/tinyroberta-mrqa).
## Overview
- **Model name:** tinyroberta-quantized-mrqa
- **Model type:** Extractive Question Answering
- **Teacher Model:** [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa)
- **Full Precision Model:** [VMware/tinyroberta-mrqa](https://huggingface.co/VMware/tinyroberta-mrqa)
- **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering)
- **Training data size:** 516,819 examples
- **Language:** English
- **Framework:** ONNX
- **Model version:** 1.0
## Usage
### In Transformers
```python
from optimum.onnxruntime import ORTModelForQuestionAnswering
from transformers import pipeline, AutoTokenizer
model_name = 'VMware/tinyroberta-quantized-mrqa'
tokenizer = AutoTokenizer.from_pretrained(model_name)
quantized_model = ORTModelForQuestionAnswering.from_pretrained(model_name, file_name="model_quantize.onnx")
qa_model = pipeline('question-answering', model=quantized_model, tokenizer=tokenizer)
# Keys must be "question" and "context" for the question-answering pipeline.
QA_input = {
    "context": "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT.",
    "question": "What is MRQA?"
}

qa_answers = qa_model(QA_input)
print(qa_answers["answer"])
```
# Limitations and Bias
The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include:
- Language: The model is designed to work with English text only and may not perform as well on other languages.
- Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge.
- Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets.
In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
|
{"datasets": ["mrqa"], "language": ["en"], "license": "apache-2.0", "metrics": ["exact_match", "f1"], "model-index": [{"name": "VMware/tinyroberta-mrqa", "results": [{"task": {"type": "Extractive Question-Answering"}, "dataset": {"name": "mrqa", "type": "mrqa"}, "metrics": [{"type": "exact_match", "value": 69.21, "name": "Eval EM"}, {"type": "f1", "value": 79.65, "name": "Eval F1"}, {"type": "exact_match", "value": 52.8, "name": "Test EM"}, {"type": "f1", "value": 63.4, "name": "Test F1"}]}]}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 43,528 |
edangx100/phrasebank-sentiment-analysis
|
edangx100
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-27T06:55:29Z |
2023-10-27T06:56:02+00:00
| 96 | 0 |
---
base_model: bert-base-uncased
datasets:
- financial_phrasebank
license: apache-2.0
metrics:
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: phrasebank-sentiment-analysis
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
config: sentences_50agree
split: train
args: sentences_50agree
metrics:
- type: f1
value: 0.8528839892771511
name: F1
- type: accuracy
value: 0.8693259972489684
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phrasebank-sentiment-analysis
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4498
- F1: 0.8529
- Accuracy: 0.8693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.6174 | 0.94 | 100 | 0.5004 | 0.7752 | 0.7964 |
| 0.3075 | 1.89 | 200 | 0.3373 | 0.8456 | 0.8693 |
| 0.153 | 2.83 | 300 | 0.4317 | 0.8444 | 0.8618 |
| 0.0951 | 3.77 | 400 | 0.4498 | 0.8529 | 0.8693 |
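For illustration, inference could be run with the `transformers` pipeline (a minimal sketch; the example sentence is a placeholder and the returned label names depend on the `id2label` mapping stored with the model):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="edangx100/phrasebank-sentiment-analysis")
print(classifier("Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in 2007."))
# e.g. [{'label': 'LABEL_2', 'score': ...}] unless id2label maps to negative/neutral/positive
```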
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phrasebank-sentiment-analysis
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4498
- F1: 0.8529
- Accuracy: 0.8693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.6174 | 0.94 | 100 | 0.5004 | 0.7752 | 0.7964 |
| 0.3075 | 1.89 | 200 | 0.3373 | 0.8456 | 0.8693 |
| 0.153 | 2.83 | 300 | 0.4317 | 0.8444 | 0.8618 |
| 0.0951 | 3.77 | 400 | 0.4498 | 0.8529 | 0.8693 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
{"base_model": "bert-base-uncased", "datasets": ["financial_phrasebank"], "license": "apache-2.0", "metrics": ["f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "phrasebank-sentiment-analysis", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "financial_phrasebank", "type": "financial_phrasebank", "config": "sentences_50agree", "split": "train", "args": "sentences_50agree"}, "metrics": [{"type": "f1", "value": 0.8528839892771511, "name": "F1"}, {"type": "accuracy", "value": 0.8693259972489684, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,529 |
gmenchetti/bart-large-mnli-empathy-tuned
|
gmenchetti
|
text-classification
|
[
"setfit",
"safetensors",
"bart",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:facebook/bart-large-mnli",
"base_model:finetune:facebook/bart-large-mnli",
"region:us"
] | 2024-04-03T23:56:32Z |
2024-04-03T23:58:15+00:00
| 9 | 0 |
---
base_model: facebook/bart-large-mnli
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget: []
inference: true
---
# SetFit with facebook/bart-large-mnli
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 1024 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("gmenchetti/bart-large-mnli-empathy-tuned")
# Run inference
preds = model("I loved the spiderman movie!")
```
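Fine-tuning on a small labeled dataset could look roughly like this (a sketch using the SetFit `Trainer` API from the framework versions listed below; the example texts, labels, hyperparameter values, and output path are placeholders):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot dataset: a handful of labeled examples per class.
train_dataset = Dataset.from_dict({
    "text": ["I really appreciated the thoughtful reply.", "That response felt dismissive."],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("facebook/bart-large-mnli")
args = TrainingArguments(batch_size=8, num_epochs=1)  # placeholder values
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
model.save_pretrained("bart-large-mnli-empathy-tuned")  # placeholder output path
```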
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.6.1
- Transformers: 4.39.3
- PyTorch: 2.0.0.post200
- Datasets: 2.16.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with facebook/bart-large-mnli
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 1024 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("gmenchetti/bart-large-mnli-empathy-tuned")
# Run inference
preds = model("I loved the spiderman movie!")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.6.1
- Transformers: 4.39.3
- PyTorch: 2.0.0.post200
- Datasets: 2.16.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "facebook/bart-large-mnli", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [], "inference": true}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,530 |
huoxu/test-bge-m3-bert
|
huoxu
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-25T13:52:50Z |
2024-07-25T22:56:45+00:00
| 9 | 0 |
---
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
widget: []
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("huoxu/test-bge-m3-bert")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.2.0
- Accelerate: 0.27.2
- Datasets: 2.17.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("huoxu/test-bge-m3-bert")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.2.0
- Accelerate: 0.27.2
- Datasets: 2.17.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction"], "widget": []}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,532 |
TransferGraph/bert-large-uncased-finetuned-lora-tweet_eval_hate
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:google-bert/bert-large-uncased",
"base_model:adapter:google-bert/bert-large-uncased",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-02-29T13:52:13Z |
2024-02-29T13:52:15+00:00
| 0 | 0 |
---
base_model: bert-large-uncased
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: bert-large-uncased-finetuned-lora-tweet_eval_hate
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: hate
split: validation
args: hate
metrics:
- type: accuracy
value: 0.754
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.572 | None | 0 |
| 0.684 | 0.6226 | 0 |
| 0.726 | 0.5198 | 1 |
| 0.738 | 0.4748 | 2 |
| 0.754 | 0.4592 | 3 |
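Since this repository contains a PEFT LoRA adapter rather than full model weights, inference would typically attach the adapter to the base model; a hedged sketch (the two-class head is assumed from the binary `hate` subset):
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

adapter_id = "TransferGraph/bert-large-uncased-finetuned-lora-tweet_eval_hate"
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path, num_labels=2)  # assumed 2 labels
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

inputs = tokenizer("example tweet text", return_tensors="pt")
pred = model(**inputs).logits.argmax(dim=-1).item()
```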
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.572 | None | 0 |
| 0.684 | 0.6226 | 0 |
| 0.726 | 0.5198 | 1 |
| 0.738 | 0.4748 | 2 |
| 0.754 | 0.4592 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "bert-large-uncased", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "bert-large-uncased-finetuned-lora-tweet_eval_hate", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "hate", "split": "validation", "args": "hate"}, "metrics": [{"type": "accuracy", "value": 0.754, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,535 |
clayygodd/distilbert-base-uncased-finetuned-clinc
|
clayygodd
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-27T03:32:14Z |
2023-04-27T05:54:06+00:00
| 12 | 0 |
---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.9180645161290323
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
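As with other intent classifiers fine-tuned on clinc_oos, inference could be run through the `transformers` pipeline (a minimal sketch; the query is a placeholder and label names come from the `id2label` mapping saved with the model):
```python
from transformers import pipeline

intent_classifier = pipeline("text-classification", model="clayygodd/distilbert-base-uncased-finetuned-clinc")
print(intent_classifier("Can you transfer $100 to my savings account?", top_k=3))  # top_k requires a recent transformers release
```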
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9180645161290323, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,537 |
TheBloke/Redmond-Puffin-13B-GPTQ
|
TheBloke
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"sft",
"eng",
"dataset:LDJnr/Puffin",
"base_model:NousResearch/Redmond-Puffin-13B",
"base_model:quantized:NousResearch/Redmond-Puffin-13B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | 2023-07-19T10:29:56Z |
2023-09-27T12:44:50+00:00
| 200 | 27 |
---
base_model: NousResearch/Redmond-Puffin-13B
datasets:
- LDJnr/Puffin
language:
- eng
license:
- mit
model_name: Redmond Puffin 13B V1.3
tags:
- llama-2
- sft
inference: false
model_creator: NousResearch
model_type: llama
prompt_template: '### human: {prompt}
### response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Redmond Puffin 13B V1.3 - GPTQ
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Redmond Puffin 13B V1.3](https://huggingface.co/NousResearch/Redmond-Puffin-13B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [NousResearch's Redmond Puffin 13B V1.3](https://huggingface.co/NousResearch/Redmond-Puffin-13B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Redmond-Puffin-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGUF)
* [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Redmond-Puffin-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Human-Response2
```
### human: {prompt}
### response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `['mit']`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [NousResearch's Redmond Puffin 13B V1.3](https://huggingface.co/NousResearch/Redmond-Puffin-13B).
<!-- licensing end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/main) | 4 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.95 GB | No | 8-bit, with group size 64g and Act Order for even higher inference quality. Poor AutoGPTQ CUDA speed. |
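To make the table columns concrete: when quantising a model yourself with recent Transformers, Optimum and AutoGPTQ, the same Bits / GS / Act Order / Damp % settings are expressed through `GPTQConfig`. The sketch below is illustrative only; it is not part of this repo's workflow (the branches above are already quantised), and the calibration dataset string is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

# Mirrors the "main" branch row above: 4-bit, group size 128, no Act Order, damp 0.01.
base_model = "NousResearch/Redmond-Puffin-13B"
tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)

gptq_config = GPTQConfig(
    bits=4,               # "Bits" column
    group_size=128,       # "GS" column
    desc_act=False,       # "Act Order" column
    damp_percent=0.01,    # "Damp %" column
    dataset="wikitext2",  # calibration data; assumed stand-in for the wikitext set linked above
    tokenizer=tokenizer,
)

# Passing the config to from_pretrained triggers quantisation; this needs a GPU and time,
# and is shown only to map the table onto code.
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=gptq_config,
    device_map="auto",
)
```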
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Redmond-Puffin-13B-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
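(The `revision` example referred to above appears in the Python section further down.) If you only want to fetch a branch's files without loading the model, the `huggingface_hub` library can also target a branch via its `revision` argument. This is an optional extra not covered by the original instructions, shown as a hedged sketch:

```python
from huggingface_hub import snapshot_download

# Downloads every file from the chosen branch (see the Provided Files table) into local_dir.
snapshot_download(
    repo_id="TheBloke/Redmond-Puffin-13B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="Redmond-Puffin-13B-GPTQ",
)
```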
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Redmond-Puffin-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Redmond-Puffin-13B-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Redmond-Puffin-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Redmond-Puffin-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''### human: {prompt}
### response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: NousResearch's Redmond Puffin 13B V1.3
## **Redmond-Puffin-13b-V1.3**
**The first commercially available language model released by Nous Research!**
Redmond-Puffin-13B is likely the world's first Llama-2-based, fine-tuned language model, leveraging a hand-curated set of 3K high-quality examples, many of which take full advantage of the 4096 context length of Llama 2. This model was fine-tuned by Nous Research, with LDJ leading the training and dataset curation, along with significant dataset formation contributions by J-Supha.
Special thank you to Redmond AI for sponsoring the compute.
Special thank you to Emozilla for assisting with training experimentations and many issues encountered during training.
Notable mentions for assisting in some of the training issues go to: Caseus and Teknium.
## Model Training
Redmond-Puffin 13B-V1.3 is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.
## Prompt Format
The recommended model usage is:
```
### human:
### response:
```
Optional recommended pre-prompt / system prompt:
```
### human: Interact in conversation to the best of your ability, please be concise, logical, intelligent and coherent.
### response: Sure! sounds good.
```
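Because Puffin is trained mostly on multi-turn conversations, it can help to assemble an entire chat history into this format programmatically. A minimal sketch follows; the helper function and example turns are our own illustration, not part of the original release:

```python
def build_puffin_prompt(turns, pre_prompt=None):
    """Format (human, response) turns into Puffin's '### human:' / '### response:' layout.

    Pass response=None on the last turn to leave the prompt open for the model to complete.
    """
    parts = []
    if pre_prompt is not None:
        # Mirrors the optional pre-prompt shown above, including the short acknowledgement.
        parts.append(f"### human: {pre_prompt}\n\n### response: Sure! sounds good.")
    for human, response in turns:
        if response is None:
            parts.append(f"### human: {human}\n\n### response:")
        else:
            parts.append(f"### human: {human}\n\n### response: {response}")
    return "\n\n".join(parts)

prompt = build_puffin_prompt(
    [("What is the capital of France?", "The capital of France is Paris."),
     ("And roughly how many people live there?", None)],
    pre_prompt="Interact in conversation to the best of your ability, please be concise, logical, intelligent and coherent.",
)
```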
## When should I use Puffin or Hermes 2?
Puffin and Hermes-2 both beat previous SOTA for GPT4ALL benchmarks, with Hermes-2 winning by a 0.1% margin over Puffin.
- Hermes 2 is trained on purely single turn instruction examples.
- Puffin is trained mostly on multi-turn, long context, highly curated and cleaned GPT-4 conversations with real humans, as well as curated single-turn examples relating to Physics, Bio, Math and Chem.
For these reasons, it's recommended to give Puffin a try if you want to have multi-turn conversations and/or long context communication.
## Example Outputs!:





## Notable Features:
- The first Llama-2 based fine-tuned model released by Nous Research.
- Ability to recall information up to 2023 without internet (ChatGPT's cut-off date is in 2021)
- Pretrained on 2 trillion tokens of text. (This is double the amount of most open LLMs)
- Pretrained with a context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit.
- The first commercially available language model released by Nous Research.
## Current Limitations
Some token mismatch problems and formatting issues have been identified; these may affect the current output quality.
We plan to have these solved in an updated Puffin model in the very near future, please stay tuned!
## Future Plans
This is a relatively early build amongst the grand plans for the future of Puffin!
Current limitations: some token mismatch problems have been identified; these may affect the current output quality. We plan to have this solved in Puffin V2 along with other improvements.
## How you can help!
In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord!
## Benchmarks!
As of Puffin's release, it achieves a new SOTA for the GPT4All benchmarks, supplanting Hermes for the #1 position!
(Rounded to nearest tenth)
Previous SOTA: Hermes - 68.8
New SOTA: Puffin - 69.9 (+1.1)
Note: after release, Puffin has since had its average GPT4All score beaten by 0.1% by Nous' very own model Hermes-2!
Latest SOTA w/ Hermes 2 - 70.0 (+0.1 over Puffin's 69.9 score)
That being said, Puffin supplants Hermes-2 for the #1 spot in Arc-E, HellaSwag and Winogrande!
Puffin also perfectly ties with Hermes in PIQA; however, Hermes-2 still excels in much of Big Bench and AGIEval, so it's highly recommended you give it a try as well!
GPT4all :
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4983|± |0.0146|
| | |acc_norm|0.5068|± |0.0146|
|arc_easy | 0|acc |0.7980|± |0.0082|
| | |acc_norm|0.7757|± |0.0086|
|boolq | 1|acc |0.8150|± |0.0068|
|hellaswag | 0|acc |0.6132|± |0.0049|
| | |acc_norm|0.8043|± |0.0040|
|openbookqa | 0|acc |0.3560|± |0.0214|
| | |acc_norm|0.4560|± |0.0223|
|piqa | 0|acc |0.7954|± |0.0094|
| | |acc_norm|0.8069|± |0.0092|
|winogrande | 0|acc |0.7245|± |0.0126|
```
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5368|± |0.0363|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7127|± |0.0236|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3023|± |0.0286|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1003|± |0.0159|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2520|± |0.0194|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.1743|± |0.0143|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4200|± |0.0285|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2900|± |0.0203|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5430|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4442|± |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2074|± |0.0128|
|bigbench_snarks | 0|multiple_choice_grade|0.5083|± |0.0373|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.4970|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3260|± |0.0148|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2136|± |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1326|± |0.0081|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4200|± |0.0285|
```
AGI Eval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2283|± |0.0264|
| | |acc_norm|0.2244|± |0.0262|
|agieval_logiqa_en | 0|acc |0.2780|± |0.0176|
| | |acc_norm|0.3164|± |0.0182|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2043|± |0.0266|
|agieval_lsat_lr | 0|acc |0.3392|± |0.0210|
| | |acc_norm|0.2961|± |0.0202|
|agieval_lsat_rc | 0|acc |0.4387|± |0.0303|
| | |acc_norm|0.3569|± |0.0293|
|agieval_sat_en | 0|acc |0.5874|± |0.0344|
| | |acc_norm|0.5194|± |0.0349|
|agieval_sat_en_without_passage| 0|acc |0.4223|± |0.0345|
| | |acc_norm|0.3447|± |0.0332|
|agieval_sat_math | 0|acc |0.3364|± |0.0319|
| | |acc_norm|0.2773|± |0.0302|
```
| null |
Non_BioNLP
|
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Redmond Puffin 13B V1.3 - GPTQ
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Redmond Puffin 13B V1.3](https://huggingface.co/NousResearch/Redmond-Puffin-13B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [NousResearch's Redmond Puffin 13B V1.3](https://huggingface.co/NousResearch/Redmond-Puffin-13B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Redmond-Puffin-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGUF)
* [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Redmond-Puffin-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Human-Response2
```
### human: {prompt}
### response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `['mit']`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [NousResearch's Redmond Puffin 13B V1.3](https://huggingface.co/NousResearch/Redmond-Puffin-13B).
<!-- licensing end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/main) | 4 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.95 GB | No | 8-bit, with group size 64g and Act Order for even higher inference quality. Poor AutoGPTQ CUDA speed. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Redmond-Puffin-13B-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Redmond-Puffin-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Redmond-Puffin-13B-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Redmond-Puffin-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Redmond-Puffin-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''### human: {prompt}
### response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: NousResearch's Redmond Puffin 13B V1.3
## **Redmond-Puffin-13b-V1.3**
**The first commercially available language model released by Nous Research!**
Redmond-Puffin-13B is likely the world's first Llama-2-based, fine-tuned language model, leveraging a hand-curated set of 3K high-quality examples, many of which take full advantage of the 4096 context length of Llama 2. This model was fine-tuned by Nous Research, with LDJ leading the training and dataset curation, along with significant dataset formation contributions by J-Supha.
Special thank you to Redmond AI for sponsoring the compute.
Special thank you to Emozilla for assisting with training experimentations and many issues encountered during training.
Notable mentions for assisting in some of the training issues go to: Caseus and Teknium.
## Model Training
Redmond-Puffin 13B-V1.3 is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.
## Prompt Format
The recommended model usage is:
```
### human:
### response:
```
Optional recommended pre-prompt / system prompt:
```
### human: Interact in conversation to the best of your ability, please be concise, logical, intelligent and coherent.
### response: Sure! sounds good.
```
## When should I use Puffin or Hermes 2?
Puffin and Hermes-2 both beat previous SOTA for GPT4ALL benchmarks, with Hermes-2 winning by a 0.1% margin over Puffin.
- Hermes 2 is trained on purely single turn instruction examples.
- Puffin is trained mostly on multi-turn, long context, highly curated and cleaned GPT-4 conversations with real humans, as well as curated single-turn examples relating to Physics, Bio, Math and Chem.
For these reasons, it's recommended to give Puffin a try if you want to have multi-turn conversations and/or long context communication.
## Example Outputs!:





## Notable Features:
- The first Llama-2 based fine-tuned model released by Nous Research.
- Ability to recall information up to 2023 without internet (ChatGPT's cut-off date is in 2021)
- Pretrained on 2 trillion tokens of text. (This is double the amount of most open LLMs)
- Pretrained with a context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit.
- The first commercially available language model released by Nous Research.
## Current Limitations
Some token mismatch problems and formatting issues have been identified; these may affect the current output quality.
We plan to have these solved in an updated Puffin model in the very near future, please stay tuned!
## Future Plans
This is a relatively early build amongst the grand plans for the future of Puffin!
Current limitations: some token mismatch problems have been identified; these may affect the current output quality. We plan to have this solved in Puffin V2 along with other improvements.
## How you can help!
In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord!
## Benchmarks!
As of Puffin's release, it achieves a new SOTA for the GPT4All benchmarks, supplanting Hermes for the #1 position!
(Rounded to nearest tenth)
Previous SOTA: Hermes - 68.8
New SOTA: Puffin - 69.9 (+1.1)
Note: after release, Puffin has since had its average GPT4All score beaten by 0.1% by Nous' very own model Hermes-2!
Latest SOTA w/ Hermes 2 - 70.0 (+0.1 over Puffin's 69.9 score)
That being said, Puffin supplants Hermes-2 for the #1 spot in Arc-E, HellaSwag and Winogrande!
Puffin also perfectly ties with Hermes in PIQA; however, Hermes-2 still excels in much of Big Bench and AGIEval, so it's highly recommended you give it a try as well!
GPT4all :
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4983|± |0.0146|
| | |acc_norm|0.5068|± |0.0146|
|arc_easy | 0|acc |0.7980|± |0.0082|
| | |acc_norm|0.7757|± |0.0086|
|boolq | 1|acc |0.8150|± |0.0068|
|hellaswag | 0|acc |0.6132|± |0.0049|
| | |acc_norm|0.8043|± |0.0040|
|openbookqa | 0|acc |0.3560|± |0.0214|
| | |acc_norm|0.4560|± |0.0223|
|piqa | 0|acc |0.7954|± |0.0094|
| | |acc_norm|0.8069|± |0.0092|
|winogrande | 0|acc |0.7245|± |0.0126|
```
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5368|± |0.0363|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7127|± |0.0236|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3023|± |0.0286|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1003|± |0.0159|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2520|± |0.0194|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.1743|± |0.0143|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4200|± |0.0285|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2900|± |0.0203|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5430|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4442|± |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2074|± |0.0128|
|bigbench_snarks | 0|multiple_choice_grade|0.5083|± |0.0373|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.4970|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3260|± |0.0148|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2136|± |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1326|± |0.0081|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4200|± |0.0285|
```
AGI Eval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2283|± |0.0264|
| | |acc_norm|0.2244|± |0.0262|
|agieval_logiqa_en | 0|acc |0.2780|± |0.0176|
| | |acc_norm|0.3164|± |0.0182|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2043|± |0.0266|
|agieval_lsat_lr | 0|acc |0.3392|± |0.0210|
| | |acc_norm|0.2961|± |0.0202|
|agieval_lsat_rc | 0|acc |0.4387|± |0.0303|
| | |acc_norm|0.3569|± |0.0293|
|agieval_sat_en | 0|acc |0.5874|± |0.0344|
| | |acc_norm|0.5194|± |0.0349|
|agieval_sat_en_without_passage| 0|acc |0.4223|± |0.0345|
| | |acc_norm|0.3447|± |0.0332|
|agieval_sat_math | 0|acc |0.3364|± |0.0319|
| | |acc_norm|0.2773|± |0.0302|
```
|
{"base_model": "NousResearch/Redmond-Puffin-13B", "datasets": ["LDJnr/Puffin"], "language": ["eng"], "license": ["mit"], "model_name": "Redmond Puffin 13B V1.3", "tags": ["llama-2", "sft"], "inference": false, "model_creator": "NousResearch", "model_type": "llama", "prompt_template": "### human: {prompt}\n\n### response:\n", "quantized_by": "TheBloke"}
|
task
|
[
"TRANSLATION"
] | 43,538 |
IAmSkyDra/BARTBana_Translation_v01
|
IAmSkyDra
|
text2text-generation
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:vinai/bartpho-syllable",
"base_model:finetune:vinai/bartpho-syllable",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-01-22T02:59:58Z |
2025-01-22T04:47:34+00:00
| 4 | 0 |
---
base_model: vinai/bartpho-syllable
library_name: transformers
license: mit
metrics:
- sacrebleu
tags:
- generated_from_trainer
model-index:
- name: BARTBana_Translation_v01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BARTBana_Translation_v01
This model is a fine-tuned version of [vinai/bartpho-syllable](https://huggingface.co/vinai/bartpho-syllable) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4506
- Sacrebleu: 11.0518
## Model description
More information needed
## Intended uses & limitations
More information needed
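Until the card is filled in, a hedged loading sketch may still be useful. Everything below is an assumption: the checkpoint is assumed to expose the standard seq2seq API of its `bartpho-syllable` base, and the input sentence and generation settings are placeholders only:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "IAmSkyDra/BARTBana_Translation_v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Xin chào"  # placeholder source sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```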
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|
| 0.7912 | 1.0 | 742 | 0.6747 | 2.7391 |
| 0.6623 | 2.0 | 1484 | 0.5821 | 5.7331 |
| 0.5662 | 3.0 | 2226 | 0.5306 | 7.8025 |
| 0.5337 | 4.0 | 2968 | 0.5027 | 8.6825 |
| 0.4913 | 5.0 | 3710 | 0.4858 | 9.4253 |
| 0.4711 | 6.0 | 4452 | 0.4725 | 9.8687 |
| 0.4399 | 7.0 | 5194 | 0.4638 | 10.2200 |
| 0.4208 | 8.0 | 5936 | 0.4584 | 10.3954 |
| 0.4096 | 9.0 | 6678 | 0.4533 | 10.7019 |
| 0.3924 | 10.0 | 7420 | 0.4509 | 10.8096 |
| 0.3796 | 11.0 | 8162 | 0.4499 | 10.7237 |
| 0.3704 | 12.0 | 8904 | 0.4502 | 10.9647 |
| 0.3618 | 13.0 | 9646 | 0.4494 | 10.9840 |
| 0.3556 | 14.0 | 10388 | 0.4496 | 11.0433 |
| 0.3476 | 15.0 | 11130 | 0.4506 | 11.0518 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
| null |
TBD
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BARTBana_Translation_v01
This model is a fine-tuned version of [vinai/bartpho-syllable](https://huggingface.co/vinai/bartpho-syllable) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4506
- Sacrebleu: 11.0518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|
| 0.7912 | 1.0 | 742 | 0.6747 | 2.7391 |
| 0.6623 | 2.0 | 1484 | 0.5821 | 5.7331 |
| 0.5662 | 3.0 | 2226 | 0.5306 | 7.8025 |
| 0.5337 | 4.0 | 2968 | 0.5027 | 8.6825 |
| 0.4913 | 5.0 | 3710 | 0.4858 | 9.4253 |
| 0.4711 | 6.0 | 4452 | 0.4725 | 9.8687 |
| 0.4399 | 7.0 | 5194 | 0.4638 | 10.2200 |
| 0.4208 | 8.0 | 5936 | 0.4584 | 10.3954 |
| 0.4096 | 9.0 | 6678 | 0.4533 | 10.7019 |
| 0.3924 | 10.0 | 7420 | 0.4509 | 10.8096 |
| 0.3796 | 11.0 | 8162 | 0.4499 | 10.7237 |
| 0.3704 | 12.0 | 8904 | 0.4502 | 10.9647 |
| 0.3618 | 13.0 | 9646 | 0.4494 | 10.9840 |
| 0.3556 | 14.0 | 10388 | 0.4496 | 11.0433 |
| 0.3476 | 15.0 | 11130 | 0.4506 | 11.0518 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
{"base_model": "vinai/bartpho-syllable", "library_name": "transformers", "license": "mit", "metrics": ["sacrebleu"], "tags": ["generated_from_trainer"], "model-index": [{"name": "BARTBana_Translation_v01", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 43,539 |
gokulsrinivasagan/bert_base_lda_20_cola
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_lda_20",
"base_model:finetune:gokulsrinivasagan/bert_base_lda_20",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-22T04:50:36Z |
2024-11-22T05:02:07+00:00
| 5 | 0 |
---
base_model: gokulsrinivasagan/bert_base_lda_20
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- matthews_correlation
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_base_lda_20_cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.0
name: Matthews Correlation
- type: accuracy
value: 0.6912751793861389
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_20_cola
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_20](https://huggingface.co/gokulsrinivasagan/bert_base_lda_20) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6174
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:--------:|:---------------:|:--------------------:|
| 0.6728 | 1.0 | 90 | 0.6913 | 0.6253 | 0.0 |
| 0.6083 | 2.0 | 180 | 0.6913 | 0.6233 | 0.0 |
| 0.61 | 3.0 | 270 | 0.6913 | 0.6210 | 0.0 |
| 0.617 | 4.0 | 360 | 0.6913 | 0.6191 | 0.0 |
| 0.6179        | 5.0   | 450  | 0.6913   | 0.6197          | 0.0                  |
| 0.6127        | 6.0   | 540  | 0.6913   | 0.6176          | 0.0                  |
| 0.6086        | 7.0   | 630  | 0.6913   | 0.6189          | 0.0                  |
| 0.611         | 8.0   | 720  | 0.6913   | 0.6254          | 0.0                  |
| 0.6077        | 9.0   | 810  | 0.6913   | 0.6231          | 0.0                  |
| 0.61          | 10.0  | 900  | 0.6913   | 0.6174          | 0.0                  |
| 0.6104        | 11.0  | 990  | 0.6913   | 0.6194          | 0.0                  |
| 0.6092        | 12.0  | 1080 | 0.6913   | 0.6194          | 0.0                  |
| 0.6101        | 13.0  | 1170 | 0.6913   | 0.6179          | 0.0                  |
| 0.6071        | 14.0  | 1260 | 0.6913   | 0.6192          | 0.0                  |
| 0.6083        | 15.0  | 1350 | 0.6913   | 0.6183          | 0.0                  |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_20_cola
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_20](https://huggingface.co/gokulsrinivasagan/bert_base_lda_20) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6174
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:--------:|:---------------:|:--------------------:|
| 0.6728 | 1.0 | 90 | 0.6913 | 0.6253 | 0.0 |
| 0.6083 | 2.0 | 180 | 0.6913 | 0.6233 | 0.0 |
| 0.61 | 3.0 | 270 | 0.6913 | 0.6210 | 0.0 |
| 0.617 | 4.0 | 360 | 0.6913 | 0.6191 | 0.0 |
| 0.6179        | 5.0   | 450  | 0.6913   | 0.6197          | 0.0                  |
| 0.6127        | 6.0   | 540  | 0.6913   | 0.6176          | 0.0                  |
| 0.6086        | 7.0   | 630  | 0.6913   | 0.6189          | 0.0                  |
| 0.611         | 8.0   | 720  | 0.6913   | 0.6254          | 0.0                  |
| 0.6077        | 9.0   | 810  | 0.6913   | 0.6231          | 0.0                  |
| 0.61          | 10.0  | 900  | 0.6913   | 0.6174          | 0.0                  |
| 0.6104        | 11.0  | 990  | 0.6913   | 0.6194          | 0.0                  |
| 0.6092        | 12.0  | 1080 | 0.6913   | 0.6194          | 0.0                  |
| 0.6101        | 13.0  | 1170 | 0.6913   | 0.6179          | 0.0                  |
| 0.6071        | 14.0  | 1260 | 0.6913   | 0.6192          | 0.0                  |
| 0.6083        | 15.0  | 1350 | 0.6913   | 0.6183          | 0.0                  |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "gokulsrinivasagan/bert_base_lda_20", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["matthews_correlation", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_base_lda_20_cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.0, "name": "Matthews Correlation"}, {"type": "accuracy", "value": 0.6912751793861389, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,540 |
DFKI/glove-jv-cc100
|
DFKI
| null |
[
"embeddings",
"glove",
"cc100",
"jv",
"arxiv:2409.18193",
"license:cc-by-sa-4.0",
"region:us"
] | 2024-10-03T17:21:37Z |
2024-12-12T09:28:53+00:00
| 0 | 0 |
---
language: jv
license: cc-by-sa-4.0
tags:
- embeddings
- glove
- cc100
---
# CC100 GloVe Embeddings for JV Language
## Model Description
- **Language:** jv
- **Embedding Algorithm:** GloVe (Global Vectors for Word Representation)
- **Vocabulary Size:** 158412
- **Vector Dimensions:** 300
- **Training Data:** CC100 dataset
## Training Information
We trained GloVe embeddings using the original C code. The model was trained by stochastically sampling nonzero elements from the co-occurrence matrix, over 100 iterations, to produce 300-dimensional vectors. We used a context window of ten words to the left and ten words to the right. Words with fewer than 5 co-occurrences were excluded for languages with over 1 million tokens in the training data, and the threshold was set to 2 for languages with smaller datasets.
We used data from CC100 for training the static word embeddings. We set xmax = 100, α = 3/4, and used AdaGrad optimization with an initial learning rate of 0.05.
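The xmax = 100 and α = 3/4 values above parameterise GloVe's co-occurrence weighting function f(x) = (x/xmax)^α for x < xmax, and 1 otherwise. A small restatement in code, for reference only; this is not taken from the training pipeline:

```python
def glove_weight(x, x_max=100.0, alpha=0.75):
    """GloVe weighting: down-weight rare co-occurrences, cap frequent ones at 1."""
    return (x / x_max) ** alpha if x < x_max else 1.0

print(glove_weight(10))   # ~0.178: a rare pair contributes less to the loss
print(glove_weight(500))  # 1.0: frequent pairs are capped
```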
## Usage
These embeddings can be used for various NLP tasks such as text classification, named entity recognition, and as input features for neural networks.
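For example, assuming the vectors are shipped in the standard GloVe text format (one word followed by 300 floats per line; the card does not state the file layout, so treat this as an assumption), they can be loaded with gensim:

```python
from gensim.models import KeyedVectors

# no_header=True handles GloVe's plain text format, which lacks word2vec's count/dim header line.
# The file name below is illustrative only.
vectors = KeyedVectors.load_word2vec_format("glove-jv-cc100.vec", binary=False, no_header=True)

print(vectors["omah"].shape)                 # (300,): "omah" is a hypothetical vocabulary word
print(vectors.most_similar("kota", topn=5))  # hypothetical query word
```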
## Citation
If you use these embeddings in your research, please cite:
```bibtex
@misc{gurgurov2024gremlinrepositorygreenbaseline,
title={GrEmLIn: A Repository of Green Baseline Embeddings for 87 Low-Resource Languages Injected with Multilingual Graph Knowledge},
author={Daniil Gurgurov and Rishu Kumar and Simon Ostermann},
year={2024},
eprint={2409.18193},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.18193},
}
```
## License
These embeddings are released under the [CC-BY-SA 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/).
| null |
Non_BioNLP
|
# CC100 GloVe Embeddings for JV Language
## Model Description
- **Language:** jv
- **Embedding Algorithm:** GloVe (Global Vectors for Word Representation)
- **Vocabulary Size:** 158412
- **Vector Dimensions:** 300
- **Training Data:** CC100 dataset
## Training Information
We trained GloVe embeddings using the original C code. The model was trained by stochastically sampling nonzero elements from the co-occurrence matrix, over 100 iterations, to produce 300-dimensional vectors. We used a context window of ten words to the left and ten words to the right. Words with fewer than 5 co-occurrences were excluded for languages with over 1 million tokens in the training data, and the threshold was set to 2 for languages with smaller datasets.
We used data from CC100 for training the static word embeddings. We set xmax = 100, α = 3/4, and used AdaGrad optimization with an initial learning rate of 0.05.
## Usage
These embeddings can be used for various NLP tasks such as text classification, named entity recognition, and as input features for neural networks.
## Citation
If you use these embeddings in your research, please cite:
```bibtex
@misc{gurgurov2024gremlinrepositorygreenbaseline,
title={GrEmLIn: A Repository of Green Baseline Embeddings for 87 Low-Resource Languages Injected with Multilingual Graph Knowledge},
author={Daniil Gurgurov and Rishu Kumar and Simon Ostermann},
year={2024},
eprint={2409.18193},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.18193},
}
```
## License
These embeddings are released under the [CC-BY-SA 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/).
|
{"language": "jv", "license": "cc-by-sa-4.0", "tags": ["embeddings", "glove", "cc100"]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION",
"TEXT_CLASSIFICATION"
] | 43,541 |
weaverlabs/bayard-1
|
weaverlabs
|
summarization
|
[
"transformers",
"pytorch",
"research",
"LGBTQ+ research",
"summarization",
"en",
"dataset:weaverlabs/bayardoneconversations",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2024-04-19T11:25:02Z |
2024-04-19T11:37:30+00:00
| 28 | 0 |
---
datasets:
- weaverlabs/bayardoneconversations
language:
- en
license: mit
metrics:
- accuracy
- character
pipeline_tag: summarization
tags:
- research
- LGBTQ+ research
---
| null |
Non_BioNLP
|
{"datasets": ["weaverlabs/bayardoneconversations"], "language": ["en"], "license": "mit", "metrics": ["accuracy", "character"], "pipeline_tag": "summarization", "tags": ["research", "LGBTQ+ research"]}
|
task
|
[
"SUMMARIZATION"
] | 43,542 |
|
blockblockblock/Llama3-8B-Instruct-Replete-Adapted-bpw6-exl2
|
blockblockblock
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"dataset:Replete-AI/code_bagel_hermes-2.5",
"dataset:Replete-AI/code_bagel",
"dataset:Replete-AI/OpenHermes-2.5-Uncensored",
"dataset:teknium/OpenHermes-2.5",
"dataset:layoric/tiny-codes-alpaca",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:ajibawa-2023/Code-290k-ShareGPT",
"dataset:TIGER-Lab/MathInstruct",
"dataset:chargoddard/commitpack-ft-instruct-rated",
"dataset:iamturun/code_instructions_120k_alpaca",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:nickrosh/Evol-Instruct-Code-80k-v1",
"dataset:coseal/CodeUltraFeedback_binarized",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:CyberNative/Code_Vulnerability_Security_DPO",
"dataset:jondurbin/airoboros-2.2",
"dataset:camel-ai",
"dataset:lmsys/lmsys-chat-1m",
"dataset:CollectiveCognition/chats-data-2023-09-22",
"dataset:CoT-Alpaca-GPT4",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:teknium/GPT4-LLM-Cleaned",
"dataset:GPTeacher",
"dataset:OpenGPT",
"dataset:meta-math/MetaMathQA",
"dataset:Open-Orca/SlimOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:Unnatural-Instructions-GPT4",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | 2024-07-05T22:43:42Z |
2024-07-05T22:50:33+00:00
| 9 | 0 |
---
datasets:
- Replete-AI/code_bagel_hermes-2.5
- Replete-AI/code_bagel
- Replete-AI/OpenHermes-2.5-Uncensored
- teknium/OpenHermes-2.5
- layoric/tiny-codes-alpaca
- glaiveai/glaive-code-assistant-v3
- ajibawa-2023/Code-290k-ShareGPT
- TIGER-Lab/MathInstruct
- chargoddard/commitpack-ft-instruct-rated
- iamturun/code_instructions_120k_alpaca
- ise-uiuc/Magicoder-Evol-Instruct-110K
- cognitivecomputations/dolphin-coder
- nickrosh/Evol-Instruct-Code-80k-v1
- coseal/CodeUltraFeedback_binarized
- glaiveai/glaive-function-calling-v2
- CyberNative/Code_Vulnerability_Security_DPO
- jondurbin/airoboros-2.2
- camel-ai
- lmsys/lmsys-chat-1m
- CollectiveCognition/chats-data-2023-09-22
- CoT-Alpaca-GPT4
- WizardLM/WizardLM_evol_instruct_70k
- WizardLM/WizardLM_evol_instruct_V2_196k
- teknium/GPT4-LLM-Cleaned
- GPTeacher
- OpenGPT
- meta-math/MetaMathQA
- Open-Orca/SlimOrca
- garage-bAInd/Open-Platypus
- anon8231489123/ShareGPT_Vicuna_unfiltered
- Unnatural-Instructions-GPT4
license: other
license_name: llama-3
license_link: https://llama.meta.com/llama3/license/
tags:
- text-generation-inference
- transformers
- unsloth
- llama
model-index:
- name: Replete-Coder-llama3-8b
results:
- task:
type: text-generation
name: HumanEval
dataset:
name: HumanEval
type: openai_humaneval
metrics:
- type: pass@1
value: 0.6468383584267833
name: pass@1
verified: true
- task:
type: text-generation
name: AI2 Reasoning Challenge
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: accuracy
name: normalized accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: accuracy
name: normalized accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: multiple_choice_accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: accuracy
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
---
This is the meta-llama/Meta-Llama-3-8B-Instruct model with the Replete-AI/Replete-Coder-Llama3-8B adapter applied on top of it.
This is mostly an experiment to see how the model would perform.
Links to the original model and adapter are below:
Original model:
- https://huggingface.co/Replete-AI/Replete-Coder-Llama3-8B
Adapter:
- Coming soon
_________________________________________________________________________________________________________
# Replete-Coder-llama3-8b
Finetuned by: Rombodawg
### More than just a coding model!
Although Replete-Coder has amazing coding capabilities, it's trained on a vast amount of non-coding data, fully cleaned and uncensored. Don't just use it for coding; use it for all your needs! We are truly trying to make the GPT killer!

Thank you to TensorDock for sponsoring Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b
you can check out their website for cloud compute rental below.
- https://tensordock.com
__________________________________________________________________________________________________
Replete-Coder-llama3-8b is a general purpose model that is specially trained in coding in over 100 coding languages. The data used to train the model contains 25% non-code instruction data and 75% coding instruction data totaling up to 3.9 million lines, roughly 1 billion tokens, or 7.27gb of instruct data. The data used to train this model was 100% uncensored, then fully deduplicated, before training happened.
The Replete-Coder models (including Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b) feature the following:
- Advanced coding capabilities in over 100 coding languages
- Advanced code translation (between languages)
- Security and vulnerability prevention related coding capabilities
- General purpose use
- Uncensored use
- Function calling
- Advanced math use
- Use on low end (8b) and mobile (1.5b) platforms
Notice: Replete-Coder series of models are fine-tuned on a context window of 8192 tokens. Performance past this context window is not guaranteed.

__________________________________________________________________________________________________
You can find the 25% non-coding instruction below:
- https://huggingface.co/datasets/Replete-AI/OpenHermes-2.5-Uncensored
And the 75% coding specific instruction data below:
- https://huggingface.co/datasets/Replete-AI/code_bagel
These two datasets were combined to create the final dataset for training, which is linked below:
- https://huggingface.co/datasets/Replete-AI/code_bagel_hermes-2.5
__________________________________________________________________________________________________
## Prompt Template: Custom Alpaca
```
### System:
{}
### Instruction:
{}
### Response:
{}
```
Note: The system prompt varies in training data, but the most commonly used one is:
```
Below is an instruction that describes a task, Write a response that appropriately completes the request.
```
End token:
```
<|endoftext|>
```
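As a small illustrative sketch (not part of the original card), filling this template in Python could look like:
```python
SYSTEM = (
    "Below is an instruction that describes a task, "
    "Write a response that appropriately completes the request."
)  # the most common system prompt, copied verbatim from above

def build_prompt(instruction: str, system: str = SYSTEM) -> str:
    # Custom Alpaca format; the model's reply is generated after "### Response:".
    return f"### System:\n{system}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Write a Python function that reverses a string.")
# Stop generation at the end token <|endoftext|>.
```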
__________________________________________________________________________________________________
Thank you to the community for your contributions to the Replete-AI/code_bagel_hermes-2.5 dataset. Without the participation of so many members making their datasets free and open source for anyone to use, this amazing AI model wouldn't be possible.
Extra special thanks to Teknium for the Open-Hermes-2.5 dataset and jondurbin for the bagel dataset and the naming idea for the code_bagel series of datasets. You can find both of their huggingface accounts linked below:
- https://huggingface.co/teknium
- https://huggingface.co/jondurbin
Another special thanks to unsloth for being the main method of training for Replete-Coder. Below you can find their GitHub, as well as the special Replete-AI secret sauce (Unsloth + QLoRA + GaLore) Colab notebook that was used to train this model.
- https://github.com/unslothai/unsloth
- https://colab.research.google.com/drive/1VAaxMQJN9-78WLsPU0GWg5tEkasXoTP9?usp=sharing
__________________________________________________________________________________________________
## Join the Replete-Ai discord! We are a great and Loving community!
- https://discord.gg/ZZbnsmVnjD
| null |
Non_BioNLP
|
This is the meta-llama/Meta-Llama-3-8B-Instruct model with the Replete-AI/Replete-Coder-Llama3-8B adapter applied on top of it.
This is mostly an experiment to see how the model would perform.
Links to the original model and adapter are below:
Original model:
- https://huggingface.co/Replete-AI/Replete-Coder-Llama3-8B
Adapter:
- Coming soon
_________________________________________________________________________________________________________
# Replete-Coder-llama3-8b
Finetuned by: Rombodawg
### More than just a coding model!
Although Replete-Coder has amazing coding capabilities, it's trained on a vast amount of non-coding data, fully cleaned and uncensored. Don't just use it for coding; use it for all your needs! We are truly trying to make the GPT killer!

Thank you to TensorDock for sponsoring Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b
you can check out their website for cloud compute rental below.
- https://tensordock.com
__________________________________________________________________________________________________
Replete-Coder-llama3-8b is a general purpose model that is specially trained in coding in over 100 coding languages. The data used to train the model contains 25% non-code instruction data and 75% coding instruction data totaling up to 3.9 million lines, roughly 1 billion tokens, or 7.27gb of instruct data. The data used to train this model was 100% uncensored, then fully deduplicated, before training happened.
The Replete-Coder models (including Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b) feature the following:
- Advanced coding capabilities in over 100 coding languages
- Advanced code translation (between languages)
- Security and vulnerability prevention related coding capabilities
- General purpose use
- Uncensored use
- Function calling
- Advanced math use
- Use on low end (8b) and mobile (1.5b) platforms
Notice: Replete-Coder series of models are fine-tuned on a context window of 8192 tokens. Performance past this context window is not guaranteed.

__________________________________________________________________________________________________
You can find the 25% non-coding instruction below:
- https://huggingface.co/datasets/Replete-AI/OpenHermes-2.5-Uncensored
And the 75% coding specific instruction data below:
- https://huggingface.co/datasets/Replete-AI/code_bagel
These two datasets were combined to create the final dataset for training, which is linked below:
- https://huggingface.co/datasets/Replete-AI/code_bagel_hermes-2.5
__________________________________________________________________________________________________
## Prompt Template: Custom Alpaca
```
### System:
{}
### Instruction:
{}
### Response:
{}
```
Note: The system prompt varies in training data, but the most commonly used one is:
```
Below is an instruction that describes a task, Write a response that appropriately completes the request.
```
End token:
```
<|endoftext|>
```
__________________________________________________________________________________________________
Thank you to the community for your contributions to the Replete-AI/code_bagel_hermes-2.5 dataset. Without the participation of so many members making their datasets free and open source for anyone to use, this amazing AI model wouldn't be possible.
Extra special thanks to Teknium for the Open-Hermes-2.5 dataset and jondurbin for the bagel dataset and the naming idea for the code_bagel series of datasets. You can find both of their huggingface accounts linked below:
- https://huggingface.co/teknium
- https://huggingface.co/jondurbin
Another special thanks to unsloth for being the main method of training for Replete-Coder. Below you can find their GitHub, as well as the special Replete-AI secret sauce (Unsloth + QLoRA + GaLore) Colab notebook that was used to train this model.
- https://github.com/unslothai/unsloth
- https://colab.research.google.com/drive/1VAaxMQJN9-78WLsPU0GWg5tEkasXoTP9?usp=sharing
__________________________________________________________________________________________________
## Join the Replete-Ai discord! We are a great and Loving community!
- https://discord.gg/ZZbnsmVnjD
|
{"datasets": ["Replete-AI/code_bagel_hermes-2.5", "Replete-AI/code_bagel", "Replete-AI/OpenHermes-2.5-Uncensored", "teknium/OpenHermes-2.5", "layoric/tiny-codes-alpaca", "glaiveai/glaive-code-assistant-v3", "ajibawa-2023/Code-290k-ShareGPT", "TIGER-Lab/MathInstruct", "chargoddard/commitpack-ft-instruct-rated", "iamturun/code_instructions_120k_alpaca", "ise-uiuc/Magicoder-Evol-Instruct-110K", "cognitivecomputations/dolphin-coder", "nickrosh/Evol-Instruct-Code-80k-v1", "coseal/CodeUltraFeedback_binarized", "glaiveai/glaive-function-calling-v2", "CyberNative/Code_Vulnerability_Security_DPO", "jondurbin/airoboros-2.2", "camel-ai", "lmsys/lmsys-chat-1m", "CollectiveCognition/chats-data-2023-09-22", "CoT-Alpaca-GPT4", "WizardLM/WizardLM_evol_instruct_70k", "WizardLM/WizardLM_evol_instruct_V2_196k", "teknium/GPT4-LLM-Cleaned", "GPTeacher", "OpenGPT", "meta-math/MetaMathQA", "Open-Orca/SlimOrca", "garage-bAInd/Open-Platypus", "anon8231489123/ShareGPT_Vicuna_unfiltered", "Unnatural-Instructions-GPT4"], "license": "other", "license_name": "llama-3", "license_link": "https://llama.meta.com/llama3/license/", "tags": ["text-generation-inference", "transformers", "unsloth", "llama"], "model-index": [{"name": "Replete-Coder-llama3-8b", "results": [{"task": {"type": "text-generation", "name": "HumanEval"}, "dataset": {"name": "HumanEval", "type": "openai_humaneval"}, "metrics": [{"type": "pass@1", "value": 0.6468383584267833, "name": "pass@1", "verified": true}]}, {"task": {"type": "text-generation", "name": "AI2 Reasoning Challenge"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "accuracy", "name": "normalized accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "accuracy", "name": "normalized accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "accuracy", "name": "accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "multiple_choice_accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "accuracy", "name": "accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "accuracy", "name": "accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}]}]}
|
task
|
[
"TRANSLATION"
] | 43,544 |
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-34914559
|
fine-tuned
|
feature-extraction
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-34914559",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-28T18:54:18Z |
2024-05-28T18:54:57+00:00
| 6 | 0 |
---
datasets:
- fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-34914559
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-34914559',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| null |
Non_BioNLP
|
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-34914559',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
{"datasets": ["fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-34914559", "allenai/c4"], "language": ["en", "en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,545 |
smartcat/T5-GenQ-TDE-v1
|
smartcat
|
text2text-generation
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"e-commerce",
"query-generation",
"en",
"dataset:smartcat/Amazon-2023-GenQ",
"base_model:BeIR/query-gen-msmarco-t5-base-v1",
"base_model:finetune:BeIR/query-gen-msmarco-t5-base-v1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-21T15:10:29Z |
2025-03-11T11:36:11+00:00
| 9 | 0 |
---
base_model:
- BeIR/query-gen-msmarco-t5-base-v1
datasets:
- smartcat/Amazon-2023-GenQ
language:
- en
library_name: transformers
license: mit
metrics:
- rouge
pipeline_tag: text2text-generation
tags:
- e-commerce
- query-generation
---
# Model Card for T5-GenQ-TDE-v1
🤖 ✨ 🔍 Generate precise, realistic user-focused search queries from product text 🛒 🚀 📊
### Model Description
- **Model Name:** Fine-Tuned Query-Generation Model
- **Model type:** Text-to-Text Transformer
- **Finetuned from model:** [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1)
- **Dataset**: [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ)
- **Primary Use Case**: Generating accurate and relevant search queries from item descriptions
- **Repository:** [smartcat-labs/product2query](https://github.com/smartcat-labs/product2query)
### Model variations
<table border="1" class="dataframe">
<tr style="text-align: center;">
<th>Model</th>
<th>ROUGE-1</th>
<th>ROUGE-2</th>
<th>ROUGE-L</th>
<th>ROUGE-Lsum</th>
</tr>
<tr>
<td><b><a href="https://huggingface.co/smartcat/T5-GenQ-T-v1">T5-GenQ-T-v1</a></b></td>
<td>75.2151</td>
<td>54.8735</td>
<td><b>74.5142</b></td>
<td>74.5262</td>
</tr>
<tr>
<td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TD-v1">T5-GenQ-TD-v1</a></b></td>
<td>78.2570</td>
<td>58.9586</td>
<td><b>77.5308</b></td>
<td>77.5466</td>
</tr>
<tr>
<td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TDE-v1">T5-GenQ-TDE-v1</a></b></td>
<td>76.9075</td>
<td>57.0980</td>
<td><b>76.1464</b></td>
<td>76.1502</td>
</tr>
<tr>
<td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TDC-v1">T5-GenQ-TDC-v1</a> (best)</b></td>
<td>80.0754</td>
<td>61.5974</td>
<td><b>79.3557</b></td>
<td>79.3427</td>
</tr>
</table>
### Uses
This model is designed to improve e-commerce search functionality by generating user-friendly search queries based on product descriptions. It is particularly suited for applications where product descriptions are the primary input, and the goal is to create concise, descriptive queries that align with user search intent.
### Examples of Use:
<li>Generating search queries for product indexing.</li>
<li>Enhancing product discoverability in e-commerce search engines.</li>
<li>Automating query generation for catalog management.</li>
### Comparison of ROUGE scores:
<table border="1">
<thead>
<tr>
<th>Model</th>
<th>ROUGE-1</th>
<th>ROUGE-2</th>
<th>ROUGE-L</th>
<th>ROUGE-Lsum</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5-GenQ-TDE-v1</td>
<td>74.71</td>
<td>54.31</td>
<td>74.06</td>
<td>74.06</td>
</tr>
<tr>
<td>query-gen-msmarco-t5-base-v1</td>
<td>37.63</td>
<td>17.40</td>
<td>36.69</td>
<td>36.69</td>
</tr>
</tbody>
</table>
**Note:** This evaluation is done after training, based on the test split of the [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ/viewer/default/test?views%5B%5D=test).
### Examples
<details><summary>Expand to see the table with examples</summary>
<table border="1" text-align: center>
<thead>
<tr>
<th style="width: 25%;" >Input Text</th>
<th style="width: 25%;">Target Query</th>
<th>Before Fine-tuning</th>
<th>After Fine-tuning</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>KIDSCOOL SPACE Baby Denim Overall,Hooded Little Kid Jean Jumper</strong></td>
<td>KIDSCOOL SPACE Baby Denim Overall</td>
<td>what is kidscool space denim</td>
<td>baby denim overalls</td>
</tr>
<tr>
<td><strong>NCAA Mens Long Sleeve Shirt Arm Team</strong>
Show your Mountaineers pride with this West Virginia long sleeve shirt. Its soft cotton material and unique graphics make this a great addition to any West Virginia apparel collection. Features: -100% cotton -Ribbed and double stitched collar and sleeves -Officially licensed West Virginia University long sleeve shirt</td>
<td>West Virginia long sleeve shirt</td>
<td>wvu long sleeve shirt</td>
<td>West Virginia long sleeve shirt</td>
</tr>
<tr>
<td><strong>The Body Shop Mattifying Lotion (Vegan), Tea Tree, 1.69 Fl Oz</strong>
Product Description
Made with community trade tea tree oil, The Body Shop's Tea Tree Mattifying Lotion provides lightweight hydration, helps tackles excess oil and visibly reduces the appearance of blemishes, revealing a clearer looking, mattifed finish. 100 percent vegan, suitable for blemish prone skin.
From the Manufacturer
Made with Community Trade tea tree oil, The Body Shop's Tea Tree Mattifying Lotion provides lightweight hydration, helps tackles excess oil and visibly reduces the appearance of blemishes, revealing a clearer-looking, mattifed finish. 100% vegan, suitable for blemish-prone skin. Paraben-free. Gluten-free. 100% Vegan.</td>
<td>Tea Tree Mattifying Lotion</td>
<td>what is body shop tea tree lotion</td>
<td>vegan matte lotion</td>
</tr>
</tbody>
</table>
</details>
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("smartcat/T5-GenQ-TDE-v1")
tokenizer = AutoTokenizer.from_pretrained("smartcat/T5-GenQ-TDE-v1")
description = "Silver-colored cuff with embossed braid pattern. Made of brass, flexible to fit wrist."
inputs = tokenizer(description, return_tensors="pt", padding=True, truncation=True)
generated_ids = model.generate(inputs["input_ids"], max_length=30, num_beams=4, early_stopping=True)
generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
```
## Training Details
### Training Data
The model was trained on the [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ) dataset, which consists of user-like
queries generated from product descriptions. The dataset was created using Claude Haiku 3,
incorporating key product attributes such as the title, description, and images to ensure relevant and realistic queries. For more information, read the Dataset Card. 😊
### Preprocessing
- Trained on titles + descriptions of the products and a duplicate set of products with titles only
- Tokenized using T5’s default tokenizer with truncation to handle long text.
### Training Hyperparameters
<ul>
<li><strong>max_input_length:</strong> 512</li>
<li><strong>max_target_length:</strong> 30</li>
<li><strong>batch_size:</strong> 48</li>
<li><strong>num_train_epochs:</strong> 8</li>
<li><strong>evaluation_strategy:</strong> epoch</li>
<li><strong>save_strategy:</strong> epoch</li>
<li><strong>learning_rate:</strong> 5.6e-05</li>
<li><strong>weight_decay:</strong> 0.01 </li>
<li><strong>predict_with_generate:</strong> true</li>
<li><strong>load_best_model_at_end:</strong> true</li>
<li><strong>metric_for_best_model:</strong> eval_rougeL</li>
<li><strong>greater_is_better:</strong> true</li>
<li><strong>logging_strategy:</strong> epoch</li>
</ul>
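A minimal sketch of how these settings might map onto Hugging Face `Seq2SeqTrainingArguments` (the original training script is not reproduced in this card, so treat names such as the output directory as assumptions):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-genq-tde-v1",          # assumed
    per_device_train_batch_size=48,
    num_train_epochs=8,
    learning_rate=5.6e-5,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    logging_strategy="epoch",
    predict_with_generate=True,
    load_best_model_at_end=True,
    metric_for_best_model="eval_rougeL",
    greater_is_better=True,
)
# max_input_length (512) and max_target_length (30) are applied at tokenization /
# generation time rather than through the training arguments.
```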
### Train time: 25.62 hrs
### Hardware
A6000 GPU:
- Memory Size: 48 GB
- Memory Type: GDDR6
- CUDA: 8.6
### Metrics
**[ROUGE](https://en.wikipedia.org/wiki/ROUGE_(metric))**, or **R**ecall-**O**riented **U**nderstudy for **G**isting **E**valuation, is a set of metrics used for evaluating automatic summarization and machine translation in NLP. The metrics compare an automatically produced summary or translation against a reference or a set of references (human-produced) summary or translation. ROUGE metrics range between 0 and 1, with higher scores indicating higher similarity between the automatically produced summary and the reference.
In our evaluation, ROUGE scores are scaled to resemble percentages for better interpretability. The metric used in the training was ROUGE-L.
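As a generic illustration (not the exact evaluation script), such scores can be computed with the `evaluate` library and scaled to percentages:
```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["baby denim overalls"],
    references=["KIDSCOOL SPACE Baby Denim Overall"],
)
# evaluate returns values in [0, 1]; multiply by 100 to match the tables below.
print({k: round(v * 100, 4) for k, v in scores.items()})
```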
<table border="1">
<thead>
<tr>
<th>Epoch</th>
<th>Step</th>
<th>Loss</th>
<th>Grad Norm</th>
<th>Learning Rate</th>
<th>Eval Loss</th>
<th>ROUGE-1</th>
<th>ROUGE-2</th>
<th>ROUGE-L</th>
<th>ROUGE-Lsum</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.0</td>
<td>8569</td>
<td>0.7955</td>
<td>2.9784</td>
<td>4.9e-05</td>
<td>0.6501</td>
<td>75.3001</td>
<td>55.0195</td>
<td>74.6632</td>
<td>74.6678</td>
</tr>
<tr>
<td>2.0</td>
<td>17138</td>
<td>0.6595</td>
<td>3.2943</td>
<td>4.2e-05</td>
<td>0.6293</td>
<td>76.2210</td>
<td>56.2050</td>
<td>75.5728</td>
<td>75.5670</td>
</tr>
<tr>
<td>3.0</td>
<td>25707</td>
<td>0.5982</td>
<td>4.0392</td>
<td>3.5e-05</td>
<td>0.6207</td>
<td>76.5493</td>
<td>56.7006</td>
<td>75.8775</td>
<td>75.8796</td>
</tr>
<tr>
<td>4.0</td>
<td>34276</td>
<td>0.5552</td>
<td>2.8237</td>
<td>2.8e-05</td>
<td>0.6267</td>
<td>76.5433</td>
<td>56.7025</td>
<td>75.8319</td>
<td>75.8343</td>
</tr>
<tr>
<td>5.0</td>
<td>42845</td>
<td>0.5225</td>
<td>2.7701</td>
<td>2.1e-05</td>
<td>0.6303</td>
<td>76.7192</td>
<td>56.9090</td>
<td>75.9884</td>
<td>75.9972</td>
</tr>
<tr>
<td>6.0</td>
<td>51414</td>
<td>0.4974</td>
<td>3.1344</td>
<td>1.4e-05</td>
<td>0.6316</td>
<td>76.8851</td>
<td>57.1349</td>
<td>76.1420</td>
<td>76.1484</td>
</tr>
<tr>
<td>7.0</td>
<td>59983</td>
<td>0.4798</td>
<td>3.5027</td>
<td>7e-06</td>
<td>0.6355</td>
<td>76.8884</td>
<td>57.1055</td>
<td>76.1433</td>
<td>76.1501</td>
</tr>
<tr>
<td>8.0</td>
<td>68552</td>
<td>0.4674</td>
<td>4.5172</td>
<td>0.0</td>
<td>0.6408</td>
<td>76.9075</td>
<td>57.0980</td>
<td>76.1464</td>
<td>76.1502</td>
</tr>
</tbody>
</table>
<style>
.model-analysis table {
width: 100%;
border-collapse: collapse;
}
.model-analysis td {
padding: 10px;
vertical-align: middle;
}
.model-analysis img {
width: auto; /* Maintain aspect ratio */
display: block;
margin: 0 auto;
max-height: 750px; /* Default height for most images */
}
</style>
<div class="model-analysis">
### Model Analysis
<details><summary>Average scores by model </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="average_scores_by_model.png" alt="image"></td>
<td>
```checkpoint-68552``` (T5-GenQ-TDE-v1) outperforms ```query-gen-msmarco-t5-base-v1``` across all ROUGE metrics.
The most significant difference is in ROUGE-2, where ```checkpoint-68552``` scores 54.32% vs. 17.40% for the baseline model.</td></tr>
</table>
</details>
<details><summary>Density comparison </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="density_comparison.png" alt="image"></td>
<td>
```checkpoint-68552``` (T5-GenQ-TDE-v1) peaks near 100%, showing strong text overlap.
```query-gen-msmarco-t5-base-v1``` has a wider distribution, with peaks in the low to mid-score range (10-40%), indicating greater variability but lower precision.
ROUGE-2 has a high density at 0% for the baseline model, meaning many outputs lack bigram overlap.</td></tr>
</table>
</details>
<details><summary>Histogram comparison </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="histogram_comparison.png" alt="image"></td>
<td>
```checkpoint-68552``` (T5-GenQ-TDE-v1, blue) trends toward higher ROUGE scores, with a peak at 100%.
```query-gen-msmarco-t5-base-v1``` (orange) has more low-score peaks, especially in ROUGE-2, reinforcing its lower precision.
These histograms confirm ```checkpoint-68552``` consistently generates more accurate text.</td></tr>
</table>
</details>
<details><summary>Scores by generated query length </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="group_sizes.png" alt="image"></td>
<td>
Stable ROUGE scores (Sizes 3-9): All metrics remain consistently high.
Score spike at 2 words: Indicates better alignment for short phrases, followed by stability.
Score differences remain near zero for most sizes, meaning consistent model performance across phrase lengths.</td></tr>
</table>
</details>
<details><summary>Semantic similarity distribution </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="semantic_similarity_distribution.png" alt="image"></td>
<td>
This histogram visualizes the distribution of cosine similarity scores, which measure the semantic similarity between paired texts (generated query and target query).
A strong peak near 1.0 suggests most pairs are highly semantically similar.
Low similarity scores (0.0–0.4) are rare, meaning the dataset contains mostly closely related text pairs.</td></tr>
</table>
</details>
<details><summary>Semantic similarity score against ROUGE scores </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="similarity_vs_rouge.png" alt="image"></td>
<td>Higher similarity → Higher ROUGE scores, indicating strong correlation.
ROUGE-1 & ROUGE-L show the strongest alignment, while ROUGE-2 has more variation.
Some low-similarity outliers still achieve moderate ROUGE scores, suggesting surface-level overlap without deep semantic alignment.
</td></tr>
</table>
</details>
</div>
## More Information
- Please visit the [GitHub Repository](https://github.com/smartcat-labs/product2query)
## Authors
- Mentor: [Milutin Studen](https://www.linkedin.com/in/milutin-studen/)
- Engineers: [Petar Surla](https://www.linkedin.com/in/petar-surla-6448b6269/), [Andjela Radojevic](https://www.linkedin.com/in/an%C4%91ela-radojevi%C4%87-936197196/)
## Model Card Contact
For questions, please open an issue on the [GitHub Repository](https://github.com/smartcat-labs/product2query)
| null |
Non_BioNLP
|
# Model Card for T5-GenQ-TDE-v1
🤖 ✨ 🔍 Generate precise, realistic user-focused search queries from product text 🛒 🚀 📊
### Model Description
- **Model Name:** Fine-Tuned Query-Generation Model
- **Model type:** Text-to-Text Transformer
- **Finetuned from model:** [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1)
- **Dataset**: [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ)
- **Primary Use Case**: Generating accurate and relevant search queries from item descriptions
- **Repository:** [smartcat-labs/product2query](https://github.com/smartcat-labs/product2query)
### Model variations
<table border="1" class="dataframe">
<tr style="text-align: center;">
<th>Model</th>
<th>ROUGE-1</th>
<th>ROUGE-2</th>
<th>ROUGE-L</th>
<th>ROUGE-Lsum</th>
</tr>
<tr>
<td><b><a href="https://huggingface.co/smartcat/T5-GenQ-T-v1">T5-GenQ-T-v1</a></b></td>
<td>75.2151</td>
<td>54.8735</td>
<td><b>74.5142</b></td>
<td>74.5262</td>
</tr>
<tr>
<td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TD-v1">T5-GenQ-TD-v1</a></b></td>
<td>78.2570</td>
<td>58.9586</td>
<td><b>77.5308</b></td>
<td>77.5466</td>
</tr>
<tr>
<td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TDE-v1">T5-GenQ-TDE-v1</a></b></td>
<td>76.9075</td>
<td>57.0980</td>
<td><b>76.1464</b></td>
<td>76.1502</td>
</tr>
<tr>
<td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TDC-v1">T5-GenQ-TDC-v1</a> (best)</b></td>
<td>80.0754</td>
<td>61.5974</td>
<td><b>79.3557</b></td>
<td>79.3427</td>
</tr>
</table>
### Uses
This model is designed to improve e-commerce search functionality by generating user-friendly search queries based on product descriptions. It is particularly suited for applications where product descriptions are the primary input, and the goal is to create concise, descriptive queries that align with user search intent.
### Examples of Use:
<li>Generating search queries for product indexing.</li>
<li>Enhancing product discoverability in e-commerce search engines.</li>
<li>Automating query generation for catalog management.</li>
### Comparison of ROUGE scores:
<table border="1">
<thead>
<tr>
<th>Model</th>
<th>ROUGE-1</th>
<th>ROUGE-2</th>
<th>ROUGE-L</th>
<th>ROUGE-Lsum</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5-GenQ-TDE-v1</td>
<td>74.71</td>
<td>54.31</td>
<td>74.06</td>
<td>74.06</td>
</tr>
<tr>
<td>query-gen-msmarco-t5-base-v1</td>
<td>37.63</td>
<td>17.40</td>
<td>36.69</td>
<td>36.69</td>
</tr>
</tbody>
</table>
**Note:** This evaluation is done after training, based on the test split of the [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ/viewer/default/test?views%5B%5D=test).
### Examples
<details><summary>Expand to see the table with examples</summary>
<table border="1" text-align: center>
<thead>
<tr>
<th style="width: 25%;" >Input Text</th>
<th style="width: 25%;">Target Query</th>
<th>Before Fine-tuning</th>
<th>After Fine-tuning</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>KIDSCOOL SPACE Baby Denim Overall,Hooded Little Kid Jean Jumper</strong></td>
<td>KIDSCOOL SPACE Baby Denim Overall</td>
<td>what is kidscool space denim</td>
<td>baby denim overalls</td>
</tr>
<tr>
<td><strong>NCAA Mens Long Sleeve Shirt Arm Team</strong>
Show your Mountaineers pride with this West Virginia long sleeve shirt. Its soft cotton material and unique graphics make this a great addition to any West Virginia apparel collection. Features: -100% cotton -Ribbed and double stitched collar and sleeves -Officially licensed West Virginia University long sleeve shirt</td>
<td>West Virginia long sleeve shirt</td>
<td>wvu long sleeve shirt</td>
<td>West Virginia long sleeve shirt</td>
</tr>
<tr>
<td><strong>The Body Shop Mattifying Lotion (Vegan), Tea Tree, 1.69 Fl Oz</strong>
Product Description
Made with community trade tea tree oil, The Body Shop's Tea Tree Mattifying Lotion provides lightweight hydration, helps tackles excess oil and visibly reduces the appearance of blemishes, revealing a clearer looking, mattifed finish. 100 percent vegan, suitable for blemish prone skin.
From the Manufacturer
Made with Community Trade tea tree oil, The Body Shop's Tea Tree Mattifying Lotion provides lightweight hydration, helps tackles excess oil and visibly reduces the appearance of blemishes, revealing a clearer-looking, mattifed finish. 100% vegan, suitable for blemish-prone skin. Paraben-free. Gluten-free. 100% Vegan.</td>
<td>Tea Tree Mattifying Lotion</td>
<td>what is body shop tea tree lotion</td>
<td>vegan matte lotion</td>
</tr>
</tbody>
</table>
</details>
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("smartcat/T5-GenQ-TDE-v1")
tokenizer = AutoTokenizer.from_pretrained("smartcat/T5-GenQ-TDE-v1")
description = "Silver-colored cuff with embossed braid pattern. Made of brass, flexible to fit wrist."
inputs = tokenizer(description, return_tensors="pt", padding=True, truncation=True)
generated_ids = model.generate(inputs["input_ids"], max_length=30, num_beams=4, early_stopping=True)
generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
```
## Training Details
### Training Data
The model was trained on the [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ) dataset, which consists of user-like
queries generated from product descriptions. The dataset was created using Claude Haiku 3,
incorporating key product attributes such as the title, description, and images to ensure relevant and realistic queries. For more information, read the Dataset Card. 😊
### Preprocessing
- Trained on titles + descriptions of the products and a duplicate set of products with titles only
- Tokenized using T5’s default tokenizer with truncation to handle long text.
### Training Hyperparameters
<ul>
<li><strong>max_input_length:</strong> 512</li>
<li><strong>max_target_length:</strong> 30</li>
<li><strong>batch_size:</strong> 48</li>
<li><strong>num_train_epochs:</strong> 8</li>
<li><strong>evaluation_strategy:</strong> epoch</li>
<li><strong>save_strategy:</strong> epoch</li>
<li><strong>learning_rate:</strong> 5.6e-05</li>
<li><strong>weight_decay:</strong> 0.01 </li>
<li><strong>predict_with_generate:</strong> true</li>
<li><strong>load_best_model_at_end:</strong> true</li>
<li><strong>metric_for_best_model:</strong> eval_rougeL</li>
<li><strong>greater_is_better:</strong> true</li>
<li><strong>logging_strategy:</strong> epoch</li>
</ul>
### Train time: 25.62 hrs
### Hardware
A6000 GPU:
- Memory Size: 48 GB
- Memory Type: GDDR6
- CUDA: 8.6
### Metrics
**[ROUGE](https://en.wikipedia.org/wiki/ROUGE_(metric))**, or **R**ecall-**O**riented **U**nderstudy for **G**isting **E**valuation, is a set of metrics used for evaluating automatic summarization and machine translation in NLP. The metrics compare an automatically produced summary or translation against a reference or a set of references (human-produced) summary or translation. ROUGE metrics range between 0 and 1, with higher scores indicating higher similarity between the automatically produced summary and the reference.
In our evaluation, ROUGE scores are scaled to resemble percentages for better interpretability. The metric used in the training was ROUGE-L.
<table border="1">
<thead>
<tr>
<th>Epoch</th>
<th>Step</th>
<th>Loss</th>
<th>Grad Norm</th>
<th>Learning Rate</th>
<th>Eval Loss</th>
<th>ROUGE-1</th>
<th>ROUGE-2</th>
<th>ROUGE-L</th>
<th>ROUGE-Lsum</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.0</td>
<td>8569</td>
<td>0.7955</td>
<td>2.9784</td>
<td>4.9e-05</td>
<td>0.6501</td>
<td>75.3001</td>
<td>55.0195</td>
<td>74.6632</td>
<td>74.6678</td>
</tr>
<tr>
<td>2.0</td>
<td>17138</td>
<td>0.6595</td>
<td>3.2943</td>
<td>4.2e-05</td>
<td>0.6293</td>
<td>76.2210</td>
<td>56.2050</td>
<td>75.5728</td>
<td>75.5670</td>
</tr>
<tr>
<td>3.0</td>
<td>25707</td>
<td>0.5982</td>
<td>4.0392</td>
<td>3.5e-05</td>
<td>0.6207</td>
<td>76.5493</td>
<td>56.7006</td>
<td>75.8775</td>
<td>75.8796</td>
</tr>
<tr>
<td>4.0</td>
<td>34276</td>
<td>0.5552</td>
<td>2.8237</td>
<td>2.8e-05</td>
<td>0.6267</td>
<td>76.5433</td>
<td>56.7025</td>
<td>75.8319</td>
<td>75.8343</td>
</tr>
<tr>
<td>5.0</td>
<td>42845</td>
<td>0.5225</td>
<td>2.7701</td>
<td>2.1e-05</td>
<td>0.6303</td>
<td>76.7192</td>
<td>56.9090</td>
<td>75.9884</td>
<td>75.9972</td>
</tr>
<tr>
<td>6.0</td>
<td>51414</td>
<td>0.4974</td>
<td>3.1344</td>
<td>1.4e-05</td>
<td>0.6316</td>
<td>76.8851</td>
<td>57.1349</td>
<td>76.1420</td>
<td>76.1484</td>
</tr>
<tr>
<td>7.0</td>
<td>59983</td>
<td>0.4798</td>
<td>3.5027</td>
<td>7e-06</td>
<td>0.6355</td>
<td>76.8884</td>
<td>57.1055</td>
<td>76.1433</td>
<td>76.1501</td>
</tr>
<tr>
<td>8.0</td>
<td>68552</td>
<td>0.4674</td>
<td>4.5172</td>
<td>0.0</td>
<td>0.6408</td>
<td>76.9075</td>
<td>57.0980</td>
<td>76.1464</td>
<td>76.1502</td>
</tr>
</tbody>
</table>
<style>
.model-analysis table {
width: 100%;
border-collapse: collapse;
}
.model-analysis td {
padding: 10px;
vertical-align: middle;
}
.model-analysis img {
width: auto; /* Maintain aspect ratio */
display: block;
margin: 0 auto;
max-height: 750px; /* Default height for most images */
}
</style>
<div class="model-analysis">
### Model Analysis
<details><summary>Average scores by model </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="average_scores_by_model.png" alt="image"></td>
<td>
```checkpoint-68552``` (T5-GenQ-TDE-v1) outperforms ```query-gen-msmarco-t5-base-v1``` across all ROUGE metrics.
The most significant difference is in ROUGE-2, where ```checkpoint-68552``` scores 54.32% vs. 17.40% for the baseline model.</td></tr>
</table>
</details>
<details><summary>Density comparison </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="density_comparison.png" alt="image"></td>
<td>
```checkpoint-68552``` (T5-GenQ-TDE-v1) peaks near 100%, showing strong text overlap.
```query-gen-msmarco-t5-base-v1``` has a wider distribution, with peaks in the low to mid-score range (10-40%), indicating greater variability but lower precision.
ROUGE-2 has a high density at 0% for the baseline model, meaning many outputs lack bigram overlap.</td></tr>
</table>
</details>
<details><summary>Histogram comparison </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="histogram_comparison.png" alt="image"></td>
<td>
```checkpoint-68552``` (T5-GenQ-TDE-v1, blue) trends toward higher ROUGE scores, with a peak at 100%.
```query-gen-msmarco-t5-base-v1``` (orange) has more low-score peaks, especially in ROUGE-2, reinforcing its lower precision.
These histograms confirm ```checkpoint-68552``` consistently generates more accurate text.</td></tr>
</table>
</details>
<details><summary>Scores by generated query length </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="group_sizes.png" alt="image"></td>
<td>
Stable ROUGE scores (Sizes 3-9): All metrics remain consistently high.
Score spike at 2 words: Indicates better alignment for short phrases, followed by stability.
Score differences remain near zero for most sizes, meaning consistent model performance across phrase lengths.</td></tr>
</table>
</details>
<details><summary>Semantic similarity distribution </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="semantic_similarity_distribution.png" alt="image"></td>
<td>
This histogram visualizes the distribution of cosine similarity scores, which measure the semantic similarity between paired texts (generated query and target query).
A strong peak near 1.0 suggests most pairs are highly semantically similar.
Low similarity scores (0.0–0.4) are rare, meaning the dataset contains mostly closely related text pairs.</td></tr>
</table>
</details>
<details><summary>Semantic similarity score against ROUGE scores </summary>
<table style="width:100%"><tr>
<td style="width:65%"><img src="similarity_vs_rouge.png" alt="image"></td>
<td>Higher similarity → Higher ROUGE scores, indicating strong correlation.
ROUGE-1 & ROUGE-L show the strongest alignment, while ROUGE-2 has more variation.
Some low-similarity outliers still achieve moderate ROUGE scores, suggesting surface-level overlap without deep semantic alignment.
</td></tr>
</table>
</details>
</div>
## More Information
- Please visit the [GitHub Repository](https://github.com/smartcat-labs/product2query)
## Authors
- Mentor: [Milutin Studen](https://www.linkedin.com/in/milutin-studen/)
- Engineers: [Petar Surla](https://www.linkedin.com/in/petar-surla-6448b6269/), [Andjela Radojevic](https://www.linkedin.com/in/an%C4%91ela-radojevi%C4%87-936197196/)
## Model Card Contact
For questions, please open an issue on the [GitHub Repository](https://github.com/smartcat-labs/product2query)
|
{"base_model": ["BeIR/query-gen-msmarco-t5-base-v1"], "datasets": ["smartcat/Amazon-2023-GenQ"], "language": ["en"], "library_name": "transformers", "license": "mit", "metrics": ["rouge"], "pipeline_tag": "text2text-generation", "tags": ["e-commerce", "query-generation"]}
|
task
|
[
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION"
] | 43,546 |
SoMiyagawa/AinuTrans-2.0
|
SoMiyagawa
|
translation
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"ja",
"ain",
"dataset:SoMiyagawa/autotrain-data-ainutrans2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-25T09:41:58Z |
2024-02-21T05:05:49+00:00
| 0 | 7 |
---
datasets:
- SoMiyagawa/autotrain-data-ainutrans2
language:
- ja
- ain
tags:
- translation
co2_eq_emissions:
emissions: 1086.6023091382667
---
# シサム語による説明
アイヌ語と日本語の双方向機械翻訳モデルです。
民話や叙事詩のコーパスばかり用いたので、それらに出てきそうな単語ばかり使っているなら、翻訳できます。
何かあれば、so-miyagawa at ninjal.ac.jpまでご連絡ください。
まあまあ良いSacreBLEUスコアです。chrFスコアなども計ってみたいです。詳しい人教えてください。
論文はこちらです。
So Miyagawa. 2023. Machine Translation for Highly Low-Resource Language: A Case Study of Ainu, a Critically Endangered Indigenous Language in Northern Japan. In Proceedings of the Joint 3rd International Conference on Natural Language Processing for Digital Humanities and 8th International Workshop on Computational Linguistics for Uralic Languages, pages 120–124, Tokyo, Japan. Association for Computational Linguistics.
https://aclanthology.org/2023.nlp4dh-1.14/
右のInference APIで日本語やアイヌ語を入力して試してみてください。もちろん、完璧ではないので、初・中級者の方は結果はそのまま使わず、必ず専門家やアイヌ語上級者に見てもらってください。
# 英語による説明
This is a two-way machine translation model between Ainu and Japanese.
We used only a corpus of folk tales and epic poems, so the model translates reliably when the input uses mostly words likely to appear in such material.
Please contact me at so-miyagawa at ninjal.ac.jp if you have any questions.
I have a so-so SacreBLEU score and would like to measure chrF score etc. Please let me know if you know more.
Here is the paper.
Miyagawa, So. 2023. Machine Translation for Highly Low-Resource Language: A Case Study of Ainu, a Critically Endangered Indigenous Language in Northern Japan. In Proceedings of the Joint 3rd International Conference on Natural Language Processing for Digital Humanities and 8th International Workshop on Computational Linguistics for Uralic Languages, pages 120–124, Tokyo, Japan. Association for Computational Linguistics.
https://aclanthology.org/2023.nlp4dh-1.14/
You can try using the Inference API on the right by entering Japanese or Ainu. Of course, it is not perfect, so if you are a beginner or intermediate user, please do not use the results as they are, and be sure to ask an expert or advanced Ainu speaker to review them.
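If you prefer to run the model locally instead of through the Inference API, a minimal sketch with `transformers` could look like this (whether a language prefix or tag is expected is not documented here, so adjust as needed):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "SoMiyagawa/AinuTrans-2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("イランカラプテ", return_tensors="pt")  # example Ainu greeting
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```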
## Validation Metrics
- Loss: 1.216
- SacreBLEU: 29.910
- Gen len: 10.022
| null |
Non_BioNLP
|
# シサム語による説明
アイヌ語と日本語の双方向機械翻訳モデルです。
民話や叙事詩のコーパスばかり用いたので、それらに出てきそうな単語ばかり使っているなら、翻訳できます。
何かあれば、so-miyagawa at ninjal.ac.jpまでご連絡ください。
まあまあ良いSacreBLEUスコアです。chrFスコアなども計ってみたいです。詳しい人教えてください。
論文はこちらです。
So Miyagawa. 2023. Machine Translation for Highly Low-Resource Language: A Case Study of Ainu, a Critically Endangered Indigenous Language in Northern Japan. In Proceedings of the Joint 3rd International Conference on Natural Language Processing for Digital Humanities and 8th International Workshop on Computational Linguistics for Uralic Languages, pages 120–124, Tokyo, Japan. Association for Computational Linguistics.
https://aclanthology.org/2023.nlp4dh-1.14/
右のInference APIで日本語やアイヌ語を入力して試してみてください。もちろん、完璧ではないので、初・中級者の方は結果はそのまま使わず、必ず専門家やアイヌ語上級者に見てもらってください。
# 英語による説明
This is a two-way machine translation model between Ainu and Japanese.
We used only a corpus of folk tales and epic poems, so the model translates reliably when the input uses mostly words likely to appear in such material.
Please contact me at so-miyagawa at ninjal.ac.jp if you have any questions.
I have a so-so SacreBLEU score and would like to measure chrF score etc. Please let me know if you know more.
Here is the paper.
Miyagawa, So. 2023. Machine Translation for Highly Low-Resource Language: A Case Study of Ainu, a Critically Endangered Indigenous Language in Northern Japan. In Proceedings of the Joint 3rd International Conference on Natural Language Processing for Digital Humanities and 8th International Workshop on Computational Linguistics for Uralic Languages, pages 120–124, Tokyo, Japan. Association for Computational Linguistics.
https://aclanthology.org/2023.nlp4dh-1.14/
You can try using the Inference API on the right by entering Japanese or Ainu. Of course, it is not perfect, so if you are a beginner or intermediate user, please do not use the results as they are, and be sure to ask an expert or advanced Ainu speaker to review them.
## Validation Metrics
- Loss: 1.216
- SacreBLEU: 29.910
- Gen len: 10.022
|
{"datasets": ["SoMiyagawa/autotrain-data-ainutrans2"], "language": ["ja", "ain"], "tags": ["translation"], "co2_eq_emissions": {"emissions": 1086.6023091382667}}
|
task
|
[
"TRANSLATION"
] | 43,547 |
ganeshkamath89/bert-fine-tuned-cola
|
ganeshkamath89
| null |
[
"generated_from_trainer",
"dataset:nyu-mll/glue",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-04-10T12:16:20Z |
2024-04-10T12:42:16+00:00
| 0 | 0 |
---
datasets:
- nyu-mll/glue
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: bert-fine-tuned-cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.5959186748524787
name: Matthews Correlation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8068
- Matthews Correlation: 0.5959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
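As an illustrative sketch only (this card is auto-generated and the original script is not included), the settings above correspond roughly to:
```python
import evaluate
import numpy as np
from transformers import TrainingArguments

matthews = evaluate.load("matthews_correlation")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return matthews.compute(predictions=predictions, references=labels)

training_args = TrainingArguments(
    output_dir="bert-fine-tuned-cola",   # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```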
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4838 | 1.0 | 1069 | 0.5996 | 0.4637 |
| 0.3543 | 2.0 | 2138 | 0.6670 | 0.5778 |
| 0.1948 | 3.0 | 3207 | 0.8068 | 0.5959 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.13.4.dev0
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8068
- Matthews Correlation: 0.5959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4838 | 1.0 | 1069 | 0.5996 | 0.4637 |
| 0.3543 | 2.0 | 2138 | 0.6670 | 0.5778 |
| 0.1948 | 3.0 | 3207 | 0.8068 | 0.5959 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.13.4.dev0
- Tokenizers 0.10.3
|
{"datasets": ["nyu-mll/glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-fine-tuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5959186748524787, "name": "Matthews Correlation"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,548 |
LoneStriker/opus-v1.2-7b-3.0bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"unsloth",
"axolotl",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-02-22T18:25:20Z |
2024-02-22T18:26:42+00:00
| 10 | 0 |
---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
---
# DreamGen Opus V1
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1.2-7b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Prompting
[Read the full Opus V1 prompting guide](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can readily copy.
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but don't have to) have `names`; names are used to indicate the "active" character during role-play.
- There can be multiple subsequent messages with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`, beware of the spaces after `names=` and after the `;`. This spacing leads to most natural tokenization for the names.
</details>
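As a concrete illustration, a minimal sketch of assembling such a prompt by hand is shown below; the story description, instruction and character name are invented placeholders, and the chat template shipped with the model (see "Running the model" below) remains the recommended path.
```python
# Sketch: hand-building an Opus V1 extended-ChatML prompt (placeholder content).
# Note the single space after "names=" as described above.
def turn(role: str, content: str) -> str:
    return f"<|im_start|>{role}\n{content}<|im_end|>\n"

prompt = (
    turn("system", "A short adventure story starring Alice and Bob.")  # story description
    + turn("user", "Alice greets Bob at the tavern.")                  # instruction
    + "<|im_start|>text names= Alice\n"  # open a text turn for Alice; the model continues from here
)
print(prompt)
```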
While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing-related tasks as well as general assistance.
<img src="/dreamgen/opus-v1.2-7b/resolve/main/images/story_writing.webp" alt="story writing" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
Here's how you can prompt the model for the following tasks
- Steerable [Story-writing](https://dreamgen.com/docs/models/opus/v1#task-story-writing) and [Role-playing](https://dreamgen.com/docs/models/opus/v1#task-role-playing):
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
- [Story plot summarization](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
- [Story character description](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, set of characters.
- Output: A description of the characters.
- [Story style description](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
  - Output: A description of the style of the story.
- [Story description to chapters](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
- And more...
### Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
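As a rough sketch of the recommended starting point with the Hugging Face Transformers `generate` API (assumptions: a Transformers version recent enough to accept `min_p`, the unquantized base repository id, and a `prompt` string built as in the example above):
```python
# Sketch: min_p sampling at the suggested starting values (min_p=0.1, temperature=0.8).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dreamgen/opus-v1.2-7b"  # assumption: unquantized base of this EXL2 quant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # `prompt` as built above
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    min_p=0.1,                # requires a recent Transformers release
    repetition_penalty=1.05,  # optional, per the note above
    max_new_tokens=300,
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```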
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model should be compatible with any software that supports the base model, but beware of the prompting (see above).
### Running Locally
- [Chat template from model config](tokenizer_config.json#L51)
- This uses "text" role instead of the typical "assistant" role, and it does not (can’t?) support names
- [LM Studio config](configs/lmstudio.json)
- This uses "text" role role as well
### Running on DreamGen.com (free)
You can try the model for free on [dreamgen.com](https://dreamgen.com) — note that an account is required.
## Community
Join the DreamGen community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
## License
- This model is intended for personal use only, other use is not permitted.
| null |
Non_BioNLP
|
# DreamGen Opus V1
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1.2-7b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Prompting
[Read the full Opus V1 prompting guide](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can readily copy.
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but don't have to) have `names`; names are used to indicate the "active" character during role-play.
- There can be multiple subsequent messages with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`, beware of the spaces after `names=` and after the `;`. This spacing leads to most natural tokenization for the names.
</details>
While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing-related tasks as well as general assistance.
<img src="/dreamgen/opus-v1.2-7b/resolve/main/images/story_writing.webp" alt="story writing" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
Here's how you can prompt the model for the following tasks
- Steerable [Story-writing](https://dreamgen.com/docs/models/opus/v1#task-story-writing) and [Role-playing](https://dreamgen.com/docs/models/opus/v1#task-role-playing):
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
- [Story plot summarization](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
- [Story character description](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, set of characters.
- Output: A description of the characters.
- [Story style description](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
  - Output: A description of the style of the story.
- [Story description to chapters](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
- And more...
### Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model should be compatible with any software that supports the base model, but beware of the prompting (see above).
### Running Locally
- [Chat template from model config](tokenizer_config.json#L51)
- This uses "text" role instead of the typical "assistant" role, and it does not (can’t?) support names
- [LM Studio config](configs/lmstudio.json)
- This uses "text" role role as well
### Running on DreamGen.com (free)
You can try the model for free on [dreamgen.com](https://dreamgen.com) — note that an account is required.
## Community
Join the DreamGen community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
## License
- This model is intended for personal use only, other use is not permitted.
|
{"language": ["en"], "pipeline_tag": "text-generation", "tags": ["unsloth", "axolotl"]}
|
task
|
[
"SUMMARIZATION"
] | 43,549 |
hoang1123/llama3.1-8b-instruct-sum-trans
|
hoang1123
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"vi",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-30T10:51:07Z |
2024-07-30T16:08:15+00:00
| 4 | 0 |
---
language:
- en
- vi
license: apache-2.0
---
---
### Llama3.1-8b-Instruct-Finetuned
#### Datasets
- Focus on two tasks (summarization & translation)
- Summarization: [test dataset](https://huggingface.co/datasets/hoang1123/cnndm)
- Translation (vi-en): [test dataset](https://huggingface.co/datasets/hoang1123/mt-vi-en)
#### Training
- Unsloth
- Qlora
#### Inference
##### By Unsloth
```python
model_id = "hoang1123/llama3.1-8b-instruct-sum-trans"
from unsloth import FastLanguageModel
import torch
max_seq_length = 2048
dtype = torch.float16
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = model_id,
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
system_prompt = "You are a helpful AI assistant. Please answer the following question as best you can. If you don't know the answer, just say that you don't know, don't try to make up an answer."
dialogue = """Donald John Trump (born June 14, 1946) is an American politician, media personality, and businessman who served as the 45th president of the United States from 2017 to 2021.
Trump received a Bachelor of Science in economics from the University of Pennsylvania in 1968. His father named him president of his real estate business in 1971. Trump renamed it the Trump Organization and reoriented the company toward building and renovating skyscrapers, hotels, casinos, and golf courses. After a series of business failures in the late 1990s, he launched successful side ventures, mostly licensing the Trump name. From 2004 to 2015, he co-produced and hosted the reality television series The Apprentice. He and his businesses have been plaintiffs or defendants in more than 4,000 legal actions, including six business bankruptcies.
Trump won the 2016 presidential election as the Republican Party nominee against Democratic Party nominee Hillary Clinton while losing the popular vote.[a] A special counsel investigation established that Russia had interfered in the election to favor Trump. During the campaign, his political positions were described as populist, protectionist, isolationist, and nationalist. His election and policies sparked numerous protests. He was the only U.S. president without prior military or government experience. Trump promoted conspiracy theories and made many false and misleading statements during his campaigns and presidency, to a degree unprecedented in American politics. Many of his comments and actions have been characterized as racially charged, racist, and misogynistic.
As president, Trump ordered a travel ban on citizens from several Muslim-majority countries, diverted military funding toward building a wall on the U.S.–Mexico border, and implemented a family separation policy. He rolled back more than 100 environmental policies and regulations. He signed the Tax Cuts and Jobs Act of 2017, which cut taxes and eliminated the individual health insurance mandate penalty of the Affordable Care Act. He appointed Neil Gorsuch, Brett Kavanaugh, and Amy Coney Barrett to the U.S. Supreme Court. He reacted slowly to the COVID-19 pandemic, ignored or contradicted many recommendations from health officials, used political pressure to interfere with testing efforts, and spread misinformation about unproven treatments. Trump initiated a trade war with China and withdrew the U.S. from the proposed Trans-Pacific Partnership trade agreement, the Paris Agreement on climate change, and the Iran nuclear deal. He met with North Korean leader Kim Jong Un three times but made no progress on denuclearization.
Trump is the only U.S. president to have been impeached twice, in 2019 for abuse of power and obstruction of Congress after he pressured Ukraine to investigate Joe Biden, and in 2021 for incitement of insurrection. The Senate acquitted him in both cases. Trump refused to concede after he lost the 2020 presidential election to Biden, falsely claiming widespread electoral fraud, and attempted to overturn the results. On January 6, 2021, he urged his supporters to march to the U.S. Capitol, which many of them attacked. Scholars and historians rank Trump as one of the worst presidents in American history.
Since leaving office, Trump has continued to dominate the Republican Party and is their candidate again in the 2024 presidential election. In May 2024, a jury in New York found Trump guilty on 34 felony counts of falsifying business records related to a hush-money payment to Stormy Daniels in an attempt to influence the 2016 election, making him the first former U.S. president to be convicted of a crime. He has been indicted in three other jurisdictions on 54 other felony counts related to his mishandling of classified documents and efforts to overturn the 2020 presidential election. In civil proceedings, Trump was found liable for sexual abuse and defamation in 2023, defamation in 2024, and for financial fraud in 2024. In July 2024, he survived an assassination attempt at a campaign rally in Pennsylvania."""
messages = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": f"# Please summary the following dialogue in three sentences:\n{dialogue}",
},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
).to(model.device)
terminators = [128001, 128008, 128009]
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
##### By Huggingface
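A roughly equivalent call with plain Hugging Face Transformers might look like the sketch below (assumptions: `bitsandbytes` is available for 4-bit loading, and the `messages` list is the same one defined in the Unsloth example above):
```python
# Sketch: plain Transformers inference mirroring the Unsloth example above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "hoang1123/llama3.1-8b-instruct-sum-trans"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # assumes bitsandbytes is installed
    device_map="auto",
    torch_dtype=torch.float16,
)

input_ids = tokenizer.apply_chat_template(
    messages,  # same messages list as in the Unsloth example
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=[128001, 128008, 128009],
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```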
| null |
Non_BioNLP
|
---
### Llama3.1-8b-Instruct-Finetuned
#### Datasets
- Focus on two tasks (summarization & translation)
- Summarization: [test dataset](https://huggingface.co/datasets/hoang1123/cnndm)
- Translation (vi-en): [test dataset](https://huggingface.co/datasets/hoang1123/mt-vi-en)
#### Training
- Unsloth
- Qlora
#### Inference
##### By Unsloth
```python
model_id = "hoang1123/llama3.1-8b-instruct-sum-trans"
from unsloth import FastLanguageModel
import torch
max_seq_length = 2048
dtype = torch.float16
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = model_id,
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
system_prompt = "You are a helpful AI assistant. Please answer the following question as best you can. If you don't know the answer, just say that you don't know, don't try to make up an answer."
dialogue = """Donald John Trump (born June 14, 1946) is an American politician, media personality, and businessman who served as the 45th president of the United States from 2017 to 2021.
Trump received a Bachelor of Science in economics from the University of Pennsylvania in 1968. His father named him president of his real estate business in 1971. Trump renamed it the Trump Organization and reoriented the company toward building and renovating skyscrapers, hotels, casinos, and golf courses. After a series of business failures in the late 1990s, he launched successful side ventures, mostly licensing the Trump name. From 2004 to 2015, he co-produced and hosted the reality television series The Apprentice. He and his businesses have been plaintiffs or defendants in more than 4,000 legal actions, including six business bankruptcies.
Trump won the 2016 presidential election as the Republican Party nominee against Democratic Party nominee Hillary Clinton while losing the popular vote.[a] A special counsel investigation established that Russia had interfered in the election to favor Trump. During the campaign, his political positions were described as populist, protectionist, isolationist, and nationalist. His election and policies sparked numerous protests. He was the only U.S. president without prior military or government experience. Trump promoted conspiracy theories and made many false and misleading statements during his campaigns and presidency, to a degree unprecedented in American politics. Many of his comments and actions have been characterized as racially charged, racist, and misogynistic.
As president, Trump ordered a travel ban on citizens from several Muslim-majority countries, diverted military funding toward building a wall on the U.S.–Mexico border, and implemented a family separation policy. He rolled back more than 100 environmental policies and regulations. He signed the Tax Cuts and Jobs Act of 2017, which cut taxes and eliminated the individual health insurance mandate penalty of the Affordable Care Act. He appointed Neil Gorsuch, Brett Kavanaugh, and Amy Coney Barrett to the U.S. Supreme Court. He reacted slowly to the COVID-19 pandemic, ignored or contradicted many recommendations from health officials, used political pressure to interfere with testing efforts, and spread misinformation about unproven treatments. Trump initiated a trade war with China and withdrew the U.S. from the proposed Trans-Pacific Partnership trade agreement, the Paris Agreement on climate change, and the Iran nuclear deal. He met with North Korean leader Kim Jong Un three times but made no progress on denuclearization.
Trump is the only U.S. president to have been impeached twice, in 2019 for abuse of power and obstruction of Congress after he pressured Ukraine to investigate Joe Biden, and in 2021 for incitement of insurrection. The Senate acquitted him in both cases. Trump refused to concede after he lost the 2020 presidential election to Biden, falsely claiming widespread electoral fraud, and attempted to overturn the results. On January 6, 2021, he urged his supporters to march to the U.S. Capitol, which many of them attacked. Scholars and historians rank Trump as one of the worst presidents in American history.
Since leaving office, Trump has continued to dominate the Republican Party and is their candidate again in the 2024 presidential election. In May 2024, a jury in New York found Trump guilty on 34 felony counts of falsifying business records related to a hush-money payment to Stormy Daniels in an attempt to influence the 2016 election, making him the first former U.S. president to be convicted of a crime. He has been indicted in three other jurisdictions on 54 other felony counts related to his mishandling of classified documents and efforts to overturn the 2020 presidential election. In civil proceedings, Trump was found liable for sexual abuse and defamation in 2023, defamation in 2024, and for financial fraud in 2024. In July 2024, he survived an assassination attempt at a campaign rally in Pennsylvania."""
messages = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": f"# Please summary the following dialogue in three sentences:\n{dialogue}",
},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
).to(model.device)
terminators = [128001, 128008, 128009]
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
##### By Huggingface
|
{"language": ["en", "vi"], "license": "apache-2.0"}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 43,550 |
neuropark/sahajBERT
|
neuropark
|
fill-mask
|
[
"transformers",
"pytorch",
"albert",
"pretraining",
"collaborative",
"bengali",
"bangla",
"fill-mask",
"bn",
"dataset:Wikipedia",
"dataset:Oscar",
"arxiv:1909.11942",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-06-24T16:49:26+00:00
| 179 | 12 |
---
datasets:
- Wikipedia
- Oscar
language: bn
license: apache-2.0
pipeline_tag: fill-mask
tags:
- collaborative
- bengali
- albert
- bangla
widget:
- text: জীবনে সবচেয়ে মূল্যবান জিনিস হচ্ছে [MASK]।
---
# sahajBERT
<iframe width="100%" height="1100" frameborder="0"
src="https://observablehq.com/embed/@huggingface/participants-bubbles-chart?cells=c_noaws%2Ct_noaws%2Cviewof+currentDate"></iframe>
Collaboratively pre-trained model on the Bengali language using masked language modeling (MLM) and Sentence Order Prediction (SOP) objectives.
## Model description
<!-- You can embed local or remote images using `` -->
sahajBERT is a model composed of 1) a tokenizer specially designed for Bengali and 2) an [ALBERT](https://arxiv.org/abs/1909.11942) architecture collaboratively pre-trained on a dump of Wikipedia in Bengali and the Bengali part of OSCAR.
<!-- Add more information about the collaborative training when we have time / preprint available -->
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to be fine-tuned on a downstream task that uses the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering.
We trained our model on 2 of these downstream tasks: [sequence classification](https://huggingface.co/neuropark/sahajBERT-NCC) and [token classification](https://huggingface.co/neuropark/sahajBERT-NER)
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import AlbertForMaskedLM, FillMaskPipeline, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")
# Initialize model
model = AlbertForMaskedLM.from_pretrained("neuropark/sahajBERT")
# Initialize pipeline
pipeline = FillMaskPipeline(tokenizer=tokenizer, model=model)
raw_text = "ধন্যবাদ। আপনার সাথে কথা [MASK] ভালো লাগলো" # Change me
pipeline(raw_text)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertModel, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")
# Initialize model
model = AlbertModel.from_pretrained("neuropark/sahajBERT")
text = "ধন্যবাদ। আপনার সাথে কথা বলে ভালো লাগলো" # Change me
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
#### Limitations and bias
<!-- Provide examples of latent issues and potential remediations. -->
WIP
## Training data
The tokenizer was trained on the Bengali part of OSCAR and the model on a [dump of Wikipedia in Bengali](https://huggingface.co/datasets/lhoestq/wikipedia_bn) and the Bengali part of [OSCAR](https://huggingface.co/datasets/oscar).
## Training procedure
This model was trained in a collaborative manner by volunteer participants.
<!-- Add more information about the collaborative training when we have time / preprint available + Preprocessing, hardware used, hyperparameters... (maybe use figures)-->
### Contributors leaderboard
| Rank | Username | Total contributed runtime |
|:-------------:|:-------------:|-------------:|
| 1|[khalidsaifullaah](https://huggingface.co/khalidsaifullaah)|11 days 21:02:08|
| 2|[ishanbagchi](https://huggingface.co/ishanbagchi)|9 days 20:37:00|
| 3|[tanmoyio](https://huggingface.co/tanmoyio)|9 days 18:08:34|
| 4|[debajit](https://huggingface.co/debajit)|8 days 14:15:10|
| 5|[skylord](https://huggingface.co/skylord)|6 days 16:35:29|
| 6|[ibraheemmoosa](https://huggingface.co/ibraheemmoosa)|5 days 01:05:57|
| 7|[SaulLu](https://huggingface.co/SaulLu)|5 days 00:46:36|
| 8|[lhoestq](https://huggingface.co/lhoestq)|4 days 20:11:16|
| 9|[nilavya](https://huggingface.co/nilavya)|4 days 08:51:51|
|10|[Priyadarshan](https://huggingface.co/Priyadarshan)|4 days 02:28:55|
|11|[anuragshas](https://huggingface.co/anuragshas)|3 days 05:00:55|
|12|[sujitpal](https://huggingface.co/sujitpal)|2 days 20:52:33|
|13|[manandey](https://huggingface.co/manandey)|2 days 16:17:13|
|14|[albertvillanova](https://huggingface.co/albertvillanova)|2 days 14:14:31|
|15|[justheuristic](https://huggingface.co/justheuristic)|2 days 13:20:52|
|16|[w0lfw1tz](https://huggingface.co/w0lfw1tz)|2 days 07:22:48|
|17|[smoker](https://huggingface.co/smoker)|2 days 02:52:03|
|18|[Soumi](https://huggingface.co/Soumi)|1 days 20:42:02|
|19|[Anjali](https://huggingface.co/Anjali)|1 days 16:28:00|
|20|[OptimusPrime](https://huggingface.co/OptimusPrime)|1 days 09:16:57|
|21|[theainerd](https://huggingface.co/theainerd)|1 days 04:48:57|
|22|[yhn112](https://huggingface.co/yhn112)|0 days 20:57:02|
|23|[kolk](https://huggingface.co/kolk)|0 days 17:57:37|
|24|[arnab](https://huggingface.co/arnab)|0 days 17:54:12|
|25|[imavijit](https://huggingface.co/imavijit)|0 days 16:07:26|
|26|[osanseviero](https://huggingface.co/osanseviero)|0 days 14:16:45|
|27|[subhranilsarkar](https://huggingface.co/subhranilsarkar)|0 days 13:04:46|
|28|[sagnik1511](https://huggingface.co/sagnik1511)|0 days 12:24:57|
|29|[anindabitm](https://huggingface.co/anindabitm)|0 days 08:56:44|
|30|[borzunov](https://huggingface.co/borzunov)|0 days 04:07:35|
|31|[thomwolf](https://huggingface.co/thomwolf)|0 days 03:53:15|
|32|[priyadarshan](https://huggingface.co/priyadarshan)|0 days 03:40:11|
|33|[ali007](https://huggingface.co/ali007)|0 days 03:34:37|
|34|[sbrandeis](https://huggingface.co/sbrandeis)|0 days 03:18:16|
|35|[Preetha](https://huggingface.co/Preetha)|0 days 03:13:47|
|36|[Mrinal](https://huggingface.co/Mrinal)|0 days 03:01:43|
|37|[laxya007](https://huggingface.co/laxya007)|0 days 02:18:34|
|38|[lewtun](https://huggingface.co/lewtun)|0 days 00:34:43|
|39|[Rounak](https://huggingface.co/Rounak)|0 days 00:26:10|
|40|[kshmax](https://huggingface.co/kshmax)|0 days 00:06:38|
### Hardware used
<iframe width="100%" height="251" frameborder="0"
src="https://observablehq.com/embed/@huggingface/sahajbert-hardware?cells=c1_noaws"></iframe>
## Eval results
We evaluate the quality of the sahajBERT model against 2 other benchmark models ([XLM-R-large](https://huggingface.co/xlm-roberta-large) and [IndicBert](https://huggingface.co/ai4bharat/indic-bert)) by fine-tuning each pre-trained model 3 times on two downstream tasks in Bengali:
- **NER**: named entity recognition on the Bengali split of the [WikiANN](https://huggingface.co/datasets/wikiann) dataset
- **NCC**: a multi-class news classification task on the Soham News Category Classification dataset from IndicGLUE
| Base pre-trained Model | NER - F1 (mean ± std) | NCC - Accuracy (mean ± std) |
|:-------------:|:-------------:|:-------------:|
|sahajBERT | 95.45 ± 0.53| 91.97 ± 0.47|
|[XLM-R-large](https://huggingface.co/xlm-roberta-large) | 96.48 ± 0.22| 90.05 ± 0.38|
|[IndicBert](https://huggingface.co/ai4bharat/indic-bert) | 92.52 ± 0.45| 74.46 ± 1.91|
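For reference, attaching a task head to the pre-trained checkpoint for one of these downstream tasks can start from the sketch below (the label count and dataset split are assumptions for illustration; the exact fine-tuning recipe behind the table above is not reproduced here):
```python
# Sketch: preparing sahajBERT for NER fine-tuning on the Bengali split of WikiANN.
# num_labels=7 assumes the standard WikiANN tag set (O, B/I-PER, B/I-ORG, B/I-LOC).
from datasets import load_dataset
from transformers import AlbertForTokenClassification, PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")
model = AlbertForTokenClassification.from_pretrained("neuropark/sahajBERT", num_labels=7)

wikiann_bn = load_dataset("wikiann", "bn")  # Bengali split used for the NER numbers above
print(wikiann_bn["train"][0])
```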
### BibTeX entry and citation info
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
| null |
Non_BioNLP
|
# sahajBERT
<iframe width="100%" height="1100" frameborder="0"
src="https://observablehq.com/embed/@huggingface/participants-bubbles-chart?cells=c_noaws%2Ct_noaws%2Cviewof+currentDate"></iframe>
Collaboratively pre-trained model on the Bengali language using masked language modeling (MLM) and Sentence Order Prediction (SOP) objectives.
## Model description
<!-- You can embed local or remote images using `` -->
sahajBERT is a model composed of 1) a tokenizer specially designed for Bengali and 2) an [ALBERT](https://arxiv.org/abs/1909.11942) architecture collaboratively pre-trained on a dump of Wikipedia in Bengali and the Bengali part of OSCAR.
<!-- Add more information about the collaborative training when we have time / preprint available -->
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to be fine-tuned on a downstream task that uses the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering.
We trained our model on 2 of these downstream tasks: [sequence classification](https://huggingface.co/neuropark/sahajBERT-NCC) and [token classification](https://huggingface.co/neuropark/sahajBERT-NER)
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import AlbertForMaskedLM, FillMaskPipeline, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")
# Initialize model
model = AlbertForMaskedLM.from_pretrained("neuropark/sahajBERT")
# Initialize pipeline
pipeline = FillMaskPipeline(tokenizer=tokenizer, model=model)
raw_text = "ধন্যবাদ। আপনার সাথে কথা [MASK] ভালো লাগলো" # Change me
pipeline(raw_text)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertModel, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")
# Initialize model
model = AlbertModel.from_pretrained("neuropark/sahajBERT")
text = "ধন্যবাদ। আপনার সাথে কথা বলে ভালো লাগলো" # Change me
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
#### Limitations and bias
<!-- Provide examples of latent issues and potential remediations. -->
WIP
## Training data
The tokenizer was trained on the Bengali part of OSCAR and the model on a [dump of Wikipedia in Bengali](https://huggingface.co/datasets/lhoestq/wikipedia_bn) and the Bengali part of [OSCAR](https://huggingface.co/datasets/oscar).
## Training procedure
This model was trained in a collaborative manner by volunteer participants.
<!-- Add more information about the collaborative training when we have time / preprint available + Preprocessing, hardware used, hyperparameters... (maybe use figures)-->
### Contributors leaderboard
| Rank | Username | Total contributed runtime |
|:-------------:|:-------------:|-------------:|
| 1|[khalidsaifullaah](https://huggingface.co/khalidsaifullaah)|11 days 21:02:08|
| 2|[ishanbagchi](https://huggingface.co/ishanbagchi)|9 days 20:37:00|
| 3|[tanmoyio](https://huggingface.co/tanmoyio)|9 days 18:08:34|
| 4|[debajit](https://huggingface.co/debajit)|8 days 14:15:10|
| 5|[skylord](https://huggingface.co/skylord)|6 days 16:35:29|
| 6|[ibraheemmoosa](https://huggingface.co/ibraheemmoosa)|5 days 01:05:57|
| 7|[SaulLu](https://huggingface.co/SaulLu)|5 days 00:46:36|
| 8|[lhoestq](https://huggingface.co/lhoestq)|4 days 20:11:16|
| 9|[nilavya](https://huggingface.co/nilavya)|4 days 08:51:51|
|10|[Priyadarshan](https://huggingface.co/Priyadarshan)|4 days 02:28:55|
|11|[anuragshas](https://huggingface.co/anuragshas)|3 days 05:00:55|
|12|[sujitpal](https://huggingface.co/sujitpal)|2 days 20:52:33|
|13|[manandey](https://huggingface.co/manandey)|2 days 16:17:13|
|14|[albertvillanova](https://huggingface.co/albertvillanova)|2 days 14:14:31|
|15|[justheuristic](https://huggingface.co/justheuristic)|2 days 13:20:52|
|16|[w0lfw1tz](https://huggingface.co/w0lfw1tz)|2 days 07:22:48|
|17|[smoker](https://huggingface.co/smoker)|2 days 02:52:03|
|18|[Soumi](https://huggingface.co/Soumi)|1 days 20:42:02|
|19|[Anjali](https://huggingface.co/Anjali)|1 days 16:28:00|
|20|[OptimusPrime](https://huggingface.co/OptimusPrime)|1 days 09:16:57|
|21|[theainerd](https://huggingface.co/theainerd)|1 days 04:48:57|
|22|[yhn112](https://huggingface.co/yhn112)|0 days 20:57:02|
|23|[kolk](https://huggingface.co/kolk)|0 days 17:57:37|
|24|[arnab](https://huggingface.co/arnab)|0 days 17:54:12|
|25|[imavijit](https://huggingface.co/imavijit)|0 days 16:07:26|
|26|[osanseviero](https://huggingface.co/osanseviero)|0 days 14:16:45|
|27|[subhranilsarkar](https://huggingface.co/subhranilsarkar)|0 days 13:04:46|
|28|[sagnik1511](https://huggingface.co/sagnik1511)|0 days 12:24:57|
|29|[anindabitm](https://huggingface.co/anindabitm)|0 days 08:56:44|
|30|[borzunov](https://huggingface.co/borzunov)|0 days 04:07:35|
|31|[thomwolf](https://huggingface.co/thomwolf)|0 days 03:53:15|
|32|[priyadarshan](https://huggingface.co/priyadarshan)|0 days 03:40:11|
|33|[ali007](https://huggingface.co/ali007)|0 days 03:34:37|
|34|[sbrandeis](https://huggingface.co/sbrandeis)|0 days 03:18:16|
|35|[Preetha](https://huggingface.co/Preetha)|0 days 03:13:47|
|36|[Mrinal](https://huggingface.co/Mrinal)|0 days 03:01:43|
|37|[laxya007](https://huggingface.co/laxya007)|0 days 02:18:34|
|38|[lewtun](https://huggingface.co/lewtun)|0 days 00:34:43|
|39|[Rounak](https://huggingface.co/Rounak)|0 days 00:26:10|
|40|[kshmax](https://huggingface.co/kshmax)|0 days 00:06:38|
### Hardware used
<iframe width="100%" height="251" frameborder="0"
src="https://observablehq.com/embed/@huggingface/sahajbert-hardware?cells=c1_noaws"></iframe>
## Eval results
We evaluate the quality of the sahajBERT model against 2 other benchmark models ([XLM-R-large](https://huggingface.co/xlm-roberta-large) and [IndicBert](https://huggingface.co/ai4bharat/indic-bert)) by fine-tuning each pre-trained model 3 times on two downstream tasks in Bengali:
- **NER**: named entity recognition on the Bengali split of the [WikiANN](https://huggingface.co/datasets/wikiann) dataset
- **NCC**: a multi-class news classification task on the Soham News Category Classification dataset from IndicGLUE
| Base pre-trained Model | NER - F1 (mean ± std) | NCC - Accuracy (mean ± std) |
|:-------------:|:-------------:|:-------------:|
|sahajBERT | 95.45 ± 0.53| 91.97 ± 0.47|
|[XLM-R-large](https://huggingface.co/xlm-roberta-large) | 96.48 ± 0.22| 90.05 ± 0.38|
|[IndicBert](https://huggingface.co/ai4bharat/indic-bert) | 92.52 ± 0.45| 74.46 ± 1.91|
### BibTeX entry and citation info
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
|
{"datasets": ["Wikipedia", "Oscar"], "language": "bn", "license": "apache-2.0", "pipeline_tag": "fill-mask", "tags": ["collaborative", "bengali", "albert", "bangla"], "widget": [{"text": "জীবনে সবচেয়ে মূল্যবান জিনিস হচ্ছে [MASK]।"}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION",
"QUESTION_ANSWERING"
] | 43,551 |
ymoslem/whisper-small-ga2en-v1.6-r
|
ymoslem
|
automatic-speech-recognition
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ga",
"en",
"dataset:ymoslem/IWSLT2023-GA-EN",
"dataset:ymoslem/FLEURS-GA-EN",
"dataset:ymoslem/BitesizeIrish-GA-EN",
"dataset:ymoslem/SpokenWords-GA-EN-MTed",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2024-06-02T18:20:07Z |
2024-06-03T02:27:56+00:00
| 24 | 1 |
---
base_model: openai/whisper-small
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
language:
- ga
- en
license: apache-2.0
metrics:
- bleu
- wer
tags:
- generated_from_trainer
model-index:
- name: Whisper Small GA-EN Speech Translation + VAD + warmup_ratio=0.01
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: IWSLT-2023, FLEURS, BiteSize, and SpokenWords
type: ymoslem/IWSLT2023-GA-EN
metrics:
- type: bleu
value: 29.94
name: Bleu
- type: wer
value: 64.34038721296713
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small GA-EN Speech Translation + VAD + warmup_ratio=0.01
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IWSLT-2023, FLEURS, BiteSize, and SpokenWords dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7482
- Bleu: 29.94
- Chrf: 45.74
- Wer: 64.3404
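For reference, a minimal inference sketch with the Hugging Face Transformers pipeline is shown below; the audio path is a placeholder for a 16 kHz recording of Irish speech, and the checkpoint outputs the English translation:
```python
# Sketch: Irish-to-English speech translation with this fine-tuned Whisper checkpoint.
from transformers import pipeline

translator = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-small-ga2en-v1.6-r",
)
result = translator("sample_ga.wav", generate_kwargs={"task": "translate"})  # placeholder path
print(result["text"])  # English translation of the Irish audio
```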
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Bleu | Chrf | Validation Loss | Wer |
|:-------------:|:------:|:----:|:-----:|:-----:|:---------------:|:--------:|
| 2.0518 | 0.2188 | 100 | 8.56 | 25.29 | 1.8072 | 123.9982 |
| 1.5449 | 0.4376 | 200 | 18.41 | 34.82 | 1.5746 | 83.7461 |
| 1.2518 | 0.6565 | 300 | 21.1 | 36.24 | 1.5009 | 83.9712 |
| 1.0947 | 0.8753 | 400 | 21.5 | 41.43 | 1.4582 | 89.8694 |
| 0.4439 | 1.0941 | 500 | 25.21 | 41.77 | 1.4979 | 72.5799 |
| 0.4416 | 1.3129 | 600 | 22.2 | 40.47 | 1.5107 | 79.8739 |
| 0.4417 | 1.5317 | 700 | 20.2 | 40.75 | 1.5215 | 88.8789 |
| 0.4108 | 1.7505 | 800 | 25.73 | 41.28 | 1.5278 | 67.8073 |
| 0.355 | 1.9694 | 900 | 20.6 | 39.37 | 1.5436 | 87.3030 |
| 0.1303 | 2.1882 | 1000 | 28.79 | 42.68 | 1.5936 | 68.1675 |
| 0.1421 | 2.4070 | 1100 | 27.84 | 42.58 | 1.5745 | 67.5371 |
| 0.1341 | 2.6258 | 1200 | 30.52 | 45.15 | 1.5953 | 66.5916 |
| 0.1365 | 2.8446 | 1300 | 26.93 | 43.72 | 1.6046 | 74.2909 |
| 0.0528 | 3.0635 | 1400 | 29.03 | 44.12 | 1.6303 | 64.8807 |
| 0.0519 | 3.2823 | 1500 | 27.75 | 44.34 | 1.6774 | 68.6177 |
| 0.0554 | 3.5011 | 1600 | 27.64 | 45.15 | 1.6637 | 71.1842 |
| 0.0514 | 3.7199 | 1700 | 30.26 | 44.62 | 1.6497 | 65.4660 |
| 0.0503 | 3.9387 | 1800 | 26.88 | 43.0 | 1.6780 | 70.4187 |
| 0.0259 | 4.1575 | 1900 | 29.6 | 44.51 | 1.6915 | 64.9707 |
| 0.0263 | 4.3764 | 2000 | 25.33 | 42.51 | 1.7080 | 72.3998 |
| 0.0254 | 4.5952 | 2100 | 30.59 | 45.35 | 1.6884 | 64.2954 |
| 0.0211 | 4.8140 | 2200 | 31.09 | 46.56 | 1.6984 | 64.0252 |
| 0.0137 | 5.0328 | 2300 | 28.96 | 43.67 | 1.7253 | 66.3665 |
| 0.0075 | 5.2516 | 2400 | 29.77 | 44.63 | 1.7112 | 66.9968 |
| 0.0056 | 5.4705 | 2500 | 29.96 | 45.51 | 1.7197 | 64.5655 |
| 0.0067 | 5.6893 | 2600 | 29.86 | 45.25 | 1.7464 | 66.0964 |
| 0.0064 | 5.9081 | 2700 | 29.47 | 45.36 | 1.7440 | 65.2859 |
| 0.0023 | 6.1269 | 2800 | 30.03 | 46.49 | 1.7419 | 64.4755 |
| 0.0016 | 6.3457 | 2900 | 29.76 | 45.64 | 1.7474 | 65.0158 |
| 0.0019        | 6.5646 | 3000 | 29.94 | 45.74 | 1.7482          | 64.3404  |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.2.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small GA-EN Speech Translation + VAD + warmup_ratio=0.01
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IWSLT-2023, FLEURS, BiteSize, and SpokenWords dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7482
- Bleu: 29.94
- Chrf: 45.74
- Wer: 64.3404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Bleu | Chrf | Validation Loss | Wer |
|:-------------:|:------:|:----:|:-----:|:-----:|:---------------:|:--------:|
| 2.0518 | 0.2188 | 100 | 8.56 | 25.29 | 1.8072 | 123.9982 |
| 1.5449 | 0.4376 | 200 | 18.41 | 34.82 | 1.5746 | 83.7461 |
| 1.2518 | 0.6565 | 300 | 21.1 | 36.24 | 1.5009 | 83.9712 |
| 1.0947 | 0.8753 | 400 | 21.5 | 41.43 | 1.4582 | 89.8694 |
| 0.4439 | 1.0941 | 500 | 25.21 | 41.77 | 1.4979 | 72.5799 |
| 0.4416 | 1.3129 | 600 | 22.2 | 40.47 | 1.5107 | 79.8739 |
| 0.4417 | 1.5317 | 700 | 20.2 | 40.75 | 1.5215 | 88.8789 |
| 0.4108 | 1.7505 | 800 | 25.73 | 41.28 | 1.5278 | 67.8073 |
| 0.355 | 1.9694 | 900 | 20.6 | 39.37 | 1.5436 | 87.3030 |
| 0.1303 | 2.1882 | 1000 | 28.79 | 42.68 | 1.5936 | 68.1675 |
| 0.1421 | 2.4070 | 1100 | 27.84 | 42.58 | 1.5745 | 67.5371 |
| 0.1341 | 2.6258 | 1200 | 30.52 | 45.15 | 1.5953 | 66.5916 |
| 0.1365 | 2.8446 | 1300 | 26.93 | 43.72 | 1.6046 | 74.2909 |
| 0.0528 | 3.0635 | 1400 | 29.03 | 44.12 | 1.6303 | 64.8807 |
| 0.0519 | 3.2823 | 1500 | 27.75 | 44.34 | 1.6774 | 68.6177 |
| 0.0554 | 3.5011 | 1600 | 27.64 | 45.15 | 1.6637 | 71.1842 |
| 0.0514 | 3.7199 | 1700 | 30.26 | 44.62 | 1.6497 | 65.4660 |
| 0.0503 | 3.9387 | 1800 | 26.88 | 43.0 | 1.6780 | 70.4187 |
| 0.0259 | 4.1575 | 1900 | 29.6 | 44.51 | 1.6915 | 64.9707 |
| 0.0263 | 4.3764 | 2000 | 25.33 | 42.51 | 1.7080 | 72.3998 |
| 0.0254 | 4.5952 | 2100 | 30.59 | 45.35 | 1.6884 | 64.2954 |
| 0.0211 | 4.8140 | 2200 | 31.09 | 46.56 | 1.6984 | 64.0252 |
| 0.0137 | 5.0328 | 2300 | 28.96 | 43.67 | 1.7253 | 66.3665 |
| 0.0075 | 5.2516 | 2400 | 29.77 | 44.63 | 1.7112 | 66.9968 |
| 0.0056 | 5.4705 | 2500 | 29.96 | 45.51 | 1.7197 | 64.5655 |
| 0.0067 | 5.6893 | 2600 | 29.86 | 45.25 | 1.7464 | 66.0964 |
| 0.0064 | 5.9081 | 2700 | 29.47 | 45.36 | 1.7440 | 65.2859 |
| 0.0023 | 6.1269 | 2800 | 30.03 | 46.49 | 1.7419 | 64.4755 |
| 0.0016 | 6.3457 | 2900 | 29.76 | 45.64 | 1.7474 | 65.0158 |
| 0.0019        | 6.5646 | 3000 | 29.94 | 45.74 | 1.7482          | 64.3404  |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.2.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
{"base_model": "openai/whisper-small", "datasets": ["ymoslem/IWSLT2023-GA-EN", "ymoslem/FLEURS-GA-EN", "ymoslem/BitesizeIrish-GA-EN", "ymoslem/SpokenWords-GA-EN-MTed"], "language": ["ga", "en"], "license": "apache-2.0", "metrics": ["bleu", "wer"], "tags": ["generated_from_trainer"], "model-index": [{"name": "Whisper Small GA-EN Speech Translation + VAD + warmup_ratio=0.01", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "IWSLT-2023, FLEURS, BiteSize, and SpokenWords", "type": "ymoslem/IWSLT2023-GA-EN"}, "metrics": [{"type": "bleu", "value": 29.94, "name": "Bleu"}, {"type": "wer", "value": 64.34038721296713, "name": "Wer"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 43,552 |
WeightWatcher/albert-large-v2-qqp
|
WeightWatcher
|
text-classification
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"en",
"dataset:glue",
"arxiv:1909.11942",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-02T21:11:17Z |
2023-08-03T06:14:45+00:00
| 14 | 0 |
---
datasets:
- glue
language:
- en
license: mit
metrics:
- F1 score
---
# Model Card for WeightWatcher/albert-large-v2-qqp
This model was finetuned on the GLUE/qqp task, based on the pretrained
albert-large-v2 model. Hyperparameters were (largely) taken from the following
publication, with some minor exceptions.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
## Model Details
### Model Description
- **Developed by:** https://huggingface.co/cdhinrichs
- **Model type:** Text Sequence Classification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** https://huggingface.co/albert-large-v2
## Uses
Text classification, research and development.
### Out-of-Scope Use
Not intended for production use.
See https://huggingface.co/albert-large-v2
## Bias, Risks, and Limitations
See https://huggingface.co/albert-large-v2
### Recommendations
See https://huggingface.co/albert-large-v2
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AlbertForSequenceClassification
model = AlbertForSequenceClassification.from_pretrained("WeightWatcher/albert-large-v2-qqp")
```
## Training Details
### Training Data
See https://huggingface.co/datasets/glue#qqp
QQP is a classification task, and a part of the GLUE benchmark.
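A minimal sketch of loading that split with the Hugging Face Datasets library (column names follow the standard GLUE/QQP schema):
```python
# Sketch: loading the GLUE QQP data used for fine-tuning and evaluation.
from datasets import load_dataset

qqp = load_dataset("glue", "qqp")
print(qqp["train"][0])  # {'question1': ..., 'question2': ..., 'label': 0 or 1, 'idx': ...}
```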
### Training Procedure
Adam optimization was used on the pretrained ALBERT model at
https://huggingface.co/albert-large-v2.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
#### Training Hyperparameters
Training hyperparameters (Learning Rate, Batch Size, ALBERT dropout rate,
Classifier Dropout Rate, Warmup Steps, Training Steps) were taken from Table
A.4 in
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
Max sequence length (MSL) was set to 128, differing from the above.
## Evaluation
F1 score is used to evaluate model performance.
### Testing Data, Factors & Metrics
#### Testing Data
See https://huggingface.co/datasets/glue#qqp
#### Metrics
F1 score
### Results
Training F1 score: 0.9555347548257284
Evaluation F1 score: 0.87304693979101
## Environmental Impact
The model was finetuned on a single user workstation with a single GPU. CO2
impact is expected to be minimal.
| null |
Non_BioNLP
|
# Model Card for WeightWatcher/albert-large-v2-qqp
This model was finetuned on the GLUE/qqp task, based on the pretrained
albert-large-v2 model. Hyperparameters were (largely) taken from the following
publication, with some minor exceptions.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
## Model Details
### Model Description
- **Developed by:** https://huggingface.co/cdhinrichs
- **Model type:** Text Sequence Classification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** https://huggingface.co/albert-large-v2
## Uses
Text classification, research and development.
### Out-of-Scope Use
Not intended for production use.
See https://huggingface.co/albert-large-v2
## Bias, Risks, and Limitations
See https://huggingface.co/albert-large-v2
### Recommendations
See https://huggingface.co/albert-large-v2
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AlbertForSequenceClassification
model = AlbertForSequenceClassification.from_pretrained("WeightWatcher/albert-large-v2-qqp")
```
## Training Details
### Training Data
See https://huggingface.co/datasets/glue#qqp
QQP is a classification task, and a part of the GLUE benchmark.
### Training Procedure
Adam optimization was used on the pretrained ALBERT model at
https://huggingface.co/albert-large-v2.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
#### Training Hyperparameters
Training hyperparameters (Learning Rate, Batch Size, ALBERT dropout rate,
Classifier Dropout Rate, Warmup Steps, Training Steps) were taken from Table
A.4 in
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
Max sequence length (MSL) was set to 128, differing from the above.
## Evaluation
F1 score is used to evaluate model performance.
### Testing Data, Factors & Metrics
#### Testing Data
See https://huggingface.co/datasets/glue#qqp
#### Metrics
F1 score
### Results
Training F1 score: 0.9555347548257284
Evaluation F1 score: 0.87304693979101
## Environmental Impact
The model was finetuned on a single user workstation with a single GPU. CO2
impact is expected to be minimal.
|
{"datasets": ["glue"], "language": ["en"], "license": "mit", "metrics": ["F1 score"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,553 |
richtsai1103/finetuning-summarization-model
|
richtsai1103
|
text2text-generation
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-22T02:50:23Z |
2023-04-28T07:31:12+00:00
| 8 | 0 |
---
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: finetuning-summarization-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-summarization-model
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3028
- Rouge1: 29.1184
- Rouge2: 21.1309
- Rougel: 28.3412
- Rougelsum: 28.4871
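For reference, a minimal inference sketch with the summarization pipeline is shown below; the input text is a placeholder and the length limits are arbitrary choices:
```python
# Sketch: summarizing a document with this fine-tuned mT5 checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="richtsai1103/finetuning-summarization-model",
)
article = "..."  # placeholder: the document to summarize
print(summarizer(article, max_length=128, min_length=30)[0]["summary_text"])
```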
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 6.821 | 1.0 | 450 | 1.7464 | 31.7328 | 21.1788 | 30.3949 | 30.5202 |
| 2.1307 | 2.0 | 900 | 1.4939 | 31.3208 | 22.0215 | 30.2589 | 30.3872 |
| 1.7915 | 3.0 | 1350 | 1.4322 | 28.7824 | 19.472 | 27.926 | 28.2177 |
| 1.6186 | 4.0 | 1800 | 1.3830 | 29.2568 | 20.6076 | 28.4825 | 28.6486 |
| 1.5148 | 5.0 | 2250 | 1.3504 | 29.308 | 21.0698 | 28.4755 | 28.6885 |
| 1.427 | 6.0 | 2700 | 1.3177 | 29.0294 | 20.706 | 28.271 | 28.3385 |
| 1.3793 | 7.0 | 3150 | 1.3172 | 28.9276 | 20.922 | 28.1795 | 28.3241 |
| 1.3536 | 8.0 | 3600 | 1.3028 | 29.1184 | 21.1309 | 28.3412 | 28.4871 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-summarization-model
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3028
- Rouge1: 29.1184
- Rouge2: 21.1309
- Rougel: 28.3412
- Rougelsum: 28.4871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 6.821 | 1.0 | 450 | 1.7464 | 31.7328 | 21.1788 | 30.3949 | 30.5202 |
| 2.1307 | 2.0 | 900 | 1.4939 | 31.3208 | 22.0215 | 30.2589 | 30.3872 |
| 1.7915 | 3.0 | 1350 | 1.4322 | 28.7824 | 19.472 | 27.926 | 28.2177 |
| 1.6186 | 4.0 | 1800 | 1.3830 | 29.2568 | 20.6076 | 28.4825 | 28.6486 |
| 1.5148 | 5.0 | 2250 | 1.3504 | 29.308 | 21.0698 | 28.4755 | 28.6885 |
| 1.427 | 6.0 | 2700 | 1.3177 | 29.0294 | 20.706 | 28.271 | 28.3385 |
| 1.3793 | 7.0 | 3150 | 1.3172 | 28.9276 | 20.922 | 28.1795 | 28.3241 |
| 1.3536 | 8.0 | 3600 | 1.3028 | 29.1184 | 21.1309 | 28.3412 | 28.4871 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
{"license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-summarization-model", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 43,554 |
fathyshalab/reklambox-oeffentlicher-verkehr-vermietung-setfit
|
fathyshalab
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-04-29T16:29:13Z |
2023-04-29T16:29:23+00:00
| 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# Linkthat/reklambox-oeffentlicher-verkehr-vermietung-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-oeffentlicher-verkehr-vermietung-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# Linkthat/reklambox-oeffentlicher-verkehr-vermietung-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-oeffentlicher-verkehr-vermietung-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,555 |
LoneStriker/Bagel-Hermes-2x34b-4.0bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"yi",
"moe",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-01-12T22:59:16Z |
2024-01-12T23:11:50+00:00
| 3 | 0 |
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
tags:
- yi
- moe
---

# Bagel-Hermes-2x34B
This is the model card for Bagel-Hermes-2x34B. I used [mergekit](https://github.com/cg123/mergekit) to build this MoE (mixture-of-experts) model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, and [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) uses ChatML, you can utilize ChatML and other prompt templates provided by bagel.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
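If the tokenizer in this repo ships with a ChatML chat template (an assumption — check the tokenizer config), the prompt can also be assembled with `transformers` rather than by hand; the messages below are placeholders:
```python
from transformers import AutoTokenizer

# Assumes the tokenizer config of this repo includes a ChatML chat template.
tokenizer = AutoTokenizer.from_pretrained("LoneStriker/Bagel-Hermes-2x34b-4.0bpw-h6-exl2")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a mixture-of-experts model is in one sentence."},
]

# Render the conversation into a single prompt string (no tokenization yet).
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```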
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: bagel-dpo-34b-v0.2
positive_prompts: ["question answering", "Q:", science", "biology", "chemistry", "physics"]
- source_model: Nous-Hermes-2-Yi-34B
positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
```
# Quantized versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke); a brief loading sketch follows the links below.
##### GPTQ
- [TheBloke/Bagel-Hermes-2x34B-GPTQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GPTQ)
##### GGUF
- [TheBloke/Bagel-Hermes-2x34B-GGUF](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GGUF)
##### AWQ
- [TheBloke/Bagel-Hermes-2x34B-AWQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-AWQ)
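As a rough loading sketch (not an official snippet from those repos), the GPTQ build can typically be loaded through `transformers` when `optimum` and `auto-gptq` are installed; the prompt and generation settings below are placeholders:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes optimum + auto-gptq are installed and enough GPU memory is available.
model_id = "TheBloke/Bagel-Hermes-2x34B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Q: What is a mixture of experts?\nA:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```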
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
| null |
Non_BioNLP
|

# Bagel-Hermes-2x34B
This is the model card for Bagel-Hermes-2x34B. I used [mergekit](https://github.com/cg123/mergekit) to build this MoE (mixture-of-experts) model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, and [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) uses ChatML, you can utilize ChatML and other prompt templates provided by bagel.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
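If the tokenizer in this repo ships with a ChatML chat template (an assumption — check the tokenizer config), the prompt can also be assembled with `transformers` rather than by hand; the messages below are placeholders:
```python
from transformers import AutoTokenizer

# Assumes the tokenizer config of this repo includes a ChatML chat template.
tokenizer = AutoTokenizer.from_pretrained("LoneStriker/Bagel-Hermes-2x34b-4.0bpw-h6-exl2")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a mixture-of-experts model is in one sentence."},
]

# Render the conversation into a single prompt string (no tokenization yet).
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```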
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: bagel-dpo-34b-v0.2
positive_prompts: ["question answering", "Q:", science", "biology", "chemistry", "physics"]
- source_model: Nous-Hermes-2-Yi-34B
positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
```
# Quantized versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke); a brief loading sketch follows the links below.
##### GPTQ
- [TheBloke/Bagel-Hermes-2x34B-GPTQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GPTQ)
##### GGUF
- [TheBloke/Bagel-Hermes-2x34B-GGUF](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GGUF)
##### AWQ
- [TheBloke/Bagel-Hermes-2x34B-AWQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-AWQ)
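As a rough loading sketch (not an official snippet from those repos), the GPTQ build can typically be loaded through `transformers` when `optimum` and `auto-gptq` are installed; the prompt and generation settings below are placeholders:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes optimum + auto-gptq are installed and enough GPU memory is available.
model_id = "TheBloke/Bagel-Hermes-2x34B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Q: What is a mixture of experts?\nA:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```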
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
|
{"license": "other", "license_name": "yi-license", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE", "tags": ["yi", "moe"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 43,556 |
fpc/bge-micro-smiles
|
fpc
|
sentence-similarity
|
[
"sentence-transformers",
"onnx",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:3210255",
"loss:CachedMultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:TaylorAI/bge-micro",
"base_model:quantized:TaylorAI/bge-micro",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-10-09T18:52:08Z |
2024-10-20T17:48:26+00:00
| 4 | 0 |
---
base_model: TaylorAI/bge-micro
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:3210255
- loss:CachedMultipleNegativesRankingLoss
widget:
- source_sentence: donepezil hydrochloride monohydrate
sentences:
- Cn1nccc1[C@H]1CC[C@H](O[Si](C)(C)C(C)(C)C)C[C@@H]1OC(=O)c1ccccc1
- COc1cc2c(cc1OC)C(=O)C(CC1CCN(Cc3ccccc3)CC1)C2.Cl.O
- C(=O)(OC)C1=CC=C(C=C1)CC(C)=O
- source_sentence: 6-Cyclopropylmethoxy-5-(3,3-difluoro-azetidin-1-yl)-pyridine-2-carboxylic
acid tert-butyl-(5-methyl-[1,3,4]oxadiazol-2-ylmethyl)-amide
sentences:
- Cc1nnc(CN(C(=O)c2ccc(N3CC(F)(F)C3)c(OCC3CC3)n2)C(C)(C)C)o1
- COc1cccc(CCCC=C(Br)Br)c1
- CN(C)CCNC(=O)c1ccc2oc(=O)n(Cc3ccc4[nH]c(=O)[nH]c4c3)c2c1
- source_sentence: N-(2-chlorophenyl)-6,8-difluoro-N-methyl-4H-thieno[3,2-c]chromene-2-carboxamide
sentences:
- CN(C(=O)c1cc2c(s1)-c1cc(F)cc(F)c1OC2)c1ccccc1Cl
- ClC(C(=O)OCCOCC1=CC=C(C=C1)F)C
- C(C)OC(\C=C(/C)\OC1=C(C(=CC=C1F)OC(C)C)F)=O
- source_sentence: 6-[2-[(3-chlorophenyl)methyl]-1,3,3a,4,6,6a-hexahydropyrrolo[3,4-c]pyrrol-5-yl]-3-(trifluoromethyl)-[1,2,4]triazolo[4,3-b]pyridazine
sentences:
- CC(=O)OCCOCn1cc(C)c(=O)[nH]c1=O
- NC1=C(C(=NN1C1=C(C=C(C=C1Cl)C(F)(F)F)Cl)C#N)S(=O)(=O)C
- ClC=1C=C(C=CC1)CN1CC2CN(CC2C1)C=1C=CC=2N(N1)C(=NN2)C(F)(F)F
- source_sentence: (±)-cis-2-(4-methoxyphenyl)-3-acetoxy-5-[2-(dimethylamino)ethyl]-8-chloro-2,3-dihydro-1,5-benzothiazepin-4(5H)-one
hydrochloride
sentences:
- N(=[N+]=[N-])C(C(=O)C1=NC(=C(C(=N1)C(C)(C)C)O)C(C)(C)C)C
- O[C@@H]1[C@H](O)[C@@H](Oc2nc(N3CCNCC3)nc3ccccc23)C[C@H]1O
- Cl.COC1=CC=C(C=C1)[C@@H]1SC2=C(N(C([C@@H]1OC(C)=O)=O)CCN(C)C)C=CC(=C2)Cl
model-index:
- name: MPNet base trained on AllNLI triplets
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: bge micro test
type: bge-micro-test
metrics:
- type: pearson_cosine
value: .nan
name: Pearson Cosine
- type: spearman_cosine
value: .nan
name: Spearman Cosine
- type: pearson_manhattan
value: .nan
name: Pearson Manhattan
- type: spearman_manhattan
value: .nan
name: Spearman Manhattan
- type: pearson_euclidean
value: .nan
name: Pearson Euclidean
- type: spearman_euclidean
value: .nan
name: Spearman Euclidean
- type: pearson_dot
value: .nan
name: Pearson Dot
- type: spearman_dot
value: .nan
name: Spearman Dot
- type: pearson_max
value: .nan
name: Pearson Max
- type: spearman_max
value: .nan
name: Spearman Max
---
# MPNet base trained on AllNLI triplets
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [TaylorAI/bge-micro](https://huggingface.co/TaylorAI/bge-micro). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [TaylorAI/bge-micro](https://huggingface.co/TaylorAI/bge-micro) <!-- at revision 4bccbd43513eb9fecf444af6eecde76e55f4c839 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("fpc/bge-micro-smiles")
# Run inference
sentences = [
'(±)-cis-2-(4-methoxyphenyl)-3-acetoxy-5-[2-(dimethylamino)ethyl]-8-chloro-2,3-dihydro-1,5-benzothiazepin-4(5H)-one hydrochloride',
'Cl.COC1=CC=C(C=C1)[C@@H]1SC2=C(N(C([C@@H]1OC(C)=O)=O)CCN(C)C)C=CC(=C2)Cl',
'O[C@@H]1[C@H](O)[C@@H](Oc2nc(N3CCNCC3)nc3ccccc23)C[C@H]1O',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 3,210,255 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 42.57 tokens</li><li>max: 153 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 40.02 tokens</li><li>max: 325 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------|
| <code>4-t-butylbromobenzene</code> | <code>C(C)(C)(C)C1=CC=C(C=C1)Br</code> |
| <code>1-methyl-4-(morpholine-4-carbonyl)-N-(2-phenyl-[1,2,4]triazolo[1,5-a]pyridin-7-yl)-1H-pyrazole-5-carboxamide</code> | <code>CN1N=CC(=C1C(=O)NC1=CC=2N(C=C1)N=C(N2)C2=CC=CC=C2)C(=O)N2CCOCC2</code> |
| <code>Phthalimide</code> | <code>C1(C=2C(C(N1)=O)=CC=CC2)=O</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
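For readers who want to reproduce a comparable setup, a minimal fine-tuning sketch with this loss could look as follows; the toy name/SMILES pairs, batch size, and epoch count are placeholders, not the actual training data or configuration used here:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the same base model named in this card.
model = SentenceTransformer("TaylorAI/bge-micro")

# Toy anchor/positive pairs: chemical name -> SMILES string (illustrative only).
train_examples = [
    InputExample(texts=["aspirin", "CC(=O)Oc1ccccc1C(=O)O"]),
    InputExample(texts=["caffeine", "Cn1cnc2c1c(=O)n(C)c(=O)n2C"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch negatives with gradient caching, matching the loss named above.
train_loss = losses.CachedMultipleNegativesRankingLoss(model, scale=20.0)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```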
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 512
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | bge-micro-test_spearman_cosine |
|:------:|:-----:|:-------------:|:------------------------------:|
| 0.0159 | 100 | 6.1861 | - |
| 0.0319 | 200 | 6.0547 | - |
| 0.0478 | 300 | 5.6041 | - |
| 0.0638 | 400 | 4.9367 | - |
| 0.0797 | 500 | 4.3412 | - |
| 0.0957 | 600 | 3.8245 | - |
| 0.1116 | 700 | 3.3188 | - |
| 0.1276 | 800 | 2.869 | - |
| 0.1435 | 900 | 2.5149 | - |
| 0.1595 | 1000 | 2.2282 | - |
| 0.1754 | 1100 | 2.0046 | - |
| 0.1914 | 1200 | 1.8032 | - |
| 0.2073 | 1300 | 1.6289 | - |
| 0.2232 | 1400 | 1.4567 | - |
| 0.2392 | 1500 | 1.3326 | - |
| 0.2551 | 1600 | 1.2127 | - |
| 0.2711 | 1700 | 1.0909 | - |
| 0.2870 | 1800 | 1.0021 | - |
| 0.3030 | 1900 | 0.9135 | - |
| 0.3189 | 2000 | 0.8378 | - |
| 0.3349 | 2100 | 0.7758 | - |
| 0.3508 | 2200 | 0.7031 | - |
| 0.3668 | 2300 | 0.6418 | - |
| 0.3827 | 2400 | 0.5965 | - |
| 0.3987 | 2500 | 0.5461 | - |
| 0.4146 | 2600 | 0.5039 | - |
| 0.4306 | 2700 | 0.4674 | - |
| 0.4465 | 2800 | 0.4339 | - |
| 0.4624 | 2900 | 0.4045 | - |
| 0.4784 | 3000 | 0.373 | - |
| 0.4943 | 3100 | 0.3566 | - |
| 0.5103 | 3200 | 0.3348 | - |
| 0.5262 | 3300 | 0.3215 | - |
| 0.5422 | 3400 | 0.302 | - |
| 0.5581 | 3500 | 0.2826 | - |
| 0.5741 | 3600 | 0.2803 | - |
| 0.5900 | 3700 | 0.2616 | - |
| 0.6060 | 3800 | 0.2554 | - |
| 0.6219 | 3900 | 0.234 | - |
| 0.6379 | 4000 | 0.2306 | - |
| 0.6538 | 4100 | 0.2224 | - |
| 0.6697 | 4200 | 0.2141 | - |
| 0.6857 | 4300 | 0.2117 | - |
| 0.7016 | 4400 | 0.204 | - |
| 0.7176 | 4500 | 0.198 | - |
| 0.7335 | 4600 | 0.1986 | - |
| 0.7495 | 4700 | 0.1821 | - |
| 0.7654 | 4800 | 0.1813 | - |
| 0.7814 | 4900 | 0.1741 | - |
| 0.7973 | 5000 | 0.1697 | - |
| 0.8133 | 5100 | 0.1655 | - |
| 0.8292 | 5200 | 0.1623 | - |
| 0.8452 | 5300 | 0.1593 | - |
| 0.8611 | 5400 | 0.1566 | - |
| 0.8771 | 5500 | 0.151 | - |
| 0.8930 | 5600 | 0.1526 | - |
| 0.9089 | 5700 | 0.1453 | - |
| 0.9249 | 5800 | 0.1448 | - |
| 0.9408 | 5900 | 0.1369 | - |
| 0.9568 | 6000 | 0.1409 | - |
| 0.9727 | 6100 | 0.1373 | - |
| 0.9887 | 6200 | 0.133 | - |
| 1.0046 | 6300 | 0.1269 | - |
| 1.0206 | 6400 | 0.1274 | - |
| 1.0365 | 6500 | 0.1271 | - |
| 1.0525 | 6600 | 0.1216 | - |
| 1.0684 | 6700 | 0.1176 | - |
| 1.0844 | 6800 | 0.1208 | - |
| 1.1003 | 6900 | 0.1177 | - |
| 1.1162 | 7000 | 0.1175 | - |
| 1.1322 | 7100 | 0.1109 | - |
| 1.1481 | 7200 | 0.1118 | - |
| 1.1641 | 7300 | 0.1085 | - |
| 1.1800 | 7400 | 0.1155 | - |
| 1.1960 | 7500 | 0.1079 | - |
| 1.2119 | 7600 | 0.1087 | - |
| 1.2279 | 7700 | 0.1004 | - |
| 1.2438 | 7800 | 0.1084 | - |
| 1.2598 | 7900 | 0.1089 | - |
| 1.2757 | 8000 | 0.1012 | - |
| 1.2917 | 8100 | 0.1037 | - |
| 1.3076 | 8200 | 0.1004 | - |
| 1.3236 | 8300 | 0.0979 | - |
| 1.3395 | 8400 | 0.1007 | - |
| 1.3554 | 8500 | 0.0956 | - |
| 1.3714 | 8600 | 0.0972 | - |
| 1.3873 | 8700 | 0.0947 | - |
| 1.4033 | 8800 | 0.0931 | - |
| 1.4192 | 8900 | 0.0948 | - |
| 1.4352 | 9000 | 0.0925 | - |
| 1.4511 | 9100 | 0.0933 | - |
| 1.4671 | 9200 | 0.0888 | - |
| 1.4830 | 9300 | 0.0877 | - |
| 1.4990 | 9400 | 0.0889 | - |
| 1.5149 | 9500 | 0.0895 | - |
| 1.5309 | 9600 | 0.0892 | - |
| 1.5468 | 9700 | 0.089 | - |
| 1.5627 | 9800 | 0.0828 | - |
| 1.5787 | 9900 | 0.0906 | - |
| 1.5946 | 10000 | 0.0893 | - |
| 1.6106 | 10100 | 0.0849 | - |
| 1.6265 | 10200 | 0.0811 | - |
| 1.6425 | 10300 | 0.0823 | - |
| 1.6584 | 10400 | 0.0806 | - |
| 1.6744 | 10500 | 0.0815 | - |
| 1.6903 | 10600 | 0.0832 | - |
| 1.7063 | 10700 | 0.0856 | - |
| 1.7222 | 10800 | 0.081 | - |
| 1.7382 | 10900 | 0.0831 | - |
| 1.7541 | 11000 | 0.0767 | - |
| 1.7701 | 11100 | 0.0779 | - |
| 1.7860 | 11200 | 0.0792 | - |
| 1.8019 | 11300 | 0.0771 | - |
| 1.8179 | 11400 | 0.0783 | - |
| 1.8338 | 11500 | 0.0749 | - |
| 1.8498 | 11600 | 0.0755 | - |
| 1.8657 | 11700 | 0.0778 | - |
| 1.8817 | 11800 | 0.0753 | - |
| 1.8976 | 11900 | 0.0767 | - |
| 1.9136 | 12000 | 0.0725 | - |
| 1.9295 | 12100 | 0.0744 | - |
| 1.9455 | 12200 | 0.0743 | - |
| 1.9614 | 12300 | 0.0722 | - |
| 1.9774 | 12400 | 0.0712 | - |
| 1.9933 | 12500 | 0.0709 | - |
| 2.0092 | 12600 | 0.0694 | - |
| 2.0252 | 12700 | 0.0705 | - |
| 2.0411 | 12800 | 0.0715 | - |
| 2.0571 | 12900 | 0.0705 | - |
| 2.0730 | 13000 | 0.0653 | - |
| 2.0890 | 13100 | 0.0698 | - |
| 2.1049 | 13200 | 0.0676 | - |
| 2.1209 | 13300 | 0.0684 | - |
| 2.1368 | 13400 | 0.0644 | - |
| 2.1528 | 13500 | 0.0652 | - |
| 2.1687 | 13600 | 0.0673 | - |
| 2.1847 | 13700 | 0.067 | - |
| 2.2006 | 13800 | 0.0645 | - |
| 2.2166 | 13900 | 0.0633 | - |
| 2.2325 | 14000 | 0.0645 | - |
| 2.2484 | 14100 | 0.0698 | - |
| 2.2644 | 14200 | 0.0655 | - |
| 2.2803 | 14300 | 0.0654 | - |
| 2.2963 | 14400 | 0.0656 | - |
| 2.3122 | 14500 | 0.0631 | - |
| 2.3282 | 14600 | 0.0628 | - |
| 2.3441 | 14700 | 0.0671 | - |
| 2.3601 | 14800 | 0.0659 | - |
| 2.3760 | 14900 | 0.0619 | - |
| 2.3920 | 15000 | 0.0618 | - |
| 2.4079 | 15100 | 0.0624 | - |
| 2.4239 | 15200 | 0.0616 | - |
| 2.4398 | 15300 | 0.0631 | - |
| 2.4557 | 15400 | 0.0639 | - |
| 2.4717 | 15500 | 0.0585 | - |
| 2.4876 | 15600 | 0.0607 | - |
| 2.5036 | 15700 | 0.0615 | - |
| 2.5195 | 15800 | 0.062 | - |
| 2.5355 | 15900 | 0.0621 | - |
| 2.5514 | 16000 | 0.0608 | - |
| 2.5674 | 16100 | 0.0594 | - |
| 2.5833 | 16200 | 0.0631 | - |
| 2.5993 | 16300 | 0.0635 | - |
| 2.6152 | 16400 | 0.06 | - |
| 2.6312 | 16500 | 0.0581 | - |
| 2.6471 | 16600 | 0.0607 | - |
| 2.6631 | 16700 | 0.0577 | - |
| 2.6790 | 16800 | 0.0592 | - |
| 2.6949 | 16900 | 0.0625 | - |
| 2.7109 | 17000 | 0.0622 | - |
| 2.7268 | 17100 | 0.0573 | - |
| 2.7428 | 17200 | 0.0613 | - |
| 2.7587 | 17300 | 0.0587 | - |
| 2.7747 | 17400 | 0.0587 | - |
| 2.7906 | 17500 | 0.0588 | - |
| 2.8066 | 17600 | 0.0568 | - |
| 2.8225 | 17700 | 0.0573 | - |
| 2.8385 | 17800 | 0.0575 | - |
| 2.8544 | 17900 | 0.0575 | - |
| 2.8704 | 18000 | 0.0582 | - |
| 2.8863 | 18100 | 0.0577 | - |
| 2.9022 | 18200 | 0.057 | - |
| 2.9182 | 18300 | 0.0572 | - |
| 2.9341 | 18400 | 0.0558 | - |
| 2.9501 | 18500 | 0.0578 | - |
| 2.9660 | 18600 | 0.0567 | - |
| 2.9820 | 18700 | 0.0569 | - |
| 2.9979 | 18800 | 0.0547 | - |
| 3.0139 | 18900 | 0.0542 | - |
| 3.0298 | 19000 | 0.0563 | - |
| 3.0458 | 19100 | 0.0549 | - |
| 3.0617 | 19200 | 0.0531 | - |
| 3.0777 | 19300 | 0.053 | - |
| 3.0936 | 19400 | 0.0557 | - |
| 3.1096 | 19500 | 0.0546 | - |
| 3.1255 | 19600 | 0.0518 | - |
| 3.1414 | 19700 | 0.0517 | - |
| 3.1574 | 19800 | 0.0528 | - |
| 3.1733 | 19900 | 0.0551 | - |
| 3.1893 | 20000 | 0.0544 | - |
| 3.2052 | 20100 | 0.0526 | - |
| 3.2212 | 20200 | 0.0494 | - |
| 3.2371 | 20300 | 0.0537 | - |
| 3.2531 | 20400 | 0.0568 | - |
| 3.2690 | 20500 | 0.0525 | - |
| 3.2850 | 20600 | 0.0566 | - |
| 3.3009 | 20700 | 0.0539 | - |
| 3.3169 | 20800 | 0.0531 | - |
| 3.3328 | 20900 | 0.0524 | - |
| 3.3487 | 21000 | 0.0543 | - |
| 3.3647 | 21100 | 0.0537 | - |
| 3.3806 | 21200 | 0.0524 | - |
| 3.3966 | 21300 | 0.0516 | - |
| 3.4125 | 21400 | 0.0537 | - |
| 3.4285 | 21500 | 0.0515 | - |
| 3.4444 | 21600 | 0.0537 | - |
| 3.4604 | 21700 | 0.0526 | - |
| 3.4763 | 21800 | 0.0508 | - |
| 3.4923 | 21900 | 0.0526 | - |
| 3.5082 | 22000 | 0.0521 | - |
| 3.5242 | 22100 | 0.054 | - |
| 3.5401 | 22200 | 0.053 | - |
| 3.5561 | 22300 | 0.0509 | - |
| 3.5720 | 22400 | 0.0526 | - |
| 3.5879 | 22500 | 0.0551 | - |
| 3.6039 | 22600 | 0.0556 | - |
| 3.6198 | 22700 | 0.0497 | - |
| 3.6358 | 22800 | 0.0515 | - |
| 3.6517 | 22900 | 0.0514 | - |
| 3.6677 | 23000 | 0.0503 | - |
| 3.6836 | 23100 | 0.0515 | - |
| 3.6996 | 23200 | 0.0553 | - |
| 3.7155 | 23300 | 0.0519 | - |
| 3.7315 | 23400 | 0.0549 | - |
| 3.7474 | 23500 | 0.0522 | - |
| 3.7634 | 23600 | 0.0526 | - |
| 3.7793 | 23700 | 0.0525 | - |
| 3.7952 | 23800 | 0.051 | - |
| 3.8112 | 23900 | 0.0509 | - |
| 3.8271 | 24000 | 0.0503 | - |
| 3.8431 | 24100 | 0.0524 | - |
| 3.8590 | 24200 | 0.0526 | - |
| 3.8750 | 24300 | 0.0512 | - |
| 3.8909 | 24400 | 0.0518 | - |
| 3.9069 | 24500 | 0.0521 | - |
| 3.9228 | 24600 | 0.0524 | - |
| 3.9388 | 24700 | 0.051 | - |
| 3.9547 | 24800 | 0.0535 | - |
| 3.9707 | 24900 | 0.0508 | - |
| 3.9866 | 25000 | 0.0514 | - |
| 4.0 | 25084 | - | nan |
</details>
### Framework Versions
- Python: 3.10.9
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.4.1+cu124
- Accelerate: 0.33.0
- Datasets: 2.18.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# MPNet base trained on AllNLI triplets
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [TaylorAI/bge-micro](https://huggingface.co/TaylorAI/bge-micro). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [TaylorAI/bge-micro](https://huggingface.co/TaylorAI/bge-micro) <!-- at revision 4bccbd43513eb9fecf444af6eecde76e55f4c839 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("fpc/bge-micro-smiles")
# Run inference
sentences = [
'(±)-cis-2-(4-methoxyphenyl)-3-acetoxy-5-[2-(dimethylamino)ethyl]-8-chloro-2,3-dihydro-1,5-benzothiazepin-4(5H)-one hydrochloride',
'Cl.COC1=CC=C(C=C1)[C@@H]1SC2=C(N(C([C@@H]1OC(C)=O)=O)CCN(C)C)C=CC(=C2)Cl',
'O[C@@H]1[C@H](O)[C@@H](Oc2nc(N3CCNCC3)nc3ccccc23)C[C@H]1O',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 3,210,255 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 42.57 tokens</li><li>max: 153 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 40.02 tokens</li><li>max: 325 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------|
| <code>4-t-butylbromobenzene</code> | <code>C(C)(C)(C)C1=CC=C(C=C1)Br</code> |
| <code>1-methyl-4-(morpholine-4-carbonyl)-N-(2-phenyl-[1,2,4]triazolo[1,5-a]pyridin-7-yl)-1H-pyrazole-5-carboxamide</code> | <code>CN1N=CC(=C1C(=O)NC1=CC=2N(C=C1)N=C(N2)C2=CC=CC=C2)C(=O)N2CCOCC2</code> |
| <code>Phthalimide</code> | <code>C1(C=2C(C(N1)=O)=CC=CC2)=O</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
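For readers who want to reproduce a comparable setup, a minimal fine-tuning sketch with this loss could look as follows; the toy name/SMILES pairs, batch size, and epoch count are placeholders, not the actual training data or configuration used here:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the same base model named in this card.
model = SentenceTransformer("TaylorAI/bge-micro")

# Toy anchor/positive pairs: chemical name -> SMILES string (illustrative only).
train_examples = [
    InputExample(texts=["aspirin", "CC(=O)Oc1ccccc1C(=O)O"]),
    InputExample(texts=["caffeine", "Cn1cnc2c1c(=O)n(C)c(=O)n2C"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch negatives with gradient caching, matching the loss named above.
train_loss = losses.CachedMultipleNegativesRankingLoss(model, scale=20.0)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```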
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 512
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | bge-micro-test_spearman_cosine |
|:------:|:-----:|:-------------:|:------------------------------:|
| 0.0159 | 100 | 6.1861 | - |
| 0.0319 | 200 | 6.0547 | - |
| 0.0478 | 300 | 5.6041 | - |
| 0.0638 | 400 | 4.9367 | - |
| 0.0797 | 500 | 4.3412 | - |
| 0.0957 | 600 | 3.8245 | - |
| 0.1116 | 700 | 3.3188 | - |
| 0.1276 | 800 | 2.869 | - |
| 0.1435 | 900 | 2.5149 | - |
| 0.1595 | 1000 | 2.2282 | - |
| 0.1754 | 1100 | 2.0046 | - |
| 0.1914 | 1200 | 1.8032 | - |
| 0.2073 | 1300 | 1.6289 | - |
| 0.2232 | 1400 | 1.4567 | - |
| 0.2392 | 1500 | 1.3326 | - |
| 0.2551 | 1600 | 1.2127 | - |
| 0.2711 | 1700 | 1.0909 | - |
| 0.2870 | 1800 | 1.0021 | - |
| 0.3030 | 1900 | 0.9135 | - |
| 0.3189 | 2000 | 0.8378 | - |
| 0.3349 | 2100 | 0.7758 | - |
| 0.3508 | 2200 | 0.7031 | - |
| 0.3668 | 2300 | 0.6418 | - |
| 0.3827 | 2400 | 0.5965 | - |
| 0.3987 | 2500 | 0.5461 | - |
| 0.4146 | 2600 | 0.5039 | - |
| 0.4306 | 2700 | 0.4674 | - |
| 0.4465 | 2800 | 0.4339 | - |
| 0.4624 | 2900 | 0.4045 | - |
| 0.4784 | 3000 | 0.373 | - |
| 0.4943 | 3100 | 0.3566 | - |
| 0.5103 | 3200 | 0.3348 | - |
| 0.5262 | 3300 | 0.3215 | - |
| 0.5422 | 3400 | 0.302 | - |
| 0.5581 | 3500 | 0.2826 | - |
| 0.5741 | 3600 | 0.2803 | - |
| 0.5900 | 3700 | 0.2616 | - |
| 0.6060 | 3800 | 0.2554 | - |
| 0.6219 | 3900 | 0.234 | - |
| 0.6379 | 4000 | 0.2306 | - |
| 0.6538 | 4100 | 0.2224 | - |
| 0.6697 | 4200 | 0.2141 | - |
| 0.6857 | 4300 | 0.2117 | - |
| 0.7016 | 4400 | 0.204 | - |
| 0.7176 | 4500 | 0.198 | - |
| 0.7335 | 4600 | 0.1986 | - |
| 0.7495 | 4700 | 0.1821 | - |
| 0.7654 | 4800 | 0.1813 | - |
| 0.7814 | 4900 | 0.1741 | - |
| 0.7973 | 5000 | 0.1697 | - |
| 0.8133 | 5100 | 0.1655 | - |
| 0.8292 | 5200 | 0.1623 | - |
| 0.8452 | 5300 | 0.1593 | - |
| 0.8611 | 5400 | 0.1566 | - |
| 0.8771 | 5500 | 0.151 | - |
| 0.8930 | 5600 | 0.1526 | - |
| 0.9089 | 5700 | 0.1453 | - |
| 0.9249 | 5800 | 0.1448 | - |
| 0.9408 | 5900 | 0.1369 | - |
| 0.9568 | 6000 | 0.1409 | - |
| 0.9727 | 6100 | 0.1373 | - |
| 0.9887 | 6200 | 0.133 | - |
| 1.0046 | 6300 | 0.1269 | - |
| 1.0206 | 6400 | 0.1274 | - |
| 1.0365 | 6500 | 0.1271 | - |
| 1.0525 | 6600 | 0.1216 | - |
| 1.0684 | 6700 | 0.1176 | - |
| 1.0844 | 6800 | 0.1208 | - |
| 1.1003 | 6900 | 0.1177 | - |
| 1.1162 | 7000 | 0.1175 | - |
| 1.1322 | 7100 | 0.1109 | - |
| 1.1481 | 7200 | 0.1118 | - |
| 1.1641 | 7300 | 0.1085 | - |
| 1.1800 | 7400 | 0.1155 | - |
| 1.1960 | 7500 | 0.1079 | - |
| 1.2119 | 7600 | 0.1087 | - |
| 1.2279 | 7700 | 0.1004 | - |
| 1.2438 | 7800 | 0.1084 | - |
| 1.2598 | 7900 | 0.1089 | - |
| 1.2757 | 8000 | 0.1012 | - |
| 1.2917 | 8100 | 0.1037 | - |
| 1.3076 | 8200 | 0.1004 | - |
| 1.3236 | 8300 | 0.0979 | - |
| 1.3395 | 8400 | 0.1007 | - |
| 1.3554 | 8500 | 0.0956 | - |
| 1.3714 | 8600 | 0.0972 | - |
| 1.3873 | 8700 | 0.0947 | - |
| 1.4033 | 8800 | 0.0931 | - |
| 1.4192 | 8900 | 0.0948 | - |
| 1.4352 | 9000 | 0.0925 | - |
| 1.4511 | 9100 | 0.0933 | - |
| 1.4671 | 9200 | 0.0888 | - |
| 1.4830 | 9300 | 0.0877 | - |
| 1.4990 | 9400 | 0.0889 | - |
| 1.5149 | 9500 | 0.0895 | - |
| 1.5309 | 9600 | 0.0892 | - |
| 1.5468 | 9700 | 0.089 | - |
| 1.5627 | 9800 | 0.0828 | - |
| 1.5787 | 9900 | 0.0906 | - |
| 1.5946 | 10000 | 0.0893 | - |
| 1.6106 | 10100 | 0.0849 | - |
| 1.6265 | 10200 | 0.0811 | - |
| 1.6425 | 10300 | 0.0823 | - |
| 1.6584 | 10400 | 0.0806 | - |
| 1.6744 | 10500 | 0.0815 | - |
| 1.6903 | 10600 | 0.0832 | - |
| 1.7063 | 10700 | 0.0856 | - |
| 1.7222 | 10800 | 0.081 | - |
| 1.7382 | 10900 | 0.0831 | - |
| 1.7541 | 11000 | 0.0767 | - |
| 1.7701 | 11100 | 0.0779 | - |
| 1.7860 | 11200 | 0.0792 | - |
| 1.8019 | 11300 | 0.0771 | - |
| 1.8179 | 11400 | 0.0783 | - |
| 1.8338 | 11500 | 0.0749 | - |
| 1.8498 | 11600 | 0.0755 | - |
| 1.8657 | 11700 | 0.0778 | - |
| 1.8817 | 11800 | 0.0753 | - |
| 1.8976 | 11900 | 0.0767 | - |
| 1.9136 | 12000 | 0.0725 | - |
| 1.9295 | 12100 | 0.0744 | - |
| 1.9455 | 12200 | 0.0743 | - |
| 1.9614 | 12300 | 0.0722 | - |
| 1.9774 | 12400 | 0.0712 | - |
| 1.9933 | 12500 | 0.0709 | - |
| 2.0092 | 12600 | 0.0694 | - |
| 2.0252 | 12700 | 0.0705 | - |
| 2.0411 | 12800 | 0.0715 | - |
| 2.0571 | 12900 | 0.0705 | - |
| 2.0730 | 13000 | 0.0653 | - |
| 2.0890 | 13100 | 0.0698 | - |
| 2.1049 | 13200 | 0.0676 | - |
| 2.1209 | 13300 | 0.0684 | - |
| 2.1368 | 13400 | 0.0644 | - |
| 2.1528 | 13500 | 0.0652 | - |
| 2.1687 | 13600 | 0.0673 | - |
| 2.1847 | 13700 | 0.067 | - |
| 2.2006 | 13800 | 0.0645 | - |
| 2.2166 | 13900 | 0.0633 | - |
| 2.2325 | 14000 | 0.0645 | - |
| 2.2484 | 14100 | 0.0698 | - |
| 2.2644 | 14200 | 0.0655 | - |
| 2.2803 | 14300 | 0.0654 | - |
| 2.2963 | 14400 | 0.0656 | - |
| 2.3122 | 14500 | 0.0631 | - |
| 2.3282 | 14600 | 0.0628 | - |
| 2.3441 | 14700 | 0.0671 | - |
| 2.3601 | 14800 | 0.0659 | - |
| 2.3760 | 14900 | 0.0619 | - |
| 2.3920 | 15000 | 0.0618 | - |
| 2.4079 | 15100 | 0.0624 | - |
| 2.4239 | 15200 | 0.0616 | - |
| 2.4398 | 15300 | 0.0631 | - |
| 2.4557 | 15400 | 0.0639 | - |
| 2.4717 | 15500 | 0.0585 | - |
| 2.4876 | 15600 | 0.0607 | - |
| 2.5036 | 15700 | 0.0615 | - |
| 2.5195 | 15800 | 0.062 | - |
| 2.5355 | 15900 | 0.0621 | - |
| 2.5514 | 16000 | 0.0608 | - |
| 2.5674 | 16100 | 0.0594 | - |
| 2.5833 | 16200 | 0.0631 | - |
| 2.5993 | 16300 | 0.0635 | - |
| 2.6152 | 16400 | 0.06 | - |
| 2.6312 | 16500 | 0.0581 | - |
| 2.6471 | 16600 | 0.0607 | - |
| 2.6631 | 16700 | 0.0577 | - |
| 2.6790 | 16800 | 0.0592 | - |
| 2.6949 | 16900 | 0.0625 | - |
| 2.7109 | 17000 | 0.0622 | - |
| 2.7268 | 17100 | 0.0573 | - |
| 2.7428 | 17200 | 0.0613 | - |
| 2.7587 | 17300 | 0.0587 | - |
| 2.7747 | 17400 | 0.0587 | - |
| 2.7906 | 17500 | 0.0588 | - |
| 2.8066 | 17600 | 0.0568 | - |
| 2.8225 | 17700 | 0.0573 | - |
| 2.8385 | 17800 | 0.0575 | - |
| 2.8544 | 17900 | 0.0575 | - |
| 2.8704 | 18000 | 0.0582 | - |
| 2.8863 | 18100 | 0.0577 | - |
| 2.9022 | 18200 | 0.057 | - |
| 2.9182 | 18300 | 0.0572 | - |
| 2.9341 | 18400 | 0.0558 | - |
| 2.9501 | 18500 | 0.0578 | - |
| 2.9660 | 18600 | 0.0567 | - |
| 2.9820 | 18700 | 0.0569 | - |
| 2.9979 | 18800 | 0.0547 | - |
| 3.0139 | 18900 | 0.0542 | - |
| 3.0298 | 19000 | 0.0563 | - |
| 3.0458 | 19100 | 0.0549 | - |
| 3.0617 | 19200 | 0.0531 | - |
| 3.0777 | 19300 | 0.053 | - |
| 3.0936 | 19400 | 0.0557 | - |
| 3.1096 | 19500 | 0.0546 | - |
| 3.1255 | 19600 | 0.0518 | - |
| 3.1414 | 19700 | 0.0517 | - |
| 3.1574 | 19800 | 0.0528 | - |
| 3.1733 | 19900 | 0.0551 | - |
| 3.1893 | 20000 | 0.0544 | - |
| 3.2052 | 20100 | 0.0526 | - |
| 3.2212 | 20200 | 0.0494 | - |
| 3.2371 | 20300 | 0.0537 | - |
| 3.2531 | 20400 | 0.0568 | - |
| 3.2690 | 20500 | 0.0525 | - |
| 3.2850 | 20600 | 0.0566 | - |
| 3.3009 | 20700 | 0.0539 | - |
| 3.3169 | 20800 | 0.0531 | - |
| 3.3328 | 20900 | 0.0524 | - |
| 3.3487 | 21000 | 0.0543 | - |
| 3.3647 | 21100 | 0.0537 | - |
| 3.3806 | 21200 | 0.0524 | - |
| 3.3966 | 21300 | 0.0516 | - |
| 3.4125 | 21400 | 0.0537 | - |
| 3.4285 | 21500 | 0.0515 | - |
| 3.4444 | 21600 | 0.0537 | - |
| 3.4604 | 21700 | 0.0526 | - |
| 3.4763 | 21800 | 0.0508 | - |
| 3.4923 | 21900 | 0.0526 | - |
| 3.5082 | 22000 | 0.0521 | - |
| 3.5242 | 22100 | 0.054 | - |
| 3.5401 | 22200 | 0.053 | - |
| 3.5561 | 22300 | 0.0509 | - |
| 3.5720 | 22400 | 0.0526 | - |
| 3.5879 | 22500 | 0.0551 | - |
| 3.6039 | 22600 | 0.0556 | - |
| 3.6198 | 22700 | 0.0497 | - |
| 3.6358 | 22800 | 0.0515 | - |
| 3.6517 | 22900 | 0.0514 | - |
| 3.6677 | 23000 | 0.0503 | - |
| 3.6836 | 23100 | 0.0515 | - |
| 3.6996 | 23200 | 0.0553 | - |
| 3.7155 | 23300 | 0.0519 | - |
| 3.7315 | 23400 | 0.0549 | - |
| 3.7474 | 23500 | 0.0522 | - |
| 3.7634 | 23600 | 0.0526 | - |
| 3.7793 | 23700 | 0.0525 | - |
| 3.7952 | 23800 | 0.051 | - |
| 3.8112 | 23900 | 0.0509 | - |
| 3.8271 | 24000 | 0.0503 | - |
| 3.8431 | 24100 | 0.0524 | - |
| 3.8590 | 24200 | 0.0526 | - |
| 3.8750 | 24300 | 0.0512 | - |
| 3.8909 | 24400 | 0.0518 | - |
| 3.9069 | 24500 | 0.0521 | - |
| 3.9228 | 24600 | 0.0524 | - |
| 3.9388 | 24700 | 0.051 | - |
| 3.9547 | 24800 | 0.0535 | - |
| 3.9707 | 24900 | 0.0508 | - |
| 3.9866 | 25000 | 0.0514 | - |
| 4.0 | 25084 | - | nan |
</details>
### Framework Versions
- Python: 3.10.9
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.4.1+cu124
- Accelerate: 0.33.0
- Datasets: 2.18.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "TaylorAI/bge-micro", "datasets": [], "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:3210255", "loss:CachedMultipleNegativesRankingLoss"], "widget": [{"source_sentence": "donepezil hydrochloride monohydrate", "sentences": ["Cn1nccc1[C@H]1CC[C@H](O[Si](C)(C)C(C)(C)C)C[C@@H]1OC(=O)c1ccccc1", "COc1cc2c(cc1OC)C(=O)C(CC1CCN(Cc3ccccc3)CC1)C2.Cl.O", "C(=O)(OC)C1=CC=C(C=C1)CC(C)=O"]}, {"source_sentence": "6-Cyclopropylmethoxy-5-(3,3-difluoro-azetidin-1-yl)-pyridine-2-carboxylic acid tert-butyl-(5-methyl-[1,3,4]oxadiazol-2-ylmethyl)-amide", "sentences": ["Cc1nnc(CN(C(=O)c2ccc(N3CC(F)(F)C3)c(OCC3CC3)n2)C(C)(C)C)o1", "COc1cccc(CCCC=C(Br)Br)c1", "CN(C)CCNC(=O)c1ccc2oc(=O)n(Cc3ccc4[nH]c(=O)[nH]c4c3)c2c1"]}, {"source_sentence": "N-(2-chlorophenyl)-6,8-difluoro-N-methyl-4H-thieno[3,2-c]chromene-2-carboxamide", "sentences": ["CN(C(=O)c1cc2c(s1)-c1cc(F)cc(F)c1OC2)c1ccccc1Cl", "ClC(C(=O)OCCOCC1=CC=C(C=C1)F)C", "C(C)OC(\\C=C(/C)\\OC1=C(C(=CC=C1F)OC(C)C)F)=O"]}, {"source_sentence": "6-[2-[(3-chlorophenyl)methyl]-1,3,3a,4,6,6a-hexahydropyrrolo[3,4-c]pyrrol-5-yl]-3-(trifluoromethyl)-[1,2,4]triazolo[4,3-b]pyridazine", "sentences": ["CC(=O)OCCOCn1cc(C)c(=O)[nH]c1=O", "NC1=C(C(=NN1C1=C(C=C(C=C1Cl)C(F)(F)F)Cl)C#N)S(=O)(=O)C", "ClC=1C=C(C=CC1)CN1CC2CN(CC2C1)C=1C=CC=2N(N1)C(=NN2)C(F)(F)F"]}, {"source_sentence": "(±)-cis-2-(4-methoxyphenyl)-3-acetoxy-5-[2-(dimethylamino)ethyl]-8-chloro-2,3-dihydro-1,5-benzothiazepin-4(5H)-one hydrochloride", "sentences": ["N(=[N+]=[N-])C(C(=O)C1=NC(=C(C(=N1)C(C)(C)C)O)C(C)(C)C)C", "O[C@@H]1[C@H](O)[C@@H](Oc2nc(N3CCNCC3)nc3ccccc23)C[C@H]1O", "Cl.COC1=CC=C(C=C1)[C@@H]1SC2=C(N(C([C@@H]1OC(C)=O)=O)CCN(C)C)C=CC(=C2)Cl"]}], "model-index": [{"name": "MPNet base trained on AllNLI triplets", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "bge micro test", "type": "bge-micro-test"}, "metrics": [{"type": "pearson_cosine", "value": NaN, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": NaN, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": NaN, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": NaN, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": NaN, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": NaN, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": NaN, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": NaN, "name": "Spearman Dot"}, {"type": "pearson_max", "value": NaN, "name": "Pearson Max"}, {"type": "spearman_max", "value": NaN, "name": "Spearman Max"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | 43,557 |
twidfeel/distilbert-base-uncased-distilled-clinc
|
twidfeel
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-20T00:15:42Z |
2023-06-20T00:25:44+00:00
| 14 | 0 |
---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.9470967741935484
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2389
- Accuracy: 0.9471
## Model description
More information needed
## Intended uses & limitations
More information needed
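Although this section was left open by the authors, inference with the standard `transformers` text-classification pipeline should work as a sketch; the example utterance and the printed label/score are illustrative only:
```python
from transformers import pipeline

# Intent classification over the clinc_oos label set (illustrative, not an official example).
classifier = pipeline(
    "text-classification",
    model="twidfeel/distilbert-base-uncased-distilled-clinc",
)

print(classifier("Please transfer 100 dollars from checking to savings."))
# e.g. [{'label': 'transfer', 'score': 0.98}]  <- hypothetical output
```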
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9829 | 1.0 | 318 | 1.3786 | 0.7284 |
| 1.0665 | 2.0 | 636 | 0.6878 | 0.8642 |
| 0.5642 | 3.0 | 954 | 0.4058 | 0.9126 |
| 0.3514 | 4.0 | 1272 | 0.3042 | 0.9339 |
| 0.2656 | 5.0 | 1590 | 0.2701 | 0.94 |
| 0.2305 | 6.0 | 1908 | 0.2532 | 0.9442 |
| 0.2131 | 7.0 | 2226 | 0.2462 | 0.9458 |
| 0.2031 | 8.0 | 2544 | 0.2409 | 0.9471 |
| 0.1975 | 9.0 | 2862 | 0.2401 | 0.9461 |
| 0.1953 | 10.0 | 3180 | 0.2389 | 0.9471 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2389
- Accuracy: 0.9471
## Model description
More information needed
## Intended uses & limitations
More information needed
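Although this section was left open by the authors, inference with the standard `transformers` text-classification pipeline should work as a sketch; the example utterance and the printed label/score are illustrative only:
```python
from transformers import pipeline

# Intent classification over the clinc_oos label set (illustrative, not an official example).
classifier = pipeline(
    "text-classification",
    model="twidfeel/distilbert-base-uncased-distilled-clinc",
)

print(classifier("Please transfer 100 dollars from checking to savings."))
# e.g. [{'label': 'transfer', 'score': 0.98}]  <- hypothetical output
```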
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9829 | 1.0 | 318 | 1.3786 | 0.7284 |
| 1.0665 | 2.0 | 636 | 0.6878 | 0.8642 |
| 0.5642 | 3.0 | 954 | 0.4058 | 0.9126 |
| 0.3514 | 4.0 | 1272 | 0.3042 | 0.9339 |
| 0.2656 | 5.0 | 1590 | 0.2701 | 0.94 |
| 0.2305 | 6.0 | 1908 | 0.2532 | 0.9442 |
| 0.2131 | 7.0 | 2226 | 0.2462 | 0.9458 |
| 0.2031 | 8.0 | 2544 | 0.2409 | 0.9471 |
| 0.1975 | 9.0 | 2862 | 0.2401 | 0.9461 |
| 0.1953 | 10.0 | 3180 | 0.2389 | 0.9471 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9470967741935484, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,558 |
Luismatoslm/summarization_model_save
|
Luismatoslm
|
text2text-generation
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-09T09:56:12Z |
2024-01-10T10:54:57+00:00
| 3 | 0 |
---
base_model: t5-small
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Luismatoslm/summarization_model_save
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Luismatoslm/summarization_model_save
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.6371
- Validation Loss: 2.9379
- Train Rouge1: 0.133
- Train Rouge2: 0.0436
- Train Rougel: 0.112
- Train Rougelsum: 0.1118
- Train Gen Len: 19.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
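Though not documented above, the model can in principle be used with the summarization pipeline; because the repository tags list only TensorFlow weights, `framework="tf"` is assumed here, and the input text is a placeholder:
```python
from transformers import pipeline

# Sketch only: loads the TensorFlow checkpoint of this fine-tuned t5-small summarizer.
summarizer = pipeline(
    "summarization",
    model="Luismatoslm/summarization_model_save",
    framework="tf",
)

article = "Replace this placeholder with the long document you want to summarize..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```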
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 3.6371 | 2.9379 | 0.133 | 0.0436 | 0.112 | 0.1118 | 19.0 | 0 |
### Framework versions
- Transformers 4.36.2
- TensorFlow 2.11.0
- Datasets 2.16.1
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Luismatoslm/summarization_model_save
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.6371
- Validation Loss: 2.9379
- Train Rouge1: 0.133
- Train Rouge2: 0.0436
- Train Rougel: 0.112
- Train Rougelsum: 0.1118
- Train Gen Len: 19.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
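Though not documented above, the model can in principle be used with the summarization pipeline; because the repository tags list only TensorFlow weights, `framework="tf"` is assumed here, and the input text is a placeholder:
```python
from transformers import pipeline

# Sketch only: loads the TensorFlow checkpoint of this fine-tuned t5-small summarizer.
summarizer = pipeline(
    "summarization",
    model="Luismatoslm/summarization_model_save",
    framework="tf",
)

article = "Replace this placeholder with the long document you want to summarize..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```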
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 3.6371 | 2.9379 | 0.133 | 0.0436 | 0.112 | 0.1118 | 19.0 | 0 |
### Framework versions
- Transformers 4.36.2
- TensorFlow 2.11.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
{"base_model": "t5-small", "license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Luismatoslm/summarization_model_save", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 43,559 |
AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon
|
AdamCodd
|
text-classification
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"distilbert",
"text-classification",
"dataset:amazon_polarity",
"base_model:distilbert/distilbert-base-uncased",
"base_model:quantized:distilbert/distilbert-base-uncased",
"doi:10.57967/hf/4238",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-06T23:02:11Z |
2023-11-10T17:35:29+00:00
| 136 | 7 |
---
base_model: distilbert-base-uncased
datasets:
- amazon_polarity
license: apache-2.0
model-index:
- name: distilbert-base-uncased-finetuned-sentiment-amazon
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: amazon_polarity
type: sentiment
args: default
metrics:
- type: accuracy
value: 0.961
name: Accuracy
- type: loss
value: 0.116
name: Loss
- type: f1
value: 0.96
name: F1
- task:
type: text-classification
name: Text Classification
dataset:
name: amazon_polarity
type: amazon_polarity
config: amazon_polarity
split: test
metrics:
- type: accuracy
value: 0.94112
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzlmMzdhYjNmN2U0NDBkM2U5ZDgwNzc3YjE1OGE4MWUxMDY1N2U0ODc0YzllODE5ODIyMzdkOWFhNzVjYmI5MyIsInZlcnNpb24iOjF9.3nlcLa4IpPQtklp7_U9XzC__Q_JVf_cWs6JVVII8trhX5zg_q9HEyQOQs4sRf6O-lIJg8zb3mgobZDJShuSJAQ
- type: precision
value: 0.9321570625232675
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjI2MDY4NGNlYjhjMGMxODBiNTc2ZjM5YzY1NjkxNTU4MDA2ZDIyY2QyZjUyZmE4YWY0N2Y1ODU5YTc2ZDM0NiIsInZlcnNpb24iOjF9.egEikTa2UyHV6SAGkHJKaa8FRwGHoZmJRCmqUQaJqeF5yxkz2V-WeCHoWDrCXsHCbXEs8UhLlyo7Lr83BPfkBg
- type: recall
value: 0.95149
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2E3M2Y3MDU4ZTM2YjdlZjQ0NTY3NGYwMmQ3NTk5ZmZkZWUwZWZiZDZjNjk2ZWE5MmY4MmZiM2FmN2U2M2QyNCIsInZlcnNpb24iOjF9.4VNbiWRmSee4cxuIZ5m7bN30i4BpK7xtHQ1BF8AuFIXkWQgzOmGdX35bLhLGWW8KL3ClA4RDPVBKYCIrw0YUBw
- type: auc
value: 0.9849019044624999
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTkwODk2ZTUwOTViNjBhYTU0ODk1MDA3MDY1NDkyZDc2YmRlNTQzNDE3YmE3YTVkYjNhN2JmMDAxZWQ0NjUxZSIsInZlcnNpb24iOjF9.YEr6OhqOL7QnqYqjUTQFMdkgU_uS1-vVnkJtn_-1UwSoX754UV_bL9S9KSH3DX4m5QFoRXdZxfeOocm1JbzaCA
- type: f1
value: 0.9417243188138998
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzIyMmViNTQ3ZGU0M2I5ZmRjOGI1OWMwZGEwYmE5OGU5YTZlZTkzZjdkOTQ4YzJmOTc2MDliMDY4NDQ1NGRlNyIsInZlcnNpb24iOjF9.p05MGHTfHTAzp4u-qfiIn6Zmh5c3TW_uwjXWgbb982pL_oCILQb6jFXqhPpWXL321fPye7qaUVbGhcTJd8sdCA
- type: loss
value: 0.16342754662036896
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzgxMDc4M2IxYjhkNjRhZmYyNzY1MTNkNzhmYjk2NmU1NjFiOTk1NDIzNzI1ZGU3MDYyYjQ2YmQ1NTI2N2NhMyIsInZlcnNpb24iOjF9.Zuf0nzn8XdvwRChKtE9CwJ0pgpc6Zey6oTR3jRiSkvNY2sNbo2bvAgFimGzgGYkDvRvYkTCXzCyxdb27l3QnAg
---
# distilbert-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a subset of the [amazon-polarity dataset](https://huggingface.co/datasets/amazon_polarity).
<b>[Update 10/10/23]</b> The model has been retrained on a larger part of the dataset, with improvements in loss, F1 score, and accuracy. It achieves the following results on the evaluation set:
- Loss: 0.116
- Accuracy: 0.961
- F1_score: 0.960
## Model description
This sentiment classifier was trained on 360_000 training samples, with 40_000 samples held out for validation and 40_000 for testing.
## Intended uses & limitations
```python
from transformers import pipeline
# Create the pipeline
sentiment_classifier = pipeline('text-classification', model='AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon')
# Now you can use the pipeline to get the sentiment
result = sentiment_classifier("This product doesn't fit me at all.")
print(result)
#[{'label': 'negative', 'score': 0.9994848966598511}]
```
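The repo tags also list ONNX weights; below is a hedged sketch of running the classifier through ONNX Runtime via `optimum`. The package install and the `export=True` fallback are assumptions, not part of the original card.
```python
# Hedged sketch: ONNX Runtime inference via optimum (pip install optimum[onnxruntime]).
# export=True converts from the PyTorch weights if a ready-made ONNX file is not picked up automatically.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
model_id = "AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon"
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
onnx_classifier = pipeline("text-classification", model=ort_model, tokenizer=tokenizer)
print(onnx_classifier("Arrived quickly and works exactly as described."))
```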
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1270
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 2
- weight_decay: 0.01
### Training results
(Previous results from the model evaluator, obtained before retraining)
| key | value |
| --- | ----- |
| eval_accuracy | 0.94112 |
| eval_auc | 0.9849 |
| eval_f1_score | 0.9417 |
| eval_precision | 0.9321 |
| eval_recall | 0.95149 |
### Framework versions
- Transformers 4.34.0
- Pytorch lightning 2.0.9
- Tokenizers 0.14.0
If you want to support me, you can do so [here](https://ko-fi.com/adamcodd).
| null |
Non_BioNLP
|
# distilbert-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a subset of the [amazon-polarity dataset](https://huggingface.co/datasets/amazon_polarity).
<b>[Update 10/10/23]</b> The model has been retrained on a larger part of the dataset, with improvements in loss, F1 score, and accuracy. It achieves the following results on the evaluation set:
- Loss: 0.116
- Accuracy: 0.961
- F1_score: 0.960
## Model description
This sentiment classifier was trained on 360_000 training samples, with 40_000 samples held out for validation and 40_000 for testing.
## Intended uses & limitations
```python
from transformers import pipeline
# Create the pipeline
sentiment_classifier = pipeline('text-classification', model='AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon')
# Now you can use the pipeline to get the sentiment
result = sentiment_classifier("This product doesn't fit me at all.")
print(result)
#[{'label': 'negative', 'score': 0.9994848966598511}]
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1270
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 2
- weight_decay: 0.01
### Training results
(Previous results from the model evaluator, obtained before retraining)
| key | value |
| --- | ----- |
| eval_accuracy | 0.94112 |
| eval_auc | 0.9849 |
| eval_f1_score | 0.9417 |
| eval_precision | 0.9321 |
| eval_recall | 0.95149 |
### Framework versions
- Transformers 4.34.0
- Pytorch lightning 2.0.9
- Tokenizers 0.14.0
If you want to support me, you can do so [here](https://ko-fi.com/adamcodd).
|
{"base_model": "distilbert-base-uncased", "datasets": ["amazon_polarity"], "license": "apache-2.0", "model-index": [{"name": "distilbert-base-uncased-finetuned-sentiment-amazon", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_polarity", "type": "sentiment", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.961, "name": "Accuracy"}, {"type": "loss", "value": 0.116, "name": "Loss"}, {"type": "f1", "value": 0.96, "name": "F1"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_polarity", "type": "amazon_polarity", "config": "amazon_polarity", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.94112, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzlmMzdhYjNmN2U0NDBkM2U5ZDgwNzc3YjE1OGE4MWUxMDY1N2U0ODc0YzllODE5ODIyMzdkOWFhNzVjYmI5MyIsInZlcnNpb24iOjF9.3nlcLa4IpPQtklp7_U9XzC__Q_JVf_cWs6JVVII8trhX5zg_q9HEyQOQs4sRf6O-lIJg8zb3mgobZDJShuSJAQ"}, {"type": "precision", "value": 0.9321570625232675, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjI2MDY4NGNlYjhjMGMxODBiNTc2ZjM5YzY1NjkxNTU4MDA2ZDIyY2QyZjUyZmE4YWY0N2Y1ODU5YTc2ZDM0NiIsInZlcnNpb24iOjF9.egEikTa2UyHV6SAGkHJKaa8FRwGHoZmJRCmqUQaJqeF5yxkz2V-WeCHoWDrCXsHCbXEs8UhLlyo7Lr83BPfkBg"}, {"type": "recall", "value": 0.95149, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2E3M2Y3MDU4ZTM2YjdlZjQ0NTY3NGYwMmQ3NTk5ZmZkZWUwZWZiZDZjNjk2ZWE5MmY4MmZiM2FmN2U2M2QyNCIsInZlcnNpb24iOjF9.4VNbiWRmSee4cxuIZ5m7bN30i4BpK7xtHQ1BF8AuFIXkWQgzOmGdX35bLhLGWW8KL3ClA4RDPVBKYCIrw0YUBw"}, {"type": "auc", "value": 0.9849019044624999, "name": "AUC", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTkwODk2ZTUwOTViNjBhYTU0ODk1MDA3MDY1NDkyZDc2YmRlNTQzNDE3YmE3YTVkYjNhN2JmMDAxZWQ0NjUxZSIsInZlcnNpb24iOjF9.YEr6OhqOL7QnqYqjUTQFMdkgU_uS1-vVnkJtn_-1UwSoX754UV_bL9S9KSH3DX4m5QFoRXdZxfeOocm1JbzaCA"}, {"type": "f1", "value": 0.9417243188138998, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzIyMmViNTQ3ZGU0M2I5ZmRjOGI1OWMwZGEwYmE5OGU5YTZlZTkzZjdkOTQ4YzJmOTc2MDliMDY4NDQ1NGRlNyIsInZlcnNpb24iOjF9.p05MGHTfHTAzp4u-qfiIn6Zmh5c3TW_uwjXWgbb982pL_oCILQb6jFXqhPpWXL321fPye7qaUVbGhcTJd8sdCA"}, {"type": "loss", "value": 0.16342754662036896, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzgxMDc4M2IxYjhkNjRhZmYyNzY1MTNkNzhmYjk2NmU1NjFiOTk1NDIzNzI1ZGU3MDYyYjQ2YmQ1NTI2N2NhMyIsInZlcnNpb24iOjF9.Zuf0nzn8XdvwRChKtE9CwJ0pgpc6Zey6oTR3jRiSkvNY2sNbo2bvAgFimGzgGYkDvRvYkTCXzCyxdb27l3QnAg"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,560 |
gokulsrinivasagan/distilbert_lda_5_v1_book_cola
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/distilbert_lda_5_v1_book",
"base_model:finetune:gokulsrinivasagan/distilbert_lda_5_v1_book",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-09T15:43:54Z |
2024-12-09T15:45:24+00:00
| 4 | 0 |
---
base_model: gokulsrinivasagan/distilbert_lda_5_v1_book
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- matthews_correlation
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert_lda_5_v1_book_cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.4068166329179125
name: Matthews Correlation
- type: accuracy
value: 0.7660594582557678
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_5_v1_book_cola
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_5_v1_book](https://huggingface.co/gokulsrinivasagan/distilbert_lda_5_v1_book) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5058
- Matthews Correlation: 0.4068
- Accuracy: 0.7661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the TrainingArguments sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
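A minimal sketch of how the hyperparameters above could map onto `TrainingArguments`; the output directory and evaluation cadence are assumptions, and the run stopping at epoch 7 of 50 suggests early stopping, which is not shown here.
```python
# Hedged sketch only — not the original training script.
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir="distilbert_lda_5_v1_book_cola", # assumed
learning_rate=5e-05,
per_device_train_batch_size=256,
per_device_eval_batch_size=256,
seed=10,
optim="adamw_torch",
lr_scheduler_type="linear",
num_train_epochs=50,
eval_strategy="epoch", # assumed: the results table reports one evaluation per epoch
)
```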
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.5951 | 1.0 | 34 | 0.5446 | 0.2774 | 0.7277 |
| 0.4788 | 2.0 | 68 | 0.5058 | 0.4068 | 0.7661 |
| 0.3438 | 3.0 | 102 | 0.5465 | 0.4232 | 0.7728 |
| 0.2469 | 4.0 | 136 | 0.5831 | 0.4440 | 0.7776 |
| 0.171 | 5.0 | 170 | 0.6744 | 0.4526 | 0.7795 |
| 0.1344 | 6.0 | 204 | 0.7193 | 0.4712 | 0.7862 |
| 0.1026 | 7.0 | 238 | 0.8284 | 0.4791 | 0.7900 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.2.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_5_v1_book_cola
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_5_v1_book](https://huggingface.co/gokulsrinivasagan/distilbert_lda_5_v1_book) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5058
- Matthews Correlation: 0.4068
- Accuracy: 0.7661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.5951 | 1.0 | 34 | 0.5446 | 0.2774 | 0.7277 |
| 0.4788 | 2.0 | 68 | 0.5058 | 0.4068 | 0.7661 |
| 0.3438 | 3.0 | 102 | 0.5465 | 0.4232 | 0.7728 |
| 0.2469 | 4.0 | 136 | 0.5831 | 0.4440 | 0.7776 |
| 0.171 | 5.0 | 170 | 0.6744 | 0.4526 | 0.7795 |
| 0.1344 | 6.0 | 204 | 0.7193 | 0.4712 | 0.7862 |
| 0.1026 | 7.0 | 238 | 0.8284 | 0.4791 | 0.7900 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.2.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
|
{"base_model": "gokulsrinivasagan/distilbert_lda_5_v1_book", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["matthews_correlation", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_lda_5_v1_book_cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.4068166329179125, "name": "Matthews Correlation"}, {"type": "accuracy", "value": 0.7660594582557678, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,561 |
iaross/cm_bert
|
iaross
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-12T19:52:42Z |
2024-07-12T19:52:56+00:00
| 10 | 0 |
---
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
widget: []
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("iaross/cm_bert")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
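The "Direct Usage (Transformers)" section above is left empty by the card generator; the following is a hedged sketch of that path, using mean pooling over non-padding tokens to mirror the Pooling configuration shown earlier.
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("iaross/cm_bert")
model = AutoModel.from_pretrained("iaross/cm_bert")
sentences = ["The weather is lovely today.", "It's so sunny outside!"]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state          # (batch, seq, 768)
mask = encoded["attention_mask"].unsqueeze(-1).float()              # ignore padding tokens
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1) # mean pooling
print(embeddings.shape)  # torch.Size([2, 768])
```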
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate:
- Datasets:
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("iaross/cm_bert")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate:
- Datasets:
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction"], "widget": []}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,562 |
sphyuung/marian-finetuned-kde4-en-to-fr
|
sphyuung
|
translation
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-16T19:45:14Z |
2024-11-26T00:21:22+00:00
| 11 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-fr
datasets:
- kde4
library_name: transformers
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
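No usage example is included in the card; below is a minimal sketch with the standard `transformers` translation pipeline. The input sentence is illustrative only and the output is not verified here.
```python
from transformers import pipeline
translator = pipeline("translation", model="sphyuung/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
# e.g. [{'translation_text': '...'}]
```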
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
{"base_model": "Helsinki-NLP/opus-mt-en-fr", "datasets": ["kde4"], "library_name": "transformers", "license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 43,563 |
SEBIS/code_trans_t5_large_code_documentation_generation_ruby_transfer_learning_finetune
|
SEBIS
|
summarization
|
[
"transformers",
"pytorch",
"jax",
"t5",
"feature-extraction",
"summarization",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-06-23T08:03:43+00:00
| 15 | 0 |
---
tags:
- summarization
widget:
- text: def add ( severity , progname , & block ) return true if io . nil? || severity
< level message = format_message ( severity , progname , yield ) MUTEX . synchronize
{ io . write ( message ) } true end
---
# CodeTrans model for code documentation generation ruby
Pretrained model on programming language ruby using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the ruby function/method.
## Intended uses & limitations
The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/ruby/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), with only the dataset containing ruby code.
## Evaluation results
For the code documentation tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
| null |
Non_BioNLP
|
# CodeTrans model for code documentation generation ruby
Pretrained model on programming language ruby using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the ruby function/method.
## Intended uses & limitations
The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/ruby/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), with only the dataset containing ruby code.
## Evaluation results
For the code documentation tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
{"tags": ["summarization"], "widget": [{"text": "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"}]}
|
task
|
[
"SUMMARIZATION"
] | 43,564 |
msclar/referee-control_iter-5
|
msclar
|
text-generation
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-10-20T22:15:34Z |
2023-03-16T04:17:28+00:00
| 44 | 0 |
---
{}
---
# [Refer]ence-fr[ee] sentence summarization
See the [GitHub repo](https://github.com/msclar/referee) for all details. **DO NOT USE THE HOSTED INFERENCE API**. Instead, use the appropriate `src/generated_summaries_*.py` script, which specifies the expected delimiters and decoding parameters.
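For reference only, a hedged loading sketch — the prompt delimiters and decoding parameters must still come from the repo scripts and are deliberately not reproduced here.
```python
# Loading only; prompt formatting and generate(...) settings should follow
# the src/generated_summaries_*.py scripts in the Referee repo.
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("msclar/referee-control_iter-5")
model = AutoModelForCausalLM.from_pretrained("msclar/referee-control_iter-5")
```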
## Paper citation
If you used this model for your experiments or found it helpful, consider citing the following paper:
```
@inproceedings{sclar-etal-2022-referee,
title = "Referee: Reference-Free Sentence Summarization with Sharper Controllability through Symbolic Knowledge Distillation",
author = "Sclar, Melanie and
West, Peter and
Kumar, Sachin and
Tsvetkov, Yulia and
Choi, Yejin",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.655",
pages = "9649--9668",
abstract = "We present Referee, a novel framework for sentence summarization that can be trained reference-free (i.e., requiring no gold summaries for supervision), while allowing direct control for compression ratio. Our work is the first to demonstrate that reference-free, controlled sentence summarization is feasible via the conceptual framework of Symbolic Knowledge Distillation (West et al., 2022), where latent knowledge in pre-trained language models is distilled via explicit examples sampled from the teacher models, further purified with three types of filters: length, fidelity, and Information Bottleneck. Moreover, we uniquely propose iterative distillation of knowledge, where student models from the previous iteration of distillation serve as teacher models in the next iteration. Starting off from a relatively modest set of GPT3-generated summaries, we demonstrate how iterative knowledge distillation can lead to considerably smaller, but better summarizers with sharper controllability. A useful by-product of this iterative distillation process is a high-quality dataset of sentence-summary pairs with varying degrees of compression ratios. Empirical results demonstrate that the final student models vastly outperform the much larger GPT3-Instruct model in terms of the controllability of compression ratios, without compromising the quality of resulting summarization.",
}
```
| null |
Non_BioNLP
|
# [Refer]ence-fr[ee] sentence summarization
See the [GitHub repo](https://github.com/msclar/referee) for all details. **DO NOT USE THE HOSTED INFERENCE API**. Instead, use the appropriate `src/generated_summaries_*.py` script, which specifies the expected delimiters and decoding parameters.
## Paper citation
If you used this model for your experiments or found it helpful, consider citing the following paper:
```
@inproceedings{sclar-etal-2022-referee,
title = "Referee: Reference-Free Sentence Summarization with Sharper Controllability through Symbolic Knowledge Distillation",
author = "Sclar, Melanie and
West, Peter and
Kumar, Sachin and
Tsvetkov, Yulia and
Choi, Yejin",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.655",
pages = "9649--9668",
abstract = "We present Referee, a novel framework for sentence summarization that can be trained reference-free (i.e., requiring no gold summaries for supervision), while allowing direct control for compression ratio. Our work is the first to demonstrate that reference-free, controlled sentence summarization is feasible via the conceptual framework of Symbolic Knowledge Distillation (West et al., 2022), where latent knowledge in pre-trained language models is distilled via explicit examples sampled from the teacher models, further purified with three types of filters: length, fidelity, and Information Bottleneck. Moreover, we uniquely propose iterative distillation of knowledge, where student models from the previous iteration of distillation serve as teacher models in the next iteration. Starting off from a relatively modest set of GPT3-generated summaries, we demonstrate how iterative knowledge distillation can lead to considerably smaller, but better summarizers with sharper controllability. A useful by-product of this iterative distillation process is a high-quality dataset of sentence-summary pairs with varying degrees of compression ratios. Empirical results demonstrate that the final student models vastly outperform the much larger GPT3-Instruct model in terms of the controllability of compression ratios, without compromising the quality of resulting summarization.",
}
```
|
{}
|
task
|
[
"SUMMARIZATION"
] | 43,565 |
mradermacher/levantine-translation-qwen2.5-1.5b-GGUF
|
mradermacher
| null |
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:Raniahossam33/levantine-translation-qwen2.5-1.5b",
"base_model:quantized:Raniahossam33/levantine-translation-qwen2.5-1.5b",
"endpoints_compatible",
"region:us",
"conversational"
] | 2025-01-12T13:31:44Z |
2025-01-12T14:28:03+00:00
| 86 | 0 |
---
base_model: Raniahossam33/levantine-translation-qwen2.5-1.5b
language:
- en
library_name: transformers
model_name: levantine-translation-qwen2.5-1.5b
tags:
- generated_from_trainer
- trl
- sft
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Raniahossam33/levantine-translation-qwen2.5-1.5b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
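As one hedged option not covered in the card itself, GGUF quants can be run locally with `llama-cpp-python`. The quant file picked below is the "fast, recommended" Q4_K_M from the table; the chat-style call is an assumption about how the underlying Qwen2.5 fine-tune expects to be prompted.
```python
# Hedged sketch: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama
llm = Llama.from_pretrained(
    repo_id="mradermacher/levantine-translation-qwen2.5-1.5b-GGUF",
    filename="levantine-translation-qwen2.5-1.5b.Q4_K_M.gguf",  # "fast, recommended" row below
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate to Levantine Arabic: Good morning, how are you?"}]
)
print(out["choices"][0]["message"]["content"])
```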
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| null |
Non_BioNLP
|
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Raniahossam33/levantine-translation-qwen2.5-1.5b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/levantine-translation-qwen2.5-1.5b-GGUF/resolve/main/levantine-translation-qwen2.5-1.5b.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"base_model": "Raniahossam33/levantine-translation-qwen2.5-1.5b", "language": ["en"], "library_name": "transformers", "model_name": "levantine-translation-qwen2.5-1.5b", "tags": ["generated_from_trainer", "trl", "sft"], "quantized_by": "mradermacher"}
|
task
|
[
"TRANSLATION"
] | 43,566 |
taegyun/distilbert-base-uncased-finetuned-emotion
|
taegyun
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-05-01T22:53:50Z |
2023-05-01T23:09:25+00:00
| 8 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.922
name: Accuracy
- type: f1
value: 0.9221186592426542
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2225
- Accuracy: 0.922
- F1: 0.9221
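The card does not include an inference example; here is a minimal sketch. Label names depend on how the exported config maps ids to the six emotion classes.
```python
from transformers import pipeline
classifier = pipeline(
    "text-classification",
    model="taegyun/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this turned out!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] — 'joy' if id2label was set during training
```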
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3273 | 0.9025 | 0.8984 |
| No log | 2.0 | 500 | 0.2225 | 0.922 | 0.9221 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.11.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2225
- Accuracy: 0.922
- F1: 0.9221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3273 | 0.9025 | 0.8984 |
| No log | 2.0 | 500 | 0.2225 | 0.922 | 0.9221 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.11.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.922, "name": "Accuracy"}, {"type": "f1", "value": 0.9221186592426542, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,567 |
songhieng/khmer-mt5-summarization-1024tk-V3
|
songhieng
|
summarization
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"km",
"dataset:kimleang123/fresh-news",
"dataset:kimleang123/khmer-text-dataset",
"dataset:kimleang123/rfi_news",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-16T08:31:46Z |
2025-02-21T14:44:09+00:00
| 67 | 1 |
---
base_model:
- google/mt5-small
datasets:
- kimleang123/fresh-news
- kimleang123/khmer-text-dataset
- kimleang123/rfi_news
language:
- km
library_name: transformers
license: apache-2.0
metrics:
- rouge
pipeline_tag: summarization
new_version: songhieng/khmer-mt5-summarization-1024tk-V2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
# Khmer mT5 Summarization Model (1024 Tokens) - V2
## Introduction
This repository contains an improved version of the Khmer mT5 summarization model, **songhieng/khmer-mt5-summarization-1024tk-V2**. This version has been trained on an expanded dataset, including data from [kimleang123/rfi_news](https://huggingface.co/datasets/kimleang123/rfi_news), allowing for improved summarization performance on Khmer text.
## Model Details
- **Base Model:** `google/mt5-small`
- **Fine-tuned for:** Khmer text summarization with extended input length
- **Training Dataset:** `kimleang123/rfi_news` + previous dataset
- **Framework:** Hugging Face `transformers`
- **Task Type:** Sequence-to-Sequence (Seq2Seq)
- **Input:** Khmer text (articles, paragraphs, or documents) up to 1024 tokens
- **Output:** Summarized Khmer text
- **Training Hardware:** GPU (Tesla T4)
- **Evaluation Metric:** ROUGE Score
## Installation & Setup
### 1️⃣ Install Dependencies
Ensure you have `transformers`, `torch`, and `datasets` installed:
```bash
pip install transformers torch datasets
```
### 2️⃣ Load the Model
To load and use the fine-tuned model:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "songhieng/khmer-mt5-summarization-1024tk-V2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```
## How to Use
### 1️⃣ Using Python Code
```python
def summarize_khmer(text, max_length=150):
input_text = f"summarize: {text}"
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=max_length, num_beams=5, length_penalty=2.0, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
return summary
khmer_text = "កម្ពុជាមានប្រជាជនប្រមាណ ១៦ លាននាក់ ហើយវាគឺជាប្រទេសនៅតំបន់អាស៊ីអាគ្នេយ៍។"
summary = summarize_khmer(khmer_text)
print("Khmer Summary:", summary)
```
### 2️⃣ Using Hugging Face Pipeline
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="songhieng/khmer-mt5-summarization-1024tk-V2")
khmer_text = "កម្ពុជាមានប្រជាជនប្រមាណ ១៦ លាននាក់ ហើយវាគឺជាប្រទេសនៅតំបន់អាស៊ីអាគ្នេយ៍។"
summary = summarizer(khmer_text, max_length=150, min_length=30, do_sample=False)
print("Khmer Summary:", summary[0]['summary_text'])
```
### 3️⃣ Deploy as an API using FastAPI
```python
from fastapi import FastAPI
app = FastAPI()
@app.post("/summarize/")
def summarize(text: str):
inputs = tokenizer(f"summarize: {text}", return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=150, num_beams=5, length_penalty=2.0, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
return {"summary": summary}
# Run with: uvicorn filename:app --reload
```
## Model Evaluation
The model was evaluated using **ROUGE scores**, which measure the similarity between the generated summaries and the reference summaries.
```python
from datasets import load_metric
rouge = load_metric("rouge")
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
decoded_preds = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
decoded_labels = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
return rouge.compute(predictions=decoded_preds, references=decoded_labels)
trainer.evaluate()
```
## Saving & Uploading the Model
After fine-tuning, the model can be uploaded to the Hugging Face Hub:
```python
model.push_to_hub("songhieng/khmer-mt5-summarization-1024tk-V2")
tokenizer.push_to_hub("songhieng/khmer-mt5-summarization-1024tk-V2")
```
To download it later:
```python
model = AutoModelForSeq2SeqLM.from_pretrained("songhieng/khmer-mt5-summarization-1024tk-V2")
tokenizer = AutoTokenizer.from_pretrained("songhieng/khmer-mt5-summarization-1024tk-V2")
```
## Summary
| **Feature** | **Details** |
|-----------------------|-------------------------------------------------|
| **Base Model** | `google/mt5-small` |
| **Task** | Summarization |
| **Language** | Khmer (ខ្មែរ) |
| **Dataset** | `kimleang123/rfi_news` + previous dataset |
| **Framework** | Hugging Face Transformers |
| **Evaluation Metric** | ROUGE Score |
| **Deployment** | Hugging Face Model Hub, API (FastAPI), Python Code |
## Contributing
Contributions are welcome! Feel free to **open issues or submit pull requests** if you have any improvements or suggestions.
### Contact
If you have any questions, feel free to reach out via [Hugging Face Discussions](https://huggingface.co/) or create an issue in the repository.
**Built for the Khmer NLP Community**
| null |
Non_BioNLP
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
# Khmer mT5 Summarization Model (1024 Tokens) - V2
## Introduction
This repository contains an improved version of the Khmer mT5 summarization model, **songhieng/khmer-mt5-summarization-1024tk-V2**. This version has been trained on an expanded dataset, including data from [kimleang123/rfi_news](https://huggingface.co/datasets/kimleang123/rfi_news), allowing for improved summarization performance on Khmer text.
## Model Details
- **Base Model:** `google/mt5-small`
- **Fine-tuned for:** Khmer text summarization with extended input length
- **Training Dataset:** `kimleang123/rfi_news` + previous dataset
- **Framework:** Hugging Face `transformers`
- **Task Type:** Sequence-to-Sequence (Seq2Seq)
- **Input:** Khmer text (articles, paragraphs, or documents) up to 1024 tokens
- **Output:** Summarized Khmer text
- **Training Hardware:** GPU (Tesla T4)
- **Evaluation Metric:** ROUGE Score
## Installation & Setup
### 1️⃣ Install Dependencies
Ensure you have `transformers`, `torch`, and `datasets` installed:
```bash
pip install transformers torch datasets
```
### 2️⃣ Load the Model
To load and use the fine-tuned model:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "songhieng/khmer-mt5-summarization-1024tk-V2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```
## How to Use
### 1️⃣ Using Python Code
```python
def summarize_khmer(text, max_length=150):
input_text = f"summarize: {text}"
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=max_length, num_beams=5, length_penalty=2.0, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
return summary
khmer_text = "កម្ពុជាមានប្រជាជនប្រមាណ ១៦ លាននាក់ ហើយវាគឺជាប្រទេសនៅតំបន់អាស៊ីអាគ្នេយ៍។"
summary = summarize_khmer(khmer_text)
print("Khmer Summary:", summary)
```
### 2️⃣ Using Hugging Face Pipeline
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="songhieng/khmer-mt5-summarization-1024tk-V2")
khmer_text = "កម្ពុជាមានប្រជាជនប្រមាណ ១៦ លាននាក់ ហើយវាគឺជាប្រទេសនៅតំបន់អាស៊ីអាគ្នេយ៍។"
summary = summarizer(khmer_text, max_length=150, min_length=30, do_sample=False)
print("Khmer Summary:", summary[0]['summary_text'])
```
### 3️⃣ Deploy as an API using FastAPI
```python
from fastapi import FastAPI
app = FastAPI()
@app.post("/summarize/")
def summarize(text: str):
inputs = tokenizer(f"summarize: {text}", return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=150, num_beams=5, length_penalty=2.0, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
return {"summary": summary}
# Run with: uvicorn filename:app --reload
```
## Model Evaluation
The model was evaluated using **ROUGE scores**, which measure the similarity between the generated summaries and the reference summaries.
```python
from datasets import load_metric
rouge = load_metric("rouge")
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
decoded_preds = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
decoded_labels = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
return rouge.compute(predictions=decoded_preds, references=decoded_labels)
trainer.evaluate()
```
## Saving & Uploading the Model
After fine-tuning, the model can be uploaded to the Hugging Face Hub:
```python
model.push_to_hub("songhieng/khmer-mt5-summarization-1024tk-V2")
tokenizer.push_to_hub("songhieng/khmer-mt5-summarization-1024tk-V2")
```
To download it later:
```python
model = AutoModelForSeq2SeqLM.from_pretrained("songhieng/khmer-mt5-summarization-1024tk-V2")
tokenizer = AutoTokenizer.from_pretrained("songhieng/khmer-mt5-summarization-1024tk-V2")
```
## Summary
| **Feature** | **Details** |
|-----------------------|-------------------------------------------------|
| **Base Model** | `google/mt5-small` |
| **Task** | Summarization |
| **Language** | Khmer (ខ្មែរ) |
| **Dataset** | `kimleang123/rfi_news` + previous dataset |
| **Framework** | Hugging Face Transformers |
| **Evaluation Metric** | ROUGE Score |
| **Deployment** | Hugging Face Model Hub, API (FastAPI), Python Code |
## Contributing
Contributions are welcome! Feel free to **open issues or submit pull requests** if you have any improvements or suggestions.
### Contact
If you have any questions, feel free to reach out via [Hugging Face Discussions](https://huggingface.co/) or create an issue in the repository.
**Built for the Khmer NLP Community**
|
{"base_model": ["google/mt5-small"], "datasets": ["kimleang123/fresh-news", "kimleang123/khmer-text-dataset", "kimleang123/rfi_news"], "language": ["km"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "pipeline_tag": "summarization", "new_version": "songhieng/khmer-mt5-summarization-1024tk-V2"}
|
task
|
[
"SUMMARIZATION"
] | 43,568 |
haonan-li/bactrian-ne-bloom-7b1-lora
|
haonan-li
| null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | 2023-06-13T13:27:35Z |
2023-06-13T13:27:48+00:00
| 0 | 0 |
---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1,
fine-tuned on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Nepali.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translation API (conducted in April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-ne-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
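For inference, the adapter can be loaded on top of `bigscience/bloom-7b1` with the `peft` library. The snippet below is only a rough sketch, not the official inference script from the Bactrian-X repository; in particular, the Alpaca-style prompt template and the generation settings shown here are assumptions.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigscience/bloom-7b1"
adapter_id = "haonan-li/bactrian-ne-bloom-7b1-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Assumed Alpaca-style prompt ("What is the capital of Nepal?" in Nepali);
# check the Bactrian-X repository for the exact template used in training.
prompt = "### Instruction:\nनेपालको राजधानी कुन हो?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```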
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1,
fine-tuned on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Nepali.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translation API (conducted in April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-ne-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"license": "mit"}
|
task
|
[
"TRANSLATION"
] | 43,569 |
elyadenysova/Emotion_Albert
|
elyadenysova
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-03-07T05:00:21Z |
2024-03-07T05:00:25+00:00
| 6 | 0 |
---
base_model: albert-base-v2
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: Emotion_Albert
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.9295
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Emotion_Albert
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1537
- Accuracy: 0.9295
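As a quick usage sketch (not part of the original training run), the checkpoint can be loaded with the `text-classification` pipeline; note that the returned label strings depend on the id-to-label mapping saved with the model and may appear as generic `LABEL_0`–`LABEL_5` ids for the six emotion classes.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="elyadenysova/Emotion_Albert")
print(classifier("I am so happy to see you again!"))
# e.g. [{'label': 'joy', 'score': 0.99}] if label names were saved, otherwise LABEL_0..LABEL_5
```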
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3094 | 1.0 | 1000 | 0.2669 | 0.9145 |
| 0.1776 | 2.0 | 2000 | 0.2013 | 0.928 |
| 0.1129 | 3.0 | 3000 | 0.1541 | 0.936 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Emotion_Albert
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1537
- Accuracy: 0.9295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3094 | 1.0 | 1000 | 0.2669 | 0.9145 |
| 0.1776 | 2.0 | 2000 | 0.2013 | 0.928 |
| 0.1129 | 3.0 | 3000 | 0.1541 | 0.936 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"base_model": "albert-base-v2", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "Emotion_Albert", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9295, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,570 |
mrapacz/interlinear-en-philta-emb-concat-normalized-ob
|
mrapacz
|
text2text-generation
|
[
"transformers",
"pytorch",
"morph-t5-concat",
"text2text-generation",
"en",
"dataset:mrapacz/greek-interlinear-translations",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-08T12:28:14Z |
2025-02-21T21:31:14+00:00
| 9 | 0 |
---
base_model:
- PhilTa
datasets:
- mrapacz/greek-interlinear-translations
language:
- en
library_name: transformers
license: cc-by-sa-4.0
metrics:
- bleu
---
# Model Card for Ancient Greek to English Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to English, maintaining word-level alignment between source and target texts.
You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation).
## Model Details
### Model Description
- **Developed By:** Maciej Rapacz, AGH University of Kraków
- **Model Type:** MorphT5ConcatForConditionalGeneration
- **Base Model:** PhilTa
- **Tokenizer:** PhilTa
- **Language(s):** Ancient Greek (source) → English (target)
- **License:** CC BY-NC-SA 4.0
- **Tag Set:** OB (Oblubienica)
- **Text Preprocessing:** Normalized
- **Morphological Encoding:** emb-concat
### Model Performance
- **BLEU Score:** 40.76
- **SemScore:** 0.78
### Model Sources
- **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation
- **Paper:** https://aclanthology.org/2025.loreslm-1.11/
## Usage Example
> **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package:
> ```bash
> pip install morpht5
> ```
```python
>>> from morpht5 import MorphT5ConcatForConditionalGeneration, MorphT5Tokenizer
>>> text = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει']
>>> tags = ['vi Pres Act 3 Sg', 'pp Dat Sg m', 't_ Nom Sg m', 'n_ Nom Sg m', 'vm Pres Act 2 Sg', 'vm Aor Act 2 Sg', 't_ Acc Sg m', 'n_ Acc Sg m', 'pp 2 Gen Sg', 'Conj', 'vm Pres Act 2 Sg']
>>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-en-philta-emb-concat-normalized-ob")
>>> inputs = tokenizer(
text=text,
morph_tags=tags,
return_tensors="pt"
)
>>> model = MorphT5ConcatForConditionalGeneration.from_pretrained("mrapacz/interlinear-en-philta-emb-concat-normalized-ob")
>>> outputs = model.generate(
**inputs,
max_new_tokens=100,
early_stopping=True,
)
>>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True)
>>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ")
>>> decoded
'says | to him | - | jesus | arise | take up | the | mat | of you | and | walk'
```
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{rapacz-smywinski-pohl-2025-low,
title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek",
author = "Rapacz, Maciej and
Smywi{\'n}ski-Pohl, Aleksander",
editor = "Hettiarachchi, Hansi and
Ranasinghe, Tharindu and
Rayson, Paul and
Mitkov, Ruslan and
Gaber, Mohamed and
Premasiri, Damith and
Tan, Fiona Anting and
Uyangodage, Lasitha",
booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages",
month = jan,
year = "2025",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.loreslm-1.11/",
pages = "145--165",
abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios."
}
```
| null |
Non_BioNLP
|
# Model Card for Ancient Greek to English Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to English, maintaining word-level alignment between source and target texts.
You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation).
## Model Details
### Model Description
- **Developed By:** Maciej Rapacz, AGH University of Kraków
- **Model Type:** MorphT5ConcatForConditionalGeneration
- **Base Model:** PhilTa
- **Tokenizer:** PhilTa
- **Language(s):** Ancient Greek (source) → English (target)
- **License:** CC BY-NC-SA 4.0
- **Tag Set:** OB (Oblubienica)
- **Text Preprocessing:** Normalized
- **Morphological Encoding:** emb-concat
### Model Performance
- **BLEU Score:** 40.76
- **SemScore:** 0.78
### Model Sources
- **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation
- **Paper:** https://aclanthology.org/2025.loreslm-1.11/
## Usage Example
> **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package:
> ```bash
> pip install morpht5
> ```
```python
>>> from morpht5 import MorphT5ConcatForConditionalGeneration, MorphT5Tokenizer
>>> text = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει']
>>> tags = ['vi Pres Act 3 Sg', 'pp Dat Sg m', 't_ Nom Sg m', 'n_ Nom Sg m', 'vm Pres Act 2 Sg', 'vm Aor Act 2 Sg', 't_ Acc Sg m', 'n_ Acc Sg m', 'pp 2 Gen Sg', 'Conj', 'vm Pres Act 2 Sg']
>>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-en-philta-emb-concat-normalized-ob")
>>> inputs = tokenizer(
text=text,
morph_tags=tags,
return_tensors="pt"
)
>>> model = MorphT5ConcatForConditionalGeneration.from_pretrained("mrapacz/interlinear-en-philta-emb-concat-normalized-ob")
>>> outputs = model.generate(
**inputs,
max_new_tokens=100,
early_stopping=True,
)
>>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True)
>>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ")
>>> decoded
'says | to him | - | jesus | arise | take up | the | mat | of you | and | walk'
```
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{rapacz-smywinski-pohl-2025-low,
title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek",
author = "Rapacz, Maciej and
Smywi{\'n}ski-Pohl, Aleksander",
editor = "Hettiarachchi, Hansi and
Ranasinghe, Tharindu and
Rayson, Paul and
Mitkov, Ruslan and
Gaber, Mohamed and
Premasiri, Damith and
Tan, Fiona Anting and
Uyangodage, Lasitha",
booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages",
month = jan,
year = "2025",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.loreslm-1.11/",
pages = "145--165",
abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios."
}
```
|
{"base_model": ["PhilTa"], "datasets": ["mrapacz/greek-interlinear-translations"], "language": ["en"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["bleu"]}
|
task
|
[
"TRANSLATION"
] | 43,571 |
apriandito/tipe-tweet
|
apriandito
|
text-classification
|
[
"tensorboard",
"safetensors",
"bert",
"autotrain",
"text-classification",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"region:us"
] | 2024-08-04T16:23:24Z |
2024-08-04T16:24:33+00:00
| 5 | 0 |
---
base_model: indolem/indobert-base-uncased
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.45924699306488037
f1_macro: 0.8898961097467847
f1_micro: 0.889763779527559
f1_weighted: 0.8899204214771557
precision_macro: 0.8906384102462533
precision_micro: 0.889763779527559
precision_weighted: 0.8907447175163711
recall_macro: 0.8898225957049486
recall_micro: 0.889763779527559
recall_weighted: 0.889763779527559
accuracy: 0.889763779527559
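Since the base model is `indolem/indobert-base-uncased`, the classifier expects Indonesian text. The following is only an illustrative usage sketch; the label names come from the AutoTrain run and may be raw class ids.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="apriandito/tipe-tweet")
print(classifier("Diskon besar-besaran hari ini, cek tautan di bio!"))
# Returns the predicted tweet type with its confidence score.
```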
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.45924699306488037
f1_macro: 0.8898961097467847
f1_micro: 0.889763779527559
f1_weighted: 0.8899204214771557
precision_macro: 0.8906384102462533
precision_micro: 0.889763779527559
precision_weighted: 0.8907447175163711
recall_macro: 0.8898225957049486
recall_micro: 0.889763779527559
recall_weighted: 0.889763779527559
accuracy: 0.889763779527559
|
{"base_model": "indolem/indobert-base-uncased", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,572 |
ishajo/autotrain-beproj_meeting_summarization_usingt5-49444119396
|
ishajo
|
summarization
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:ishajo/autotrain-data-beproj_meeting_summarization_usingt5",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-04-14T11:00:46Z |
2023-04-14T11:05:41+00:00
| 22 | 0 |
---
datasets:
- ishajo/autotrain-data-beproj_meeting_summarization_usingt5
language:
- en
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 1.917540145828255
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 49444119396
- CO2 Emissions (in grams): 1.9175
## Validation Metrics
- Loss: 1.830
- Rouge1: 24.863
- Rouge2: 12.063
- RougeL: 20.488
- RougeLsum: 23.375
- Gen Len: 19.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ishajo/autotrain-beproj_meeting_summarization_usingt5-49444119396
```
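You can also load the model locally with `transformers`. This is an illustrative sketch (the meeting transcript below is a made-up placeholder):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ishajo/autotrain-beproj_meeting_summarization_usingt5-49444119396",
)
meeting_notes = (
    "The team reviewed last sprint's progress, agreed to move the release "
    "date by one week, and assigned the API documentation task to the backend group."
)
print(summarizer(meeting_notes, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```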
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 49444119396
- CO2 Emissions (in grams): 1.9175
## Validation Metrics
- Loss: 1.830
- Rouge1: 24.863
- Rouge2: 12.063
- RougeL: 20.488
- RougeLsum: 23.375
- Gen Len: 19.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ishajo/autotrain-beproj_meeting_summarization_usingt5-49444119396
```
|
{"datasets": ["ishajo/autotrain-data-beproj_meeting_summarization_usingt5"], "language": ["en"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 1.917540145828255}}
|
task
|
[
"SUMMARIZATION"
] | 43,573 |
HPLT/translate-gl-en-v1.0-hplt_opus
|
HPLT
|
translation
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"gl",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"region:us"
] | 2024-02-27T11:13:37Z |
2024-03-14T00:56:42+00:00
| 18 | 0 |
---
language:
- gl
- en
license: cc-by-4.0
tags:
- translation
inference: false
---
## HPLT MT release v1.0
This repository contains the translation model for Galician-English trained with OPUS and HPLT data. The model is available in both Marian and Hugging Face formats.
### Model Info
* Source language: Galician
* Target language: English
* Dataset: OPUS and HPLT data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
* Cleaning: We used [OpusCleaner](https://github.com/hplt-project/OpusCleaner) with a set of basic rules. Details can be found in the filter files [here](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0/data/en-gl/raw/v2).
You can check out our [deliverable report](https://hplt-project.org/HPLT_D5_1___Translation_models_for_select_language_pairs.pdf), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0), and [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format. We have also converted the model into the Hugging Face format so it is compatible with `transformers`.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.gl-en.spm` from this repository.
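As a rough illustration (not taken from the HPLT documentation), a Marian decoding call typically looks like the following; consult the repository linked above for the recommended flags and beam size:
```
marian-decoder \
  -m model.npz.best-chrf.npz \
  -v model.gl-en.spm model.gl-en.spm \
  -b 6 < input.gl.txt > output.en.txt
```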
#### Using transformers
We have also converted this model to the Hugging Face format, and you can get started with the script below. **Note** that due to a [known issue](https://github.com/huggingface/transformers/issues/26216) in weight conversion, the checkpoint does not work with `transformers` versions <4.26 or >4.30. We tested and suggest `pip install transformers==4.28`.
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/translate-gl-en-v1.0-hplt_opus")
model = AutoModelForSeq2SeqLM.from_pretrained("HPLT/translate-gl-en-v1.0-hplt_opus")
inputs = ["Input goes here.", "Make sure the language is right."]
batch_tokenized = tokenizer(inputs, return_tensors="pt", padding=True)
model_output = model.generate(
**batch_tokenized, num_beams=6, max_new_tokens=512
)
batch_detokenized = tokenizer.batch_decode(
model_output,
skip_special_tokens=True,
)
print(batch_detokenized)
```
### Benchmarks
When decoded using Marian, the model has the following test scores.
| Test set | BLEU | chrF++ | COMET22 |
| -------------------------------------- | ---- | ----- | ----- |
| FLORES200 | 38.0 | 61.9 | 0.8638 |
| NTREX | 34.8 | 59.0 | 0.8416 |
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
Brought to you by researchers from the University of Edinburgh and Charles University in Prague with support from the whole HPLT consortium.
| null |
Non_BioNLP
|
## HPLT MT release v1.0
This repository contains the translation model for Galician-English trained with OPUS and HPLT data. The model is available in both Marian and Hugging Face formats.
### Model Info
* Source language: Galician
* Target language: English
* Dataset: OPUS and HPLT data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
* Cleaning: We used [OpusCleaner](https://github.com/hplt-project/OpusCleaner) with a set of basic rules. Details can be found in the filter files [here](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0/data/en-gl/raw/v2).
You can check out our [deliverable report](https://hplt-project.org/HPLT_D5_1___Translation_models_for_select_language_pairs.pdf), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0), and [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format. We have also converted the model into the Hugging Face format so it is compatible with `transformers`.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.gl-en.spm` from this repository.
#### Using transformers
We have also converted this model to the Hugging Face format, and you can get started with the script below. **Note** that due to a [known issue](https://github.com/huggingface/transformers/issues/26216) in weight conversion, the checkpoint does not work with `transformers` versions <4.26 or >4.30. We tested and suggest `pip install transformers==4.28`.
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/translate-gl-en-v1.0-hplt_opus")
model = AutoModelForSeq2SeqLM.from_pretrained("HPLT/translate-gl-en-v1.0-hplt_opus")
inputs = ["Input goes here.", "Make sure the language is right."]
batch_tokenized = tokenizer(inputs, return_tensors="pt", padding=True)
model_output = model.generate(
**batch_tokenized, num_beams=6, max_new_tokens=512
)
batch_detokenized = tokenizer.batch_decode(
model_output,
skip_special_tokens=True,
)
print(batch_detokenized)
```
### Benchmarks
When decoded using Marian, the model has the following test scores.
| Test set | BLEU | chrF++ | COMET22 |
| -------------------------------------- | ---- | ----- | ----- |
| FLORES200 | 38.0 | 61.9 | 0.8638 |
| NTREX | 34.8 | 59.0 | 0.8416 |
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
Brought to you by researchers from the University of Edinburgh and Charles University in Prague with support from the whole HPLT consortium.
|
{"language": ["gl", "en"], "license": "cc-by-4.0", "tags": ["translation"], "inference": false}
|
task
|
[
"TRANSLATION"
] | 43,574 |
zihoo/all-MiniLM-L6-v2-WMNLI-PN
|
zihoo
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1600",
"loss:SoftmaxLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-30T22:46:30Z |
2025-01-30T22:46:44+00:00
| 7 | 0 |
---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1600
- loss:SoftmaxLoss
widget:
- source_sentence: I remain conscious of those around me in the office.
sentences:
- I sense when I become irritated during my workday.
- I efficiently multitask without losing concentration.
- I can attentively engage with my coworkers easily.
- source_sentence: Denying team tensions blocks problem-solving for me.
sentences:
- I seamlessly transition focus among duties as needed.
- External disruptions throw off my concentration swiftly.
- I resist seeing unexpected changes as normal at work.
- source_sentence: Voices around sidetrack me from focusing on my tasks.
sentences:
- Dismissing work failures instead of analyzing is habitual.
- Unforeseen job changes are greeted with readiness and ease.
- I can attentively engage with my coworkers easily.
- source_sentence: Others' errors are embraced with tolerance and understanding
sentences:
- Half-listening is common during casual work conversations.
- I acknowledge how office noises impact my focus.
- I acknowledge how office noises impact my focus.
- source_sentence: Self-doubt often aligns judgment reliability post-failure.
sentences:
- Denying team tensions blocks problem-solving for me.
- Voices around sidetrack me from focusing on my tasks.
- I'm effective in prioritizing focus during demanding days.
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("zihoo/all-MiniLM-L6-v2-WMNLI-PN")
# Run inference
sentences = [
'Self-doubt often aligns judgment reliability post-failure.',
'Voices around sidetrack me from focusing on my tasks.',
'Denying team tensions blocks problem-solving for me.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,600 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 9 tokens</li><li>mean: 12.88 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.86 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>0: ~18.10%</li><li>1: ~67.90%</li><li>2: ~14.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------|:--------------------------------------------------------------------------|:---------------|
| <code>Sustaining attention during meetings isn't challenging for m</code> | <code>I am mindful of my emotional state in work environments.</code> | <code>1</code> |
| <code>Self-doubt often aligns judgment reliability post-failure.</code> | <code>I quickly overlook how my workstation affects me physically.</code> | <code>1</code> |
| <code>I can attentively engage with my coworkers easily.</code> | <code>I am rarely aware of subtle changes in my workplace.</code> | <code>1</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
### Evaluation Dataset
#### Unnamed Dataset
* Size: 400 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 400 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 9 tokens</li><li>mean: 12.98 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.93 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>0: ~18.00%</li><li>1: ~69.00%</li><li>2: ~13.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------|:--------------------------------------------------------------------------|:---------------|
| <code>I resist seeing unexpected changes as normal at work.</code> | <code>I am rarely aware of subtle changes in my workplace.</code> | <code>1</code> |
| <code>External disruptions throw off my concentration swiftly.</code> | <code>Self-doubt often aligns judgment reliability post-failure.</code> | <code>1</code> |
| <code>Navigating past task interruptions isn't an issue for my foc</code> | <code>I quickly overlook how my workstation affects me physically.</code> | <code>1</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 3e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.01
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.01
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
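For reference, a setup similar to the one described above could be sketched as follows. This is not the exact script used to train this checkpoint: the training data below is a placeholder, and only the loss construction and the non-default hyperparameters listed in this card are taken from it.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder rows; the real data has columns sentence1 / sentence2 / label (0, 1, 2).
train_dataset = Dataset.from_dict({
    "sentence1": ["I remain conscious of those around me in the office."],
    "sentence2": ["I sense when I become irritated during my workday."],
    "label": [1],
})

loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)

args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L6-v2-WMNLI-PN",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=3e-5,
    num_train_epochs=10,
    warmup_ratio=0.01,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```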
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 2.0 | 100 | 0.9471 | 0.8490 |
| 4.0 | 200 | 0.8112 | 0.7609 |
| 6.0 | 300 | 0.7527 | 0.7153 |
| 8.0 | 400 | 0.7218 | 0.6939 |
| 10.0 | 500 | 0.7086 | 0.6871 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("zihoo/all-MiniLM-L6-v2-WMNLI-PN")
# Run inference
sentences = [
'Self-doubt often aligns judgment reliability post-failure.',
'Voices around sidetrack me from focusing on my tasks.',
'Denying team tensions blocks problem-solving for me.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,600 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 9 tokens</li><li>mean: 12.88 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.86 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>0: ~18.10%</li><li>1: ~67.90%</li><li>2: ~14.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------|:--------------------------------------------------------------------------|:---------------|
| <code>Sustaining attention during meetings isn't challenging for m</code> | <code>I am mindful of my emotional state in work environments.</code> | <code>1</code> |
| <code>Self-doubt often aligns judgment reliability post-failure.</code> | <code>I quickly overlook how my workstation affects me physically.</code> | <code>1</code> |
| <code>I can attentively engage with my coworkers easily.</code> | <code>I am rarely aware of subtle changes in my workplace.</code> | <code>1</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
### Evaluation Dataset
#### Unnamed Dataset
* Size: 400 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 400 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 9 tokens</li><li>mean: 12.98 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.93 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>0: ~18.00%</li><li>1: ~69.00%</li><li>2: ~13.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------|:--------------------------------------------------------------------------|:---------------|
| <code>I resist seeing unexpected changes as normal at work.</code> | <code>I am rarely aware of subtle changes in my workplace.</code> | <code>1</code> |
| <code>External disruptions throw off my concentration swiftly.</code> | <code>Self-doubt often aligns judgment reliability post-failure.</code> | <code>1</code> |
| <code>Navigating past task interruptions isn't an issue for my foc</code> | <code>I quickly overlook how my workstation affects me physically.</code> | <code>1</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 3e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.01
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.01
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 2.0 | 100 | 0.9471 | 0.8490 |
| 4.0 | 200 | 0.8112 | 0.7609 |
| 6.0 | 300 | 0.7527 | 0.7153 |
| 8.0 | 400 | 0.7218 | 0.6939 |
| 10.0 | 500 | 0.7086 | 0.6871 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/all-MiniLM-L6-v2", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1600", "loss:SoftmaxLoss"], "widget": [{"source_sentence": "I remain conscious of those around me in the office.", "sentences": ["I sense when I become irritated during my workday.", "I efficiently multitask without losing concentration.", "I can attentively engage with my coworkers easily."]}, {"source_sentence": "Denying team tensions blocks problem-solving for me.", "sentences": ["I seamlessly transition focus among duties as needed.", "External disruptions throw off my concentration swiftly.", "I resist seeing unexpected changes as normal at work."]}, {"source_sentence": "Voices around sidetrack me from focusing on my tasks.", "sentences": ["Dismissing work failures instead of analyzing is habitual.", "Unforeseen job changes are greeted with readiness and ease.", "I can attentively engage with my coworkers easily."]}, {"source_sentence": "Others' errors are embraced with tolerance and understanding", "sentences": ["Half-listening is common during casual work conversations.", "I acknowledge how office noises impact my focus.", "I acknowledge how office noises impact my focus."]}, {"source_sentence": "Self-doubt often aligns judgment reliability post-failure.", "sentences": ["Denying team tensions blocks problem-solving for me.", "Voices around sidetrack me from focusing on my tasks.", "I'm effective in prioritizing focus during demanding days."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,576 |
yooonsangbeom/bert-base-finetuned-ynat
|
yooonsangbeom
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-25T07:24:37Z |
2023-10-26T04:34:34+00:00
| 20 | 0 |
---
base_model: klue/bert-base
datasets:
- klue
license: cc-by-sa-4.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-base-finetuned-ynat
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: klue
type: klue
config: ynat
split: validation
args: ynat
metrics:
- type: accuracy
value: 0.8659273086636653
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3691
- Accuracy: 0.8659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
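For context, the sketch below shows how these hyperparameters could be plugged into a standard `Trainer` run on the KLUE YNAT subset. It is a minimal illustration rather than the exact script behind this card; the `output_dir` name is hypothetical, and `num_labels=7` follows YNAT's seven topic classes.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
# YNAT is the topic-classification subset of KLUE; each example has a "title" and a "label".
dataset = load_dataset("klue", "ynat")
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
def tokenize(batch):
return tokenizer(batch["title"], truncation=True)
encoded = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("klue/bert-base", num_labels=7)
# Mirror the hyperparameters listed above.
args = TrainingArguments(
output_dir="bert-base-finetuned-ynat", # hypothetical output path
learning_rate=2e-5,
per_device_train_batch_size=512,
per_device_eval_batch_size=512,
num_train_epochs=5,
seed=42,
lr_scheduler_type="linear",
evaluation_strategy="epoch",
)
trainer = Trainer(
model=model,
args=args,
train_dataset=encoded["train"],
eval_dataset=encoded["validation"],
tokenizer=tokenizer, # enables dynamic padding via the default data collator
)
trainer.train()
```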
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 90 | 0.4090 | 0.8599 |
| No log | 2.0 | 180 | 0.3929 | 0.8578 |
| No log | 3.0 | 270 | 0.3703 | 0.8648 |
| No log | 4.0 | 360 | 0.3714 | 0.8631 |
| No log | 5.0 | 450 | 0.3691 | 0.8659 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3691
- Accuracy: 0.8659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 90 | 0.4090 | 0.8599 |
| No log | 2.0 | 180 | 0.3929 | 0.8578 |
| No log | 3.0 | 270 | 0.3703 | 0.8648 |
| No log | 4.0 | 360 | 0.3714 | 0.8631 |
| No log | 5.0 | 450 | 0.3691 | 0.8659 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.14.1
|
{"base_model": "klue/bert-base", "datasets": ["klue"], "license": "cc-by-sa-4.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-finetuned-ynat", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "klue", "type": "klue", "config": "ynat", "split": "validation", "args": "ynat"}, "metrics": [{"type": "accuracy", "value": 0.8659273086636653, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,577 |
vngrs-ai/VBART-Large-Title-Generation-from-News
|
vngrs-ai
|
text2text-generation
|
[
"transformers",
"tf",
"safetensors",
"mbart",
"text2text-generation",
"tr",
"dataset:vngrs-ai/vngrs-web-corpus",
"arxiv:2403.01308",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-03-12T13:10:29Z |
2024-03-23T08:37:11+00:00
| 15 | 0 |
---
datasets:
- vngrs-ai/vngrs-web-corpus
language:
- tr
library_name: transformers
license: cc-by-nc-sa-4.0
pipeline_tag: text2text-generation
inference:
parameters:
max_new_tokens: 32
arXiv: 2403.01308
---
# VBART Model Card
## Model Description
VBART is the first sequence-to-sequence LLM pre-trained on Turkish corpora from scratch on a large scale. It was pre-trained by VNGRS in February 2023.
The model is capable of conditional text generation tasks such as text summarization, paraphrasing, and title generation when fine-tuned.
It outperforms its multilingual counterparts despite being much smaller than other implementations.
This repository contains the fine-tuned TensorFlow and Safetensors weights of VBART for the task of generating titles from news bodies.
- **Developed by:** [VNGRS-AI](https://vngrs.com/ai/)
- **Model type:** Transformer encoder-decoder based on mBART architecture
- **Language(s) (NLP):** Turkish
- **License:** CC BY-NC-SA 4.0
- **Finetuned from:** VBART-Large
- **Paper:** [arXiv](https://arxiv.org/abs/2403.01308)
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("vngrs-ai/VBART-Large-Title-Generation-from-News",
model_input_names=['input_ids', 'attention_mask'])
# Uncomment the device_map kwarg and delete the closing bracket to use model for inference on GPU
model = AutoModelForSeq2SeqLM.from_pretrained("vngrs-ai/VBART-Large-Title-Generation-from-News")#, device_map="auto")
input_text="..."
token_input = tokenizer(input_text, return_tensors="pt")#.to('cuda')
outputs = model.generate(**token_input)
print(tokenizer.decode(outputs[0]))
```
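The call above uses the default generation settings. The widget configuration of this repository caps output at `max_new_tokens: 32`, and the same limit can be requested explicitly; the beam-search setting below is only an illustrative choice, not the card's official decoding recipe.
```python
outputs = model.generate(**token_input, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```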
## Training Details
### Training Data
The base model is pre-trained on [vngrs-web-corpus](https://huggingface.co/datasets/vngrs-ai/vngrs-web-corpus). It is curated by cleaning and filtering Turkish parts of [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4) datasets. These datasets consist of documents of unstructured web crawl data. More information about the dataset can be found on their respective pages. Data is filtered using a set of heuristics and certain rules, explained in the appendix of our [paper](https://arxiv.org/abs/2403.01308).
The fine-tuning dataset is the Turkish sections of [MLSum](https://huggingface.co/datasets/mlsum), [TRNews](https://huggingface.co/datasets/batubayk/TR-News) and [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum) datasets.
### Limitations
This model is fine-tuned for title generation tasks. It is not intended for any other use case and cannot be fine-tuned to another task while retaining the full performance of the base model. It is also not guaranteed that this model will work without the specified prompts.
### Training Procedure
Pre-trained for 30 days on a total of 708B tokens. Fine-tuned for 25 epochs.
#### Hardware
- **GPUs**: 8 x Nvidia A100-80 GB
#### Software
- TensorFlow
#### Hyperparameters
##### Pretraining
- **Training regime:** fp16 mixed precision
- **Training objective**: Sentence permutation and span masking (using mask lengths sampled from Poisson distribution λ=3.5, masking 30% of tokens)
- **Optimizer** : Adam optimizer (β1 = 0.9, β2 = 0.98, Ɛ = 1e-6)
- **Scheduler**: Custom scheduler from the original Transformers paper (20,000 warm-up steps)
- **Dropout**: 0.1 (dropped to 0.05 and then to 0 in the last 165k and 205k steps, respectively)
- **Initial Learning rate**: 5e-6
- **Training tokens**: 708B
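The "custom scheduler from the original Transformers paper" listed above is the inverse-square-root warm-up rule; a small sketch of that rule with the quoted 20,000 warm-up steps is shown below. The model dimension and the way the reported 5e-6 initial rate is folded in are assumptions, since the card does not spell them out.
```python
def transformer_lr(step: int, warmup_steps: int = 20_000, d_model: int = 1024) -> float:
"""Learning-rate rule from 'Attention Is All You Need' (Vaswani et al., 2017).
d_model=1024 is an assumed value for VBART-Large; the exact scaling used
during pre-training is not stated in this card.
"""
step = max(step, 1)
return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
# Rate rises linearly during warm-up, then decays as 1/sqrt(step).
for step in (1, 5_000, 20_000, 100_000, 500_000):
print(f"step {step:>7}: lr = {transformer_lr(step):.3e}")
```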
##### Fine-tuning
- **Training regime:** fp16 mixed precision
- **Optimizer** : Adam optimizer (β1 = 0.9, β2 = 0.98, Ɛ = 1e-6)
- **Scheduler**: Linear decay scheduler
- **Dropout**: 0.1
- **Learning rate**: 5e-5
- **Fine-tune epochs**: 25
#### Metrics

## Citation
```
@article{turker2024vbart,
title={VBART: The Turkish LLM},
author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
journal={arXiv preprint arXiv:2403.01308},
year={2024}
}
```
| null |
Non_BioNLP
|
# VBART Model Card
## Model Description
VBART is the first sequence-to-sequence LLM pre-trained on Turkish corpora from scratch on a large scale. It was pre-trained by VNGRS in February 2023.
The model is capable of conditional text generation tasks such as text summarization, paraphrasing, and title generation when fine-tuned.
It outperforms its multilingual counterparts despite being much smaller than other implementations.
This repository contains the fine-tuned TensorFlow and Safetensors weights of VBART for the task of generating titles from news bodies.
- **Developed by:** [VNGRS-AI](https://vngrs.com/ai/)
- **Model type:** Transformer encoder-decoder based on mBART architecture
- **Language(s) (NLP):** Turkish
- **License:** CC BY-NC-SA 4.0
- **Finetuned from:** VBART-Large
- **Paper:** [arXiv](https://arxiv.org/abs/2403.01308)
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("vngrs-ai/VBART-Large-Title-Generation-from-News",
model_input_names=['input_ids', 'attention_mask'])
# Uncomment the device_map kwarg and delete the closing bracket to use model for inference on GPU
model = AutoModelForSeq2SeqLM.from_pretrained("vngrs-ai/VBART-Large-Title-Generation-from-News")#, device_map="auto")
input_text="..."
token_input = tokenizer(input_text, return_tensors="pt")#.to('cuda')
outputs = model.generate(**token_input)
print(tokenizer.decode(outputs[0]))
```
## Training Details
### Training Data
The base model is pre-trained on [vngrs-web-corpus](https://huggingface.co/datasets/vngrs-ai/vngrs-web-corpus). It is curated by cleaning and filtering Turkish parts of [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4) datasets. These datasets consist of documents of unstructured web crawl data. More information about the dataset can be found on their respective pages. Data is filtered using a set of heuristics and certain rules, explained in the appendix of our [paper](https://arxiv.org/abs/2403.01308).
The fine-tuning dataset is the Turkish sections of [MLSum](https://huggingface.co/datasets/mlsum), [TRNews](https://huggingface.co/datasets/batubayk/TR-News) and [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum) datasets.
### Limitations
This model is fine-tuned for title generation tasks. It is not intended for any other use case and cannot be fine-tuned to another task while retaining the full performance of the base model. It is also not guaranteed that this model will work without the specified prompts.
### Training Procedure
Pre-trained for 30 days on a total of 708B tokens. Fine-tuned for 25 epochs.
#### Hardware
- **GPUs**: 8 x Nvidia A100-80 GB
#### Software
- TensorFlow
#### Hyperparameters
##### Pretraining
- **Training regime:** fp16 mixed precision
- **Training objective**: Sentence permutation and span masking (using mask lengths sampled from Poisson distribution λ=3.5, masking 30% of tokens)
- **Optimizer** : Adam optimizer (β1 = 0.9, β2 = 0.98, Ɛ = 1e-6)
- **Scheduler**: Custom scheduler from the original Transformers paper (20,000 warm-up steps)
- **Dropout**: 0.1 (dropped to 0.05 and then to 0 in the last 165k and 205k steps, respectively)
- **Initial Learning rate**: 5e-6
- **Training tokens**: 708B
##### Fine-tuning
- **Training regime:** fp16 mixed precision
- **Optimizer** : Adam optimizer (β1 = 0.9, β2 = 0.98, Ɛ = 1e-6)
- **Scheduler**: Linear decay scheduler
- **Dropout**: 0.1
- **Learning rate**: 5e-5
- **Fine-tune epochs**: 25
#### Metrics

## Citation
```
@article{turker2024vbart,
title={VBART: The Turkish LLM},
author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
journal={arXiv preprint arXiv:2403.01308},
year={2024}
}
```
|
{"datasets": ["vngrs-ai/vngrs-web-corpus"], "language": ["tr"], "library_name": "transformers", "license": "cc-by-nc-sa-4.0", "pipeline_tag": "text2text-generation", "inference": {"parameters": {"max_new_tokens": 32}}, "arXiv": 2403.01308}
|
task
|
[
"SUMMARIZATION",
"PARAPHRASING"
] | 43,578 |
srikarvar/fine_tuned_model_5
|
srikarvar
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2332",
"loss:OnlineContrastiveLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-28T10:40:08Z |
2024-08-28T10:40:37+00:00
| 10 | 0 |
---
base_model: intfloat/multilingual-e5-small
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- dot_accuracy
- dot_accuracy_threshold
- dot_f1
- dot_f1_threshold
- dot_precision
- dot_recall
- dot_ap
- manhattan_accuracy
- manhattan_accuracy_threshold
- manhattan_f1
- manhattan_f1_threshold
- manhattan_precision
- manhattan_recall
- manhattan_ap
- euclidean_accuracy
- euclidean_accuracy_threshold
- euclidean_f1
- euclidean_f1_threshold
- euclidean_precision
- euclidean_recall
- euclidean_ap
- max_accuracy
- max_accuracy_threshold
- max_f1
- max_f1_threshold
- max_precision
- max_recall
- max_ap
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2332
- loss:OnlineContrastiveLoss
widget:
- source_sentence: Who discovered the structure of DNA?
sentences:
- Who discovered the structure of RNA?
- Steps to apply for a scholarship
- First human to set foot on the moon
- source_sentence: Who directed 'Schindler's List'?
sentences:
- Who directed 'The Pianist'?
- What are some high paying jobs for a fresher with an M.Tech in biotechnology?
- Where can I find gluten-free restaurants?
- source_sentence: Which is the best shares to purchase and sale daily trading?
sentences:
- In Sydney, which company would be the best to get advice for Business Sales &
Purchases?
- Steps to adjust phone settings
- Is a 3.8 GPA sufficient to get into a top school?
- source_sentence: Nd she is always sad?
sentences:
- Where to purchase organic produce
- Aerodynamically what happens when propellor rotates?
- How is unsupervised learning used for data insights?
- source_sentence: How to bake a pie?
sentences:
- What is the population of Chicago?
- Steps to bake a pie
- 'What is the distribution of traffic between Google organic search results? e.g.
#1 vs. #2 in rankings, first page vs. second page'
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: pair class dev
type: pair-class-dev
metrics:
- type: cosine_accuracy
value: 0.8653846153846154
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.872760534286499
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.8656716417910447
name: Cosine F1
- type: cosine_f1_threshold
value: 0.8200240135192871
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.8285714285714286
name: Cosine Precision
- type: cosine_recall
value: 0.90625
name: Cosine Recall
- type: cosine_ap
value: 0.9322624848213654
name: Cosine Ap
- type: dot_accuracy
value: 0.8653846153846154
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 0.872760534286499
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.8656716417910447
name: Dot F1
- type: dot_f1_threshold
value: 0.8200240135192871
name: Dot F1 Threshold
- type: dot_precision
value: 0.8285714285714286
name: Dot Precision
- type: dot_recall
value: 0.90625
name: Dot Recall
- type: dot_ap
value: 0.9322624848213654
name: Dot Ap
- type: manhattan_accuracy
value: 0.8692307692307693
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 9.252302169799805
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.8721804511278196
name: Manhattan F1
- type: manhattan_f1_threshold
value: 9.252302169799805
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.8405797101449275
name: Manhattan Precision
- type: manhattan_recall
value: 0.90625
name: Manhattan Recall
- type: manhattan_ap
value: 0.9322911488571455
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.8653846153846154
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 0.5044240355491638
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.8656716417910447
name: Euclidean F1
- type: euclidean_f1_threshold
value: 0.5999571084976196
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.8285714285714286
name: Euclidean Precision
- type: euclidean_recall
value: 0.90625
name: Euclidean Recall
- type: euclidean_ap
value: 0.9322624848213654
name: Euclidean Ap
- type: max_accuracy
value: 0.8692307692307693
name: Max Accuracy
- type: max_accuracy_threshold
value: 9.252302169799805
name: Max Accuracy Threshold
- type: max_f1
value: 0.8721804511278196
name: Max F1
- type: max_f1_threshold
value: 9.252302169799805
name: Max F1 Threshold
- type: max_precision
value: 0.8405797101449275
name: Max Precision
- type: max_recall
value: 0.90625
name: Max Recall
- type: max_ap
value: 0.9322911488571455
name: Max Ap
- task:
type: binary-classification
name: Binary Classification
dataset:
name: pair class test
type: pair-class-test
metrics:
- type: cosine_accuracy
value: 0.916
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.844039261341095
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.907488986784141
name: Cosine F1
- type: cosine_f1_threshold
value: 0.8230063319206238
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.8728813559322034
name: Cosine Precision
- type: cosine_recall
value: 0.944954128440367
name: Cosine Recall
- type: cosine_ap
value: 0.96095333014952
name: Cosine Ap
- type: dot_accuracy
value: 0.916
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 0.8440393209457397
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.907488986784141
name: Dot F1
- type: dot_f1_threshold
value: 0.8230063319206238
name: Dot F1 Threshold
- type: dot_precision
value: 0.8728813559322034
name: Dot Precision
- type: dot_recall
value: 0.944954128440367
name: Dot Recall
- type: dot_ap
value: 0.96095333014952
name: Dot Ap
- type: manhattan_accuracy
value: 0.916
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 8.581160545349121
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.907488986784141
name: Manhattan F1
- type: manhattan_f1_threshold
value: 9.327116012573242
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.8728813559322034
name: Manhattan Precision
- type: manhattan_recall
value: 0.944954128440367
name: Manhattan Recall
- type: manhattan_ap
value: 0.9612698712458685
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.916
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 0.5584936141967773
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.907488986784141
name: Euclidean F1
- type: euclidean_f1_threshold
value: 0.594968318939209
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.8728813559322034
name: Euclidean Precision
- type: euclidean_recall
value: 0.944954128440367
name: Euclidean Recall
- type: euclidean_ap
value: 0.96095333014952
name: Euclidean Ap
- type: max_accuracy
value: 0.916
name: Max Accuracy
- type: max_accuracy_threshold
value: 8.581160545349121
name: Max Accuracy Threshold
- type: max_f1
value: 0.907488986784141
name: Max F1
- type: max_f1_threshold
value: 9.327116012573242
name: Max F1 Threshold
- type: max_precision
value: 0.8728813559322034
name: Max Precision
- type: max_recall
value: 0.944954128440367
name: Max Recall
- type: max_ap
value: 0.9612698712458685
name: Max Ap
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/fine_tuned_model_5")
# Run inference
sentences = [
'How to bake a pie?',
'Steps to bake a pie',
'What is the population of Chicago?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `pair-class-dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.8654 |
| cosine_accuracy_threshold | 0.8728 |
| cosine_f1 | 0.8657 |
| cosine_f1_threshold | 0.82 |
| cosine_precision | 0.8286 |
| cosine_recall | 0.9062 |
| cosine_ap | 0.9323 |
| dot_accuracy | 0.8654 |
| dot_accuracy_threshold | 0.8728 |
| dot_f1 | 0.8657 |
| dot_f1_threshold | 0.82 |
| dot_precision | 0.8286 |
| dot_recall | 0.9062 |
| dot_ap | 0.9323 |
| manhattan_accuracy | 0.8692 |
| manhattan_accuracy_threshold | 9.2523 |
| manhattan_f1 | 0.8722 |
| manhattan_f1_threshold | 9.2523 |
| manhattan_precision | 0.8406 |
| manhattan_recall | 0.9062 |
| manhattan_ap | 0.9323 |
| euclidean_accuracy | 0.8654 |
| euclidean_accuracy_threshold | 0.5044 |
| euclidean_f1 | 0.8657 |
| euclidean_f1_threshold | 0.6 |
| euclidean_precision | 0.8286 |
| euclidean_recall | 0.9062 |
| euclidean_ap | 0.9323 |
| max_accuracy | 0.8692 |
| max_accuracy_threshold | 9.2523 |
| max_f1 | 0.8722 |
| max_f1_threshold | 9.2523 |
| max_precision | 0.8406 |
| max_recall | 0.9062 |
| **max_ap** | **0.9323** |
#### Binary Classification
* Dataset: `pair-class-test`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.916 |
| cosine_accuracy_threshold | 0.844 |
| cosine_f1 | 0.9075 |
| cosine_f1_threshold | 0.823 |
| cosine_precision | 0.8729 |
| cosine_recall | 0.945 |
| cosine_ap | 0.961 |
| dot_accuracy | 0.916 |
| dot_accuracy_threshold | 0.844 |
| dot_f1 | 0.9075 |
| dot_f1_threshold | 0.823 |
| dot_precision | 0.8729 |
| dot_recall | 0.945 |
| dot_ap | 0.961 |
| manhattan_accuracy | 0.916 |
| manhattan_accuracy_threshold | 8.5812 |
| manhattan_f1 | 0.9075 |
| manhattan_f1_threshold | 9.3271 |
| manhattan_precision | 0.8729 |
| manhattan_recall | 0.945 |
| manhattan_ap | 0.9613 |
| euclidean_accuracy | 0.916 |
| euclidean_accuracy_threshold | 0.5585 |
| euclidean_f1 | 0.9075 |
| euclidean_f1_threshold | 0.595 |
| euclidean_precision | 0.8729 |
| euclidean_recall | 0.945 |
| euclidean_ap | 0.961 |
| max_accuracy | 0.916 |
| max_accuracy_threshold | 8.5812 |
| max_f1 | 0.9075 |
| max_f1_threshold | 9.3271 |
| max_precision | 0.8729 |
| max_recall | 0.945 |
| **max_ap** | **0.9613** |
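Both tables above were produced with the `BinaryClassificationEvaluator` linked in the section headers, which sweeps the decision threshold for each similarity function. A minimal sketch of reproducing such an evaluation is shown below; the two pairs are placeholders, whereas the actual `pair-class-dev` split used for this card contains 260 labelled pairs.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator
model = SentenceTransformer("srikarvar/fine_tuned_model_5")
# Placeholder pairs: label 1 = same intent, label 0 = different intent.
evaluator = BinaryClassificationEvaluator(
sentences1=["How to cook spaghetti?", "How to create a mobile app?"],
sentences2=["Steps to cook spaghetti", "How to create a desktop application?"],
labels=[1, 0],
name="pair-class-dev",
)
print(evaluator(model))
```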
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 2,332 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 12.96 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.67 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>0: ~52.80%</li><li>1: ~47.20%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-----------------------------------------------------------------------|:---------------------------------------------------------|:---------------|
| <code>How to bake a chocolate cake?</code> | <code>Recipe for baking a chocolate cake</code> | <code>1</code> |
| <code>Why do girls want to be friends with the guy they reject?</code> | <code>How do guys feel after rejecting a girl?</code> | <code>0</code> |
| <code>How can I stop being afraid of working?</code> | <code>How do you stop being afraid of everything?</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
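As a rough illustration of how pair-labelled data of this shape can be fed to `OnlineContrastiveLoss`, the sketch below uses the classic sentence-transformers fit API. The two pairs are taken from the samples above, while the batch size, epochs, and warm-up steps are placeholders rather than the card's exact configuration (that is listed under Training Hyperparameters further down).
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses
model = SentenceTransformer("intfloat/multilingual-e5-small")
# Label 1 = duplicate / same intent, label 0 = different intent.
train_examples = [
InputExample(texts=["How to bake a chocolate cake?",
"Recipe for baking a chocolate cake"], label=1),
InputExample(texts=["Why do girls want to be friends with the guy they reject?",
"How do guys feel after rejecting a girl?"], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
# OnlineContrastiveLoss only back-propagates the hard positive and hard negative pairs in a batch.
train_loss = losses.OnlineContrastiveLoss(model=model)
model.fit(
train_objectives=[(train_dataloader, train_loss)],
epochs=4,
warmup_steps=10, # placeholder; the card uses a 0.1 warm-up ratio
)
```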
### Evaluation Dataset
#### Unnamed Dataset
* Size: 260 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.44 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.99 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>0: ~50.77%</li><li>1: ~49.23%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-----------------------------------------|:--------------------------------------------------|:---------------|
| <code>How to cook spaghetti?</code> | <code>Steps to cook spaghetti</code> | <code>1</code> |
| <code>How to create a mobile app?</code> | <code>How to create a desktop application?</code> | <code>0</code> |
| <code>How can I update my resume?</code> | <code>Steps to revise and update a resume</code> | <code>1</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | pair-class-dev_max_ap | pair-class-test_max_ap |
|:-------:|:------:|:-------------:|:----------:|:---------------------:|:----------------------:|
| 0 | 0 | - | - | 0.6979 | - |
| 0.2740 | 10 | 1.9007 | - | - | - |
| 0.5479 | 20 | 1.1616 | - | - | - |
| 0.8219 | 30 | 0.9094 | - | - | - |
| 0.9863 | 36 | - | 0.7692 | 0.9117 | - |
| 1.0959 | 40 | 0.9105 | - | - | - |
| 1.3699 | 50 | 0.6629 | - | - | - |
| 1.6438 | 60 | 0.4243 | - | - | - |
| 1.9178 | 70 | 0.4729 | - | - | - |
| **2.0** | **73** | **-** | **0.7294** | **0.9306** | **-** |
| 2.1918 | 80 | 0.4897 | - | - | - |
| 2.4658 | 90 | 0.3103 | - | - | - |
| 2.7397 | 100 | 0.2316 | - | - | - |
| 2.9863 | 109 | - | 0.7807 | 0.9311 | - |
| 3.0137 | 110 | 0.3179 | - | - | - |
| 3.2877 | 120 | 0.1975 | - | - | - |
| 3.5616 | 130 | 0.1477 | - | - | - |
| 3.8356 | 140 | 0.1034 | - | - | - |
| 3.9452 | 144 | - | 0.8132 | 0.9323 | 0.9613 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/fine_tuned_model_5")
# Run inference
sentences = [
'How to bake a pie?',
'Steps to bake a pie',
'What is the population of Chicago?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `pair-class-dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.8654 |
| cosine_accuracy_threshold | 0.8728 |
| cosine_f1 | 0.8657 |
| cosine_f1_threshold | 0.82 |
| cosine_precision | 0.8286 |
| cosine_recall | 0.9062 |
| cosine_ap | 0.9323 |
| dot_accuracy | 0.8654 |
| dot_accuracy_threshold | 0.8728 |
| dot_f1 | 0.8657 |
| dot_f1_threshold | 0.82 |
| dot_precision | 0.8286 |
| dot_recall | 0.9062 |
| dot_ap | 0.9323 |
| manhattan_accuracy | 0.8692 |
| manhattan_accuracy_threshold | 9.2523 |
| manhattan_f1 | 0.8722 |
| manhattan_f1_threshold | 9.2523 |
| manhattan_precision | 0.8406 |
| manhattan_recall | 0.9062 |
| manhattan_ap | 0.9323 |
| euclidean_accuracy | 0.8654 |
| euclidean_accuracy_threshold | 0.5044 |
| euclidean_f1 | 0.8657 |
| euclidean_f1_threshold | 0.6 |
| euclidean_precision | 0.8286 |
| euclidean_recall | 0.9062 |
| euclidean_ap | 0.9323 |
| max_accuracy | 0.8692 |
| max_accuracy_threshold | 9.2523 |
| max_f1 | 0.8722 |
| max_f1_threshold | 9.2523 |
| max_precision | 0.8406 |
| max_recall | 0.9062 |
| **max_ap** | **0.9323** |
#### Binary Classification
* Dataset: `pair-class-test`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.916 |
| cosine_accuracy_threshold | 0.844 |
| cosine_f1 | 0.9075 |
| cosine_f1_threshold | 0.823 |
| cosine_precision | 0.8729 |
| cosine_recall | 0.945 |
| cosine_ap | 0.961 |
| dot_accuracy | 0.916 |
| dot_accuracy_threshold | 0.844 |
| dot_f1 | 0.9075 |
| dot_f1_threshold | 0.823 |
| dot_precision | 0.8729 |
| dot_recall | 0.945 |
| dot_ap | 0.961 |
| manhattan_accuracy | 0.916 |
| manhattan_accuracy_threshold | 8.5812 |
| manhattan_f1 | 0.9075 |
| manhattan_f1_threshold | 9.3271 |
| manhattan_precision | 0.8729 |
| manhattan_recall | 0.945 |
| manhattan_ap | 0.9613 |
| euclidean_accuracy | 0.916 |
| euclidean_accuracy_threshold | 0.5585 |
| euclidean_f1 | 0.9075 |
| euclidean_f1_threshold | 0.595 |
| euclidean_precision | 0.8729 |
| euclidean_recall | 0.945 |
| euclidean_ap | 0.961 |
| max_accuracy | 0.916 |
| max_accuracy_threshold | 8.5812 |
| max_f1 | 0.9075 |
| max_f1_threshold | 9.3271 |
| max_precision | 0.8729 |
| max_recall | 0.945 |
| **max_ap** | **0.9613** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 2,332 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 12.96 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.67 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>0: ~52.80%</li><li>1: ~47.20%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-----------------------------------------------------------------------|:---------------------------------------------------------|:---------------|
| <code>How to bake a chocolate cake?</code> | <code>Recipe for baking a chocolate cake</code> | <code>1</code> |
| <code>Why do girls want to be friends with the guy they reject?</code> | <code>How do guys feel after rejecting a girl?</code> | <code>0</code> |
| <code>How can I stop being afraid of working?</code> | <code>How do you stop being afraid of everything?</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Evaluation Dataset
#### Unnamed Dataset
* Size: 260 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.44 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.99 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>0: ~50.77%</li><li>1: ~49.23%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-----------------------------------------|:--------------------------------------------------|:---------------|
| <code>How to cook spaghetti?</code> | <code>Steps to cook spaghetti</code> | <code>1</code> |
| <code>How to create a mobile app?</code> | <code>How to create a desktop application?</code> | <code>0</code> |
| <code>How can I update my resume?</code> | <code>Steps to revise and update a resume</code> | <code>1</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | pair-class-dev_max_ap | pair-class-test_max_ap |
|:-------:|:------:|:-------------:|:----------:|:---------------------:|:----------------------:|
| 0 | 0 | - | - | 0.6979 | - |
| 0.2740 | 10 | 1.9007 | - | - | - |
| 0.5479 | 20 | 1.1616 | - | - | - |
| 0.8219 | 30 | 0.9094 | - | - | - |
| 0.9863 | 36 | - | 0.7692 | 0.9117 | - |
| 1.0959 | 40 | 0.9105 | - | - | - |
| 1.3699 | 50 | 0.6629 | - | - | - |
| 1.6438 | 60 | 0.4243 | - | - | - |
| 1.9178 | 70 | 0.4729 | - | - | - |
| **2.0** | **73** | **-** | **0.7294** | **0.9306** | **-** |
| 2.1918 | 80 | 0.4897 | - | - | - |
| 2.4658 | 90 | 0.3103 | - | - | - |
| 2.7397 | 100 | 0.2316 | - | - | - |
| 2.9863 | 109 | - | 0.7807 | 0.9311 | - |
| 3.0137 | 110 | 0.3179 | - | - | - |
| 3.2877 | 120 | 0.1975 | - | - | - |
| 3.5616 | 130 | 0.1477 | - | - | - |
| 3.8356 | 140 | 0.1034 | - | - | - |
| 3.9452 | 144 | - | 0.8132 | 0.9323 | 0.9613 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "intfloat/multilingual-e5-small", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy", "cosine_accuracy_threshold", "cosine_f1", "cosine_f1_threshold", "cosine_precision", "cosine_recall", "cosine_ap", "dot_accuracy", "dot_accuracy_threshold", "dot_f1", "dot_f1_threshold", "dot_precision", "dot_recall", "dot_ap", "manhattan_accuracy", "manhattan_accuracy_threshold", "manhattan_f1", "manhattan_f1_threshold", "manhattan_precision", "manhattan_recall", "manhattan_ap", "euclidean_accuracy", "euclidean_accuracy_threshold", "euclidean_f1", "euclidean_f1_threshold", "euclidean_precision", "euclidean_recall", "euclidean_ap", "max_accuracy", "max_accuracy_threshold", "max_f1", "max_f1_threshold", "max_precision", "max_recall", "max_ap"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:2332", "loss:OnlineContrastiveLoss"], "widget": [{"source_sentence": "Who discovered the structure of DNA?", "sentences": ["Who discovered the structure of RNA?", "Steps to apply for a scholarship", "First human to set foot on the moon"]}, {"source_sentence": "Who directed 'Schindler's List'?", "sentences": ["Who directed 'The Pianist'?", "What are some high paying jobs for a fresher with an M.Tech in biotechnology?", "Where can I find gluten-free restaurants?"]}, {"source_sentence": "Which is the best shares to purchase and sale daily trading?", "sentences": ["In Sydney, which company would be the best to get advice for Business Sales & Purchases?", "Steps to adjust phone settings", "Is a 3.8 GPA sufficient to get into a top school?"]}, {"source_sentence": "Nd she is always sad?", "sentences": ["Where to purchase organic produce", "Aerodynamically what happens when propellor rotates?", "How is unsupervised learning used for data insights?"]}, {"source_sentence": "How to bake a pie?", "sentences": ["What is the population of Chicago?", "Steps to bake a pie", "What is the distribution of traffic between Google organic search results? e.g. #1 vs. #2 in rankings, first page vs. 
second page"]}], "model-index": [{"name": "SentenceTransformer based on intfloat/multilingual-e5-small", "results": [{"task": {"type": "binary-classification", "name": "Binary Classification"}, "dataset": {"name": "pair class dev", "type": "pair-class-dev"}, "metrics": [{"type": "cosine_accuracy", "value": 0.8653846153846154, "name": "Cosine Accuracy"}, {"type": "cosine_accuracy_threshold", "value": 0.872760534286499, "name": "Cosine Accuracy Threshold"}, {"type": "cosine_f1", "value": 0.8656716417910447, "name": "Cosine F1"}, {"type": "cosine_f1_threshold", "value": 0.8200240135192871, "name": "Cosine F1 Threshold"}, {"type": "cosine_precision", "value": 0.8285714285714286, "name": "Cosine Precision"}, {"type": "cosine_recall", "value": 0.90625, "name": "Cosine Recall"}, {"type": "cosine_ap", "value": 0.9322624848213654, "name": "Cosine Ap"}, {"type": "dot_accuracy", "value": 0.8653846153846154, "name": "Dot Accuracy"}, {"type": "dot_accuracy_threshold", "value": 0.872760534286499, "name": "Dot Accuracy Threshold"}, {"type": "dot_f1", "value": 0.8656716417910447, "name": "Dot F1"}, {"type": "dot_f1_threshold", "value": 0.8200240135192871, "name": "Dot F1 Threshold"}, {"type": "dot_precision", "value": 0.8285714285714286, "name": "Dot Precision"}, {"type": "dot_recall", "value": 0.90625, "name": "Dot Recall"}, {"type": "dot_ap", "value": 0.9322624848213654, "name": "Dot Ap"}, {"type": "manhattan_accuracy", "value": 0.8692307692307693, "name": "Manhattan Accuracy"}, {"type": "manhattan_accuracy_threshold", "value": 9.252302169799805, "name": "Manhattan Accuracy Threshold"}, {"type": "manhattan_f1", "value": 0.8721804511278196, "name": "Manhattan F1"}, {"type": "manhattan_f1_threshold", "value": 9.252302169799805, "name": "Manhattan F1 Threshold"}, {"type": "manhattan_precision", "value": 0.8405797101449275, "name": "Manhattan Precision"}, {"type": "manhattan_recall", "value": 0.90625, "name": "Manhattan Recall"}, {"type": "manhattan_ap", "value": 0.9322911488571455, "name": "Manhattan Ap"}, {"type": "euclidean_accuracy", "value": 0.8653846153846154, "name": "Euclidean Accuracy"}, {"type": "euclidean_accuracy_threshold", "value": 0.5044240355491638, "name": "Euclidean Accuracy Threshold"}, {"type": "euclidean_f1", "value": 0.8656716417910447, "name": "Euclidean F1"}, {"type": "euclidean_f1_threshold", "value": 0.5999571084976196, "name": "Euclidean F1 Threshold"}, {"type": "euclidean_precision", "value": 0.8285714285714286, "name": "Euclidean Precision"}, {"type": "euclidean_recall", "value": 0.90625, "name": "Euclidean Recall"}, {"type": "euclidean_ap", "value": 0.9322624848213654, "name": "Euclidean Ap"}, {"type": "max_accuracy", "value": 0.8692307692307693, "name": "Max Accuracy"}, {"type": "max_accuracy_threshold", "value": 9.252302169799805, "name": "Max Accuracy Threshold"}, {"type": "max_f1", "value": 0.8721804511278196, "name": "Max F1"}, {"type": "max_f1_threshold", "value": 9.252302169799805, "name": "Max F1 Threshold"}, {"type": "max_precision", "value": 0.8405797101449275, "name": "Max Precision"}, {"type": "max_recall", "value": 0.90625, "name": "Max Recall"}, {"type": "max_ap", "value": 0.9322911488571455, "name": "Max Ap"}]}, {"task": {"type": "binary-classification", "name": "Binary Classification"}, "dataset": {"name": "pair class test", "type": "pair-class-test"}, "metrics": [{"type": "cosine_accuracy", "value": 0.916, "name": "Cosine Accuracy"}, {"type": "cosine_accuracy_threshold", "value": 0.844039261341095, "name": "Cosine Accuracy Threshold"}, {"type": "cosine_f1", 
"value": 0.907488986784141, "name": "Cosine F1"}, {"type": "cosine_f1_threshold", "value": 0.8230063319206238, "name": "Cosine F1 Threshold"}, {"type": "cosine_precision", "value": 0.8728813559322034, "name": "Cosine Precision"}, {"type": "cosine_recall", "value": 0.944954128440367, "name": "Cosine Recall"}, {"type": "cosine_ap", "value": 0.96095333014952, "name": "Cosine Ap"}, {"type": "dot_accuracy", "value": 0.916, "name": "Dot Accuracy"}, {"type": "dot_accuracy_threshold", "value": 0.8440393209457397, "name": "Dot Accuracy Threshold"}, {"type": "dot_f1", "value": 0.907488986784141, "name": "Dot F1"}, {"type": "dot_f1_threshold", "value": 0.8230063319206238, "name": "Dot F1 Threshold"}, {"type": "dot_precision", "value": 0.8728813559322034, "name": "Dot Precision"}, {"type": "dot_recall", "value": 0.944954128440367, "name": "Dot Recall"}, {"type": "dot_ap", "value": 0.96095333014952, "name": "Dot Ap"}, {"type": "manhattan_accuracy", "value": 0.916, "name": "Manhattan Accuracy"}, {"type": "manhattan_accuracy_threshold", "value": 8.581160545349121, "name": "Manhattan Accuracy Threshold"}, {"type": "manhattan_f1", "value": 0.907488986784141, "name": "Manhattan F1"}, {"type": "manhattan_f1_threshold", "value": 9.327116012573242, "name": "Manhattan F1 Threshold"}, {"type": "manhattan_precision", "value": 0.8728813559322034, "name": "Manhattan Precision"}, {"type": "manhattan_recall", "value": 0.944954128440367, "name": "Manhattan Recall"}, {"type": "manhattan_ap", "value": 0.9612698712458685, "name": "Manhattan Ap"}, {"type": "euclidean_accuracy", "value": 0.916, "name": "Euclidean Accuracy"}, {"type": "euclidean_accuracy_threshold", "value": 0.5584936141967773, "name": "Euclidean Accuracy Threshold"}, {"type": "euclidean_f1", "value": 0.907488986784141, "name": "Euclidean F1"}, {"type": "euclidean_f1_threshold", "value": 0.594968318939209, "name": "Euclidean F1 Threshold"}, {"type": "euclidean_precision", "value": 0.8728813559322034, "name": "Euclidean Precision"}, {"type": "euclidean_recall", "value": 0.944954128440367, "name": "Euclidean Recall"}, {"type": "euclidean_ap", "value": 0.96095333014952, "name": "Euclidean Ap"}, {"type": "max_accuracy", "value": 0.916, "name": "Max Accuracy"}, {"type": "max_accuracy_threshold", "value": 8.581160545349121, "name": "Max Accuracy Threshold"}, {"type": "max_f1", "value": 0.907488986784141, "name": "Max F1"}, {"type": "max_f1_threshold", "value": 9.327116012573242, "name": "Max F1 Threshold"}, {"type": "max_precision", "value": 0.8728813559322034, "name": "Max Precision"}, {"type": "max_recall", "value": 0.944954128440367, "name": "Max Recall"}, {"type": "max_ap", "value": 0.9612698712458685, "name": "Max Ap"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,579 |
mini1013/master_cate_fd14
|
mini1013
|
text-classification
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] | 2024-11-27T11:42:44Z |
2024-11-27T11:43:04+00:00
| 843 | 0 |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 네스프레소 버츄오 캡슐 머그 멜로지오 1Box (10캡슐) 아이스 라떼 03. 알티시오 제이유
- text: 맥심 티오피 스위트 아메리카노 200ml (주)디에이치솔루션
- text: 굿라이프365 스피아민트 삼각티백 50개입 익모초 삼각티백 50개입 주식회사 굿라이프365
- text: 칠성사이다 제로 ECO 무라벨 300ml 20pet [음료] 커피음료_맥심티오피심플리스무스로스티라떼360mlx20개 옐로우로켓
- text: 동서식품 kanu 미니 마일드 로스트 아메리카노 0.9g 카누디카페인 0.9g 100+20(120개입) 강유팩토리
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.6535632816801699
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
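To make the two-step recipe above concrete, here is a rough conceptual sketch (not the card's actual training code). It assumes `sentence-transformers` and `scikit-learn` are installed and uses product titles taken from the label examples below as stand-in training data.
```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Stand-in training data (titles taken from the label examples in this card)
texts = ["맥심 아이스 커피믹스 110T", "화인바이오 지리산 물하나 2L X 6개"]
labels = [8.0, 2.0]

# Step 1 (in full SetFit: contrastive fine-tuning of the body on text pairs);
# here we simply encode with the already fine-tuned Sentence Transformer body.
body = SentenceTransformer("mini1013/master_domain")
embeddings = body.encode(texts)

# Step 2: fit a LogisticRegression head on the sentence embeddings.
head = LogisticRegression()
head.fit(embeddings, labels)
print(head.predict(body.encode(["아이시스8.0 300ml x 1BOX(20PET) 생수"])))
```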
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 12 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 10.0 | <ul><li>'라인바싸 탄산수 레몬 500ml 20개 자몽 500ml 20개 에이치앤제이원'</li><li>'라인바싸 탄산수 파인애플 500ml 20입 1박스 (추가)+ 플레인 1박스 동아오츠카주식회사'</li><li>'코카콜라 씨그램 레몬 350mlx24페트 탄산수모음 15_트레비 라임 355mlx24CAN 주식회사대성에프앤비'</li></ul> |
| 8.0 | <ul><li>'맥심 아이스 커피믹스 110T +커피믹스 스틱 2T 콤부차_다농원 콤부차 세븐베리 20T+보틀 주식회사 경일종합식품 케이마트몰'</li><li>'[카누]카누 디카페인 미니 0.9g x 120개입 - 1개 HN 다크 로스트 0.9g 100+텀블러(사은품) 하나엔피그먼트'</li><li>'프렌치카페 카페믹스 스테비아 디카페인 10.3g x 100개입 대은상사'</li></ul> |
| 1.0 | <ul><li>'매일유업 매일우유 매일두유 99.9 190ml 12개 12개 테켄종합상사'</li><li>'매일유업 마이너피겨스 유기농 오트밀크 1L 주식회사 지룩'</li><li>'아몬드 브리즈 뉴트리플러스 프로틴 190ml 48개 스타일바이맘'</li></ul> |
| 6.0 | <ul><li>'이제부터 무가당 무설탕 생강진액 생강차 생강즙 생강청 1L ★이벤트★ 3+1(생강청)-박스없음_소비자가 태후자연식품영농조합법인'</li><li>'티젠 콤부차 파인애플 5g x 30개입 샤인머스켓(30개입) 엠비알글로벌'</li><li>'[오설록](신세계 본점)세작 80 g(잎차) 주식회사 에스에스지닷컴'</li></ul> |
| 5.0 | <ul><li>'파낙스 참다음 매실 원액 1.5L/6배희석 로쏘 레몬음료 베이스 1L (주) 이카루스'</li><li>'동원 덴마크 푸르티 포도 주스 120mL x 24개 블라썸플라워'</li><li>'썬업 과일야채샐러드 그린 200ml x 24팩 과일야채 샐러드 레드 200ml x 24팩 하니컴퍼니'</li></ul> |
| 9.0 | <ul><li>'허쉬 코코아 가루 분말 226g W-00652_허쉬코코아파우더226g(파손) 월푸드'</li><li>'기라델리 프리미엄 핫코코아믹스 초콜렛 907g X 1박스(4개) 고고커피'</li><li>'Nestle Hot Cocoa 핫 코코아 믹스 30개 0.28온스 207799 무설탕 무지방_2개들이 팩 더블스토어'</li></ul> |
| 4.0 | <ul><li>'코카콜라 태양의 식후비법 W차 500ml (주)디에이치솔루션'</li><li>'광동 힘찬하루 헛개차 1.5L 1개 대패트_게토레이 레몬 1.5L 12개 대영상사'</li><li>'웰그린 스위츠 복숭아 녹차 음료 340ml 티트라 레몬그린티 제로 500mlX24PET 브론스코리아(주)'</li></ul> |
| 0.0 | <ul><li>'레드불 에너지 드링크 355ml (6개) 카페인 타우린 비타민 알프스 워터 대량 구매 노건'</li><li>'청정원 홍초 석류 1.5L 홍초 블루베리 1.5L (주) 이카루스'</li><li>'청정원 홍초 자몽 900ml 아이스티_티오 아이스티 레몬맛40T 주식회사 경일종합식품 케이마트몰'</li></ul> |
| 7.0 | <ul><li>'동서 티오 아이스티 복숭아 70T +커피믹스 스틱 2T 콤부차_다농원 콤부차 리치 20T+보틀 주식회사 경일종합식품 케이마트몰'</li><li>'립톤 아이스티 복숭아 770g 레몬 770g_자몽 아이스티 키트(2개입) 유니레버코리아 (주)'</li><li>'술픽 하이트진로 토닉워터 600ml 대용량 술벙커 주식회사 농업회사법인 이천지점'</li></ul> |
| 11.0 | <ul><li>'포모나 블루베리스무디 2kg 블루베리농축액 (주)제이제이푸드시스템'</li><li>'베오베 오곡 파우더 1kg 라떼 곡물 미숫가루 분말 티에이치커피 티에이치커피'</li><li>'타코 복숭아 아이스티 /선택 08.블루베리라떼870g 주식회사 커피바바'</li></ul> |
| 3.0 | <ul><li>'[매니저배송] MPRO 장&면역+피부 (5개입) (주)에치와이'</li><li>'요플레 닥터캡슐 베리믹스 130mLx4개/1000배/냉장무배 대명유통'</li><li>'매일바이오 알로에 120g 12개_냉장 매일유업 주식회사'</li></ul> |
| 2.0 | <ul><li>'화인바이오 지리산 물하나 2L X 6개 글로벌웨이브'</li><li>'하이트 천연광천수 미네랄 석수 무라벨 500ml 20pet ◇ 석수 무라벨 500ml 20pet 주식회사 부산종합유통'</li><li>'아이시스8.0 300ml x 1BOX(20PET) 생수 아이시스8.0 200ml(40p) (주)하나유통'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.6536 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_fd14")
# Run inference
preds = model("맥심 티오피 스위트 아메리카노 200ml (주)디에이치솔루션")
```
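As a hedged follow-up (assuming the SetFit 1.x API), the same checkpoint can also score batches and expose per-class scores from the logistic-regression head; both example titles below appear elsewhere in this card.
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mini1013/master_cate_fd14")

texts = [
    "맥심 티오피 스위트 아메리카노 200ml (주)디에이치솔루션",
    "라인바싸 탄산수 레몬 500ml 20개 자몽 500ml 20개 에이치앤제이원",
]
labels = model.predict(texts)        # one label (0.0–11.0) per input text
probs = model.predict_proba(texts)   # per-class scores from the head
print(labels)
print(probs.shape)                   # (len(texts), 12)
```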
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 9.805 | 20 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
| 4.0 | 50 |
| 5.0 | 50 |
| 6.0 | 50 |
| 7.0 | 50 |
| 8.0 | 50 |
| 9.0 | 50 |
| 10.0 | 50 |
| 11.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
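For reference, a comparable run could be configured roughly as follows with the SetFit `Trainer`. This is only a sketch: the two-example dataset and its `text`/`label` columns are placeholders (the card's real training data is not published), and options such as the loss, distance metric and margin are left at the defaults listed above.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder training data; replace with the real labelled product titles.
train_ds = Dataset.from_dict({
    "text": ["라인바싸 탄산수 레몬 500ml 20개", "카누 디카페인 미니 0.9g x 120개입"],
    "label": [10.0, 8.0],
})

model = SetFitModel.from_pretrained("mini1013/master_domain")

args = TrainingArguments(
    batch_size=(512, 512),            # (embedding phase, head phase)
    num_epochs=(20, 20),
    num_iterations=40,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```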
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0106 | 1 | 0.3763 | - |
| 0.5319 | 50 | 0.3216 | - |
| 1.0638 | 100 | 0.1166 | - |
| 1.5957 | 150 | 0.0863 | - |
| 2.1277 | 200 | 0.0548 | - |
| 2.6596 | 250 | 0.0559 | - |
| 3.1915 | 300 | 0.0323 | - |
| 3.7234 | 350 | 0.0301 | - |
| 4.2553 | 400 | 0.0191 | - |
| 4.7872 | 450 | 0.0127 | - |
| 5.3191 | 500 | 0.0059 | - |
| 5.8511 | 550 | 0.0003 | - |
| 6.3830 | 600 | 0.0002 | - |
| 6.9149 | 650 | 0.0001 | - |
| 7.4468 | 700 | 0.0001 | - |
| 7.9787 | 750 | 0.0001 | - |
| 8.5106 | 800 | 0.0001 | - |
| 9.0426 | 850 | 0.0001 | - |
| 9.5745 | 900 | 0.0001 | - |
| 10.1064 | 950 | 0.0001 | - |
| 10.6383 | 1000 | 0.0001 | - |
| 11.1702 | 1050 | 0.0001 | - |
| 11.7021 | 1100 | 0.0001 | - |
| 12.2340 | 1150 | 0.0001 | - |
| 12.7660 | 1200 | 0.0001 | - |
| 13.2979 | 1250 | 0.0 | - |
| 13.8298 | 1300 | 0.0001 | - |
| 14.3617 | 1350 | 0.0001 | - |
| 14.8936 | 1400 | 0.0001 | - |
| 15.4255 | 1450 | 0.0 | - |
| 15.9574 | 1500 | 0.0 | - |
| 16.4894 | 1550 | 0.0 | - |
| 17.0213 | 1600 | 0.0 | - |
| 17.5532 | 1650 | 0.0 | - |
| 18.0851 | 1700 | 0.0 | - |
| 18.6170 | 1750 | 0.0 | - |
| 19.1489 | 1800 | 0.0 | - |
| 19.6809 | 1850 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
|
{"base_model": "mini1013/master_domain", "library_name": "setfit", "metrics": ["metric"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "네스프레소 버츄오 캡슐 머그 멜로지오 1Box (10캡슐) 아이스 라떼 03. 알티시오 제이유"}, {"text": "맥심 티오피 스위트 아메리카노 200ml (주)디에이치솔루션"}, {"text": "굿라이프365 스피아민트 삼각티백 50개입 익모초 삼각티백 50개입 주식회사 굿라이프365"}, {"text": "칠성사이다 제로 ECO 무라벨 300ml 20pet [음료] 커피음료_맥심티오피심플리스무스로스티라떼360mlx20개 옐로우로켓"}, {"text": "동서식품 kanu 미니 마일드 로스트 아메리카노 0.9g 카누디카페인 0.9g 100+20(120개입) 강유팩토리"}], "inference": true, "model-index": [{"name": "SetFit with mini1013/master_domain", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "metric", "value": 0.6535632816801699, "name": "Metric"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 43,580 |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task252
|
Lots-of-LoRAs
| null |
[
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 2025-01-02T14:29:09Z |
2025-01-02T14:29:14+00:00
| 0 | 0 |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task252
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task252_spl_translation_en_tr
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
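In the absence of an official snippet, one plausible way to try this adapter is with PEFT on top of the base model. This is an untested sketch: it assumes the repository is a standard PEFT LoRA adapter and that a plain instruction prompt is acceptable for the en→tr translation task; adjust both as needed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task252"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights

prompt = "Translate the following sentence from English to Turkish: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```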
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task252_spl_translation_en_tr sourced from https://github.com/allenai/natural-instructions
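The split and field names are not documented here, so a reasonable first step is to pull the dataset with `datasets` and inspect it before building prompts:
```python
from datasets import load_dataset

ds = load_dataset("Lots-of-LoRAs/task252_spl_translation_en_tr")
print(ds)                       # shows the available splits
first_split = next(iter(ds))    # e.g. "train", if that split exists
print(ds[first_split][0])       # shows the field names of one example
```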
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{brüelgabrielsson2024compressserveservingthousands,
      title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
      author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
      year={2024},
      eprint={2407.00066},
      archivePrefix={arXiv},
      primaryClass={cs.DC},
      url={https://arxiv.org/abs/2407.00066},
}
```
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
|
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "language": "en", "library_name": "pytorch", "license": "mit"}
|
task
|
[
"TRANSLATION"
] | 43,581 |