id
stringlengths 6
113
| author
stringlengths 2
36
| task_category
stringclasses 42
values | tags
listlengths 1
4.05k
| created_time
timestamp[ns, tz=UTC]date 2022-03-02 23:29:04
2025-04-10 08:38:38
| last_modified
stringdate 2020-05-14 13:13:12
2025-04-19 04:15:39
| downloads
int64 0
118M
| likes
int64 0
4.86k
| README
stringlengths 30
1.01M
| matched_bigbio_names
listlengths 1
8
⌀ | is_bionlp
stringclasses 3
values | model_cards
stringlengths 0
1M
| metadata
stringlengths 2
698k
| source
stringclasses 2
values | matched_task
listlengths 1
10
⌀ | __index_level_0__
int64 0
46.9k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
gokuls/hBERTv1_no_pretrain_stsb
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-05-29T11:16:32Z |
2023-06-15T11:34:39+00:00
| 8 | 0 |
---
datasets:
- glue
language:
- en
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: hBERTv1_no_pretrain_stsb
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- type: spearmanr
value: 0.21371019463671115
name: Spearmanr
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_no_pretrain_stsb
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1733
- Pearson: 0.2374
- Spearmanr: 0.2137
- Combined Score: 0.2256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.3601 | 1.0 | 60 | 2.6639 | 0.1059 | 0.1080 | 0.1069 |
| 1.9983 | 2.0 | 120 | 2.1733 | 0.2374 | 0.2137 | 0.2256 |
| 1.7079 | 3.0 | 180 | 2.5000 | 0.1872 | 0.1967 | 0.1920 |
| 1.3775 | 4.0 | 240 | 3.1203 | 0.2177 | 0.2251 | 0.2214 |
| 1.1218 | 5.0 | 300 | 2.8260 | 0.2609 | 0.2598 | 0.2603 |
| 0.8882 | 6.0 | 360 | 2.5413 | 0.3099 | 0.3062 | 0.3081 |
| 0.728 | 7.0 | 420 | 2.4024 | 0.3429 | 0.3468 | 0.3448 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_no_pretrain_stsb
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1733
- Pearson: 0.2374
- Spearmanr: 0.2137
- Combined Score: 0.2256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.3601 | 1.0 | 60 | 2.6639 | 0.1059 | 0.1080 | 0.1069 |
| 1.9983 | 2.0 | 120 | 2.1733 | 0.2374 | 0.2137 | 0.2256 |
| 1.7079 | 3.0 | 180 | 2.5000 | 0.1872 | 0.1967 | 0.1920 |
| 1.3775 | 4.0 | 240 | 3.1203 | 0.2177 | 0.2251 | 0.2214 |
| 1.1218 | 5.0 | 300 | 2.8260 | 0.2609 | 0.2598 | 0.2603 |
| 0.8882 | 6.0 | 360 | 2.5413 | 0.3099 | 0.3062 | 0.3081 |
| 0.728 | 7.0 | 420 | 2.4024 | 0.3429 | 0.3468 | 0.3448 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"datasets": ["glue"], "language": ["en"], "metrics": ["spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hBERTv1_no_pretrain_stsb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE STSB", "type": "glue", "config": "stsb", "split": "validation", "args": "stsb"}, "metrics": [{"type": "spearmanr", "value": 0.21371019463671115, "name": "Spearmanr"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,261 |
kotoba-tech/kotoba-whisper-bilingual-v1.0-ggml
|
kotoba-tech
|
automatic-speech-recognition
|
[
"audio",
"automatic-speech-recognition",
"ja",
"license:apache-2.0",
"region:us"
] | 2024-09-29T03:27:56Z |
2024-09-29T06:00:03+00:00
| 0 | 1 |
---
language: ja
license: apache-2.0
tags:
- audio
- automatic-speech-recognition
---
# Kotoba-Whisper-Bilingual: kotoba-whisper-bilingual-v1.0 for Whisper cpp
This repository contains the model weights for [kotoba-tech/kotoba-whisper-bilingual-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-bilingual-v1.0)
converted to [GGML](https://github.com/ggerganov/ggml) format. GGML is the weight format expected by C/C++ packages
such as [Whisper.cpp](https://github.com/ggerganov/whisper.cpp), for which we provide an example below.
## Usage
Kotoba-Whisper can be run with the [Whisper.cpp](https://github.com/ggerganov/whisper.cpp) package with the original
sequential long-form transcription algorithm.
Steps for getting started:
1. Clone the Whisper.cpp repository:
```
git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp
make
```
2. Download the GGML weights for `kotoba-tech/kotoba-whisper-bilingual-v1.0`:
```bash
wget https://huggingface.co/kotoba-tech/kotoba-whisper-bilingual-v1.0-ggml/resolve/main/ggml-kotoba-whisper-bilingual-v1.0.bin -P ./models
```
3. Run inference using the provided sample audio:
- Download sample audio. Note that it runs only with 16-bit WAV files, so make sure to convert your input before running the tool. For example, you can use ffmpeg like below.
```bash
wget https://huggingface.co/datasets/japanese-asr/en_asr.esb_eval/resolve/main/sample.wav -O sample_en.wav
wget https://huggingface.co/datasets/japanese-asr/ja_asr.jsut_basic5000/resolve/main/sample.flac -O sample_ja.flac
ffmpeg -i sample_en.wav -ar 16000 -ac 1 -c:a pcm_s16le sample_en_fixed.wav
ffmpeg -i sample_ja.flac -ar 16000 -ac 1 -c:a pcm_s16le sample_ja_fixed.wav
```
- ***Japanese ASR***
```
make -j && ./main -m models/ggml-kotoba-whisper-bilingual-v1.0.bin -l ja -f sample_ja_fixed.wav --output-file "output.transcribe.ja" --output-json
```
- ***English ASR***
```
make -j && ./main -m models/ggml-kotoba-whisper-bilingual-v1.0.bin -l ja -f sample_en_fixed.wav --output-file "output.transcribe.en" --output-json
```
- ***Japanese (speech) to English (text) Translation***
```
make -j && ./main -m models/ggml-kotoba-whisper-bilingual-v1.0.bin -tr -l en -f sample_ja_fixed.wav --output-file "output.translate.en" --output-json
```
- ***English (speech) to Japanese (text) Translation***
```
make -j && ./main -m models/ggml-kotoba-whisper-bilingual-v1.0.bin -tr -l ja -f sample_en_fixed.wav --output-file "output.translate.ja" --output-json
```
### Benchmark
We measure the inference speed of different kotoba-whisper-v2.0 implementations with four different Japanese speech audio on MacBook Pro with the following spec:
- Apple M2 Pro
- 32GB
- 14-inch, 2023
- OS Sonoma Version 14.4.1 (23E224)
| audio file | audio duration (min)| [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-ggml) (sec) | [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-faster) (sec)| [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0) (sec)
|--------|------|-----|------|-----|
|audio 1 | 50.3 | 581 | 2601 | 807 |
|audio 2 | 5.6 | 41 | 73 | 61 |
|audio 3 | 4.9 | 30 | 141 | 54 |
|audio 4 | 5.6 | 35 | 126 | 69 |
Scripts to re-run the experiment can be found bellow:
* [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-ggml/blob/main/benchmark.sh)
* [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-faster/blob/main/benchmark.sh)
* [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0/blob/main/benchmark.sh)
Currently whisper.cpp and faster-whisper support the [sequential long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#sequential-long-form),
and only Huggingface pipeline supports the [chunked long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#chunked-long-form), which we empirically
found better than the sequnential long-form decoding.
### Quantized Model
To use the quantized model, download the quantized GGML weights:
```bash
wget https://huggingface.co/kotoba-tech/kotoba-whisper-bilingual-v1.0-ggml/resolve/main/ggml-kotoba-whisper-bilingual-v1.0-q5_0.bin -P ./models
```
Run inference on the sample audio:
```bash
make -j && ./main -m models/ggml-kotoba-whisper-bilingual-v1.0-q5_0.bin -l ja -f sample_ja_fixed.wav --output-file "output.transcribe.ja.q" --output-json
```
Note that the benchmark results are almost identical to the raw non-quantized model weight.
### Conversion details
The original model was converted with the following command:
```
# clone OpenAI whisper and whisper.cpp
git clone https://github.com/openai/whisper
git clone https://github.com/ggerganov/whisper.cpp
# get the models
cd whisper.cpp/models
git clone https://huggingface.co/kotoba-tech/kotoba-whisper-bilingual-v1.0
# convert to ggml
python3 ./convert-h5-to-ggml.py ./kotoba-whisper-bilingual-v1.0/ ../../whisper .
mv ggml-model.bin ggml-kotoba-whisper-bilingual-v1.0.bin
# quantize ggml model
cd ../
make quantize
./quantize models/ggml-kotoba-whisper-bilingual-v1.0.bin models/ggml-kotoba-whisper-bilingual-v1.0-q5_0.bin q5_0
```
## Model Details
For more information about the kotoba-whisper-v2.0, refer to the original [model card](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0).
| null |
Non_BioNLP
|
# Kotoba-Whisper-Bilingual: kotoba-whisper-bilingual-v1.0 for Whisper cpp
This repository contains the model weights for [kotoba-tech/kotoba-whisper-bilingual-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-bilingual-v1.0)
converted to [GGML](https://github.com/ggerganov/ggml) format. GGML is the weight format expected by C/C++ packages
such as [Whisper.cpp](https://github.com/ggerganov/whisper.cpp), for which we provide an example below.
## Usage
Kotoba-Whisper can be run with the [Whisper.cpp](https://github.com/ggerganov/whisper.cpp) package with the original
sequential long-form transcription algorithm.
Steps for getting started:
1. Clone the Whisper.cpp repository:
```
git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp
make
```
2. Download the GGML weights for `kotoba-tech/kotoba-whisper-bilingual-v1.0`:
```bash
wget https://huggingface.co/kotoba-tech/kotoba-whisper-bilingual-v1.0-ggml/resolve/main/ggml-kotoba-whisper-bilingual-v1.0.bin -P ./models
```
3. Run inference using the provided sample audio:
- Download sample audio. Note that it runs only with 16-bit WAV files, so make sure to convert your input before running the tool. For example, you can use ffmpeg like below.
```bash
wget https://huggingface.co/datasets/japanese-asr/en_asr.esb_eval/resolve/main/sample.wav -O sample_en.wav
wget https://huggingface.co/datasets/japanese-asr/ja_asr.jsut_basic5000/resolve/main/sample.flac -O sample_ja.flac
ffmpeg -i sample_en.wav -ar 16000 -ac 1 -c:a pcm_s16le sample_en_fixed.wav
ffmpeg -i sample_ja.flac -ar 16000 -ac 1 -c:a pcm_s16le sample_ja_fixed.wav
```
- ***Japanese ASR***
```
make -j && ./main -m models/ggml-kotoba-whisper-bilingual-v1.0.bin -l ja -f sample_ja_fixed.wav --output-file "output.transcribe.ja" --output-json
```
- ***English ASR***
```
make -j && ./main -m models/ggml-kotoba-whisper-bilingual-v1.0.bin -l ja -f sample_en_fixed.wav --output-file "output.transcribe.en" --output-json
```
- ***Japanese (speech) to English (text) Translation***
```
make -j && ./main -m models/ggml-kotoba-whisper-bilingual-v1.0.bin -tr -l en -f sample_ja_fixed.wav --output-file "output.translate.en" --output-json
```
- ***English (speech) to Japanese (text) Translation***
```
make -j && ./main -m models/ggml-kotoba-whisper-bilingual-v1.0.bin -tr -l ja -f sample_en_fixed.wav --output-file "output.translate.ja" --output-json
```
### Benchmark
We measure the inference speed of different kotoba-whisper-v2.0 implementations with four different Japanese speech audio on MacBook Pro with the following spec:
- Apple M2 Pro
- 32GB
- 14-inch, 2023
- OS Sonoma Version 14.4.1 (23E224)
| audio file | audio duration (min)| [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-ggml) (sec) | [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-faster) (sec)| [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0) (sec)
|--------|------|-----|------|-----|
|audio 1 | 50.3 | 581 | 2601 | 807 |
|audio 2 | 5.6 | 41 | 73 | 61 |
|audio 3 | 4.9 | 30 | 141 | 54 |
|audio 4 | 5.6 | 35 | 126 | 69 |
Scripts to re-run the experiment can be found bellow:
* [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-ggml/blob/main/benchmark.sh)
* [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-faster/blob/main/benchmark.sh)
* [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0/blob/main/benchmark.sh)
Currently whisper.cpp and faster-whisper support the [sequential long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#sequential-long-form),
and only Huggingface pipeline supports the [chunked long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#chunked-long-form), which we empirically
found better than the sequnential long-form decoding.
### Quantized Model
To use the quantized model, download the quantized GGML weights:
```bash
wget https://huggingface.co/kotoba-tech/kotoba-whisper-bilingual-v1.0-ggml/resolve/main/ggml-kotoba-whisper-bilingual-v1.0-q5_0.bin -P ./models
```
Run inference on the sample audio:
```bash
make -j && ./main -m models/ggml-kotoba-whisper-bilingual-v1.0-q5_0.bin -l ja -f sample_ja_fixed.wav --output-file "output.transcribe.ja.q" --output-json
```
Note that the benchmark results are almost identical to the raw non-quantized model weight.
### Conversion details
The original model was converted with the following command:
```
# clone OpenAI whisper and whisper.cpp
git clone https://github.com/openai/whisper
git clone https://github.com/ggerganov/whisper.cpp
# get the models
cd whisper.cpp/models
git clone https://huggingface.co/kotoba-tech/kotoba-whisper-bilingual-v1.0
# convert to ggml
python3 ./convert-h5-to-ggml.py ./kotoba-whisper-bilingual-v1.0/ ../../whisper .
mv ggml-model.bin ggml-kotoba-whisper-bilingual-v1.0.bin
# quantize ggml model
cd ../
make quantize
./quantize models/ggml-kotoba-whisper-bilingual-v1.0.bin models/ggml-kotoba-whisper-bilingual-v1.0-q5_0.bin q5_0
```
## Model Details
For more information about the kotoba-whisper-v2.0, refer to the original [model card](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0).
|
{"language": "ja", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition"]}
|
task
|
[
"TRANSLATION"
] | 45,262 |
predibase/conllpp
|
predibase
|
text-generation
|
[
"peft",
"safetensors",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | 2024-02-19T23:16:53Z |
2024-02-21T19:13:37+00:00
| 214 | 4 |
---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
pipeline_tag: text-generation
---
Description: Named entity recognition\
Original dataset: https://huggingface.co/datasets/conllpp \
---\
Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \
The adapter_category is Named Entity Recognition and the name is Named Entity Recognition (CoNLL++)\
---\
Sample input: Your task is a Named Entity Recognition (NER) task. Predict the category of each entity, then place the entity into the list associated with the category in an output JSON payload. Below is an example:\nInput: EU rejects German call to boycott British lamb . Output: {"person": [], "organization": ["EU"], "location": [], "miscellaneous": ["German", "British"]}\nNow, complete the task.\nInput: By the close Yorkshire had turned that into a 37-run advantage but off-spinner Such had scuttled their hopes , taking four for 24 in 48 balls and leaving them hanging on 119 for five and praying for rain . Output: \
---\
Sample output: {"person": ["Such"], "organization": ["Yorkshire"], "location": [], "miscellaneous": []}\
---\
Try using this adapter yourself!
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/conllpp"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```
| null |
Non_BioNLP
|
Description: Named entity recognition\
Original dataset: https://huggingface.co/datasets/conllpp \
---\
Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \
The adapter_category is Named Entity Recognition and the name is Named Entity Recognition (CoNLL++)\
---\
Sample input: Your task is a Named Entity Recognition (NER) task. Predict the category of each entity, then place the entity into the list associated with the category in an output JSON payload. Below is an example:\nInput: EU rejects German call to boycott British lamb . Output: {"person": [], "organization": ["EU"], "location": [], "miscellaneous": ["German", "British"]}\nNow, complete the task.\nInput: By the close Yorkshire had turned that into a 37-run advantage but off-spinner Such had scuttled their hopes , taking four for 24 in 48 balls and leaving them hanging on 119 for five and praying for rain . Output: \
---\
Sample output: {"person": ["Such"], "organization": ["Yorkshire"], "location": [], "miscellaneous": []}\
---\
Try using this adapter yourself!
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/conllpp"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```
|
{"base_model": "mistralai/Mistral-7B-v0.1", "library_name": "peft", "pipeline_tag": "text-generation"}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 45,263 |
yseop/roberta-base-finance-hypernym-identification
|
yseop
|
sentence-similarity
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2108.09485",
"arxiv:1908.10084",
"autotrain_compatible",
"text-embeddings-inference",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-05-14T20:03:57+00:00
| 193 | 18 |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
inference: false
---
<div style="clear: both;">
<div style="float: left; margin-right 1em;">
<h1><strong>FinISH (Finance-Identifying Sroberta for Hypernyms)</strong></h1>
</div>
<div>
<h2><img src="https://pbs.twimg.com/profile_images/1333760924914753538/fQL4zLUw_400x400.png" alt="" width="25" height="25"></h2>
</div>
</div>
We present FinISH, a [SRoBERTa](https://huggingface.co/sentence-transformers/nli-roberta-base-v2) base model fine-tuned on the [FIBO ontology](https://spec.edmcouncil.org/fibo/) dataset for domain-specific representation learning on the [**Semantic Search**](https://www.sbert.net/examples/applications/semantic-search/README.html) downstream task.
The model is an implementation of the following paper: [Yseop at FinSim-3 Shared Task 2021: Specializing Financial Domain Learning with Phrase Representations](https://arxiv.org/abs/2108.09485)
## SRoBERTa Model Architecture
Sentence-RoBERTa (SRoBERTa) is a modification of the pretrained RoBERTa network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with RoBERTa to about 5 seconds with SRoBERTa, while maintaining the accuracy from RoBERTa. SRoBERTa has been evaluated on common STS tasks and transfer learning tasks, where it outperforms other state-of-the-art sentence embeddings methods.
Paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/pdf/1908.10084.pdf).
Authors: *Nils Reimers and Iryna Gurevych*.
## Details on the downstream task (Semantic Search for Text Classification)
The objective of this task is to correctly classify a given term in the financial domain according to its prototypical hypernym in a list of available hypernyms:
* Bonds
* Forward
* Funds
* Future
* MMIs (Money Market Instruments)
* Option
* Stocks
* Swap
* Equity Index
* Credit Index
* Securities restrictions
* Parametric schedules
* Debt pricing and yields
* Credit Events
* Stock Corporation
* Central Securities Depository
* Regulatory Agency
This kind-based approach relies on identifying the closest hypernym semantically to the given term (even if they possess common properties with other hypernyms).
#### Data Description
The data is a scraped list of term definitions from the FIBO ontology website where each definition has been mapped to its closest hypernym from the proposed labels.
For multi-sentence definitions, we applied sentence-splitting by punctuation delimiters. We also applied lowercase transformation on all input data.
#### Data Instances
The dataset contains a label representing the hypernym of the given definition.
```json
{
'label': 'bonds',
'definition': 'callable convertible bond is a kind of callable bond, convertible bond.'
}
```
#### Data Fields
**label**: Can be one of the 17 predefined hypernyms.
**definition**: Financial term definition relating to a concept or object in the financial domain.
#### Data Splits
The data contains training data with **317101** entries.
#### Test set metrics
The representational learning model is evaluated on a representative test set with 20% of the entries. The test set is scored based on the following metrics:
* Average Accuracy
* Mean Rank (position of the correct label in a set of 5 model predictions)
We evaluate FinISH according to these metrics, where it outperforms other state-of-the-art sentence embeddings methods in this task.
* Average Accuracy: **0.73**
* Mean Rank: **1.61**
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
git clone https://github.com/huggingface/transformers.git
pip install -q ./transformers
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
import torch
model = SentenceTransformer('yseop/roberta-base-finance-hypernym-identification')
# Our corpus containing the list of hypernym labels
hypernyms = ['Bonds',
\t\t\t'Forward',
\t\t\t'Funds',
\t\t\t'Future',
\t\t\t'MMIs',
\t\t\t'Option',
\t\t\t'Stocks',
\t\t\t'Swap',
\t\t\t'Equity Index',
\t\t\t'Credit Index',
\t\t\t'Securities restrictions',
\t\t\t'Parametric schedules',
\t\t\t'Debt pricing and yields',
\t\t\t'Credit Events',
\t\t\t'Stock Corporation',
\t\t\t'Central Securities Depository',
\t\t\t'Regulatory Agency']
hypernym_embeddings = model.encode(hypernyms, convert_to_tensor=True)
# Query sentences are financial terms to match to the predefined labels
queries = ['Convertible bond', 'weighted average coupon', 'Restriction 144-A']
# Find the closest 5 hypernyms of the corpus for each query sentence based on cosine similarity
top_k = min(5, len(hypernyms))
for query in queries:
query_embedding = model.encode(query, convert_to_tensor=True)
# We use cosine-similarity and torch.topk to find the highest 5 scores
cos_scores = util.pytorch_cos_sim(query_embedding, hypernym_embeddings)[0]
top_results = torch.topk(cos_scores, k=top_k)
print("\
\
======================\
\
")
print("Query:", query)
print("\
Top 5 most similar hypernyms:")
for score, idx in zip(top_results[0], top_results[1]):
print(hypernyms[idx], "(Score: {:.4f})".format(score))
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Query sentences are financial terms to match to the predefined labels
queries = ['Convertible bond', 'weighted average coupon', 'Restriction 144-A']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('yseop/roberta-base-finance-hypernym-identification')
model = AutoModel.from_pretrained('yseop/roberta-base-finance-hypernym-identification')
# Tokenize sentences
encoded_input = tokenizer(queries, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
query_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Query embeddings:")
print(query_embeddings)
```
**Created by:** [Yseop](https://www.yseop.com/) | Pioneer in Natural Language Generation (NLG) technology. Scaling human expertise through Natural Language Generation.
| null |
Non_BioNLP
|
<div style="clear: both;">
<div style="float: left; margin-right 1em;">
<h1><strong>FinISH (Finance-Identifying Sroberta for Hypernyms)</strong></h1>
</div>
<div>
<h2><img src="https://pbs.twimg.com/profile_images/1333760924914753538/fQL4zLUw_400x400.png" alt="" width="25" height="25"></h2>
</div>
</div>
We present FinISH, a [SRoBERTa](https://huggingface.co/sentence-transformers/nli-roberta-base-v2) base model fine-tuned on the [FIBO ontology](https://spec.edmcouncil.org/fibo/) dataset for domain-specific representation learning on the [**Semantic Search**](https://www.sbert.net/examples/applications/semantic-search/README.html) downstream task.
The model is an implementation of the following paper: [Yseop at FinSim-3 Shared Task 2021: Specializing Financial Domain Learning with Phrase Representations](https://arxiv.org/abs/2108.09485)
## SRoBERTa Model Architecture
Sentence-RoBERTa (SRoBERTa) is a modification of the pretrained RoBERTa network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with RoBERTa to about 5 seconds with SRoBERTa, while maintaining the accuracy from RoBERTa. SRoBERTa has been evaluated on common STS tasks and transfer learning tasks, where it outperforms other state-of-the-art sentence embeddings methods.
Paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/pdf/1908.10084.pdf).
Authors: *Nils Reimers and Iryna Gurevych*.
## Details on the downstream task (Semantic Search for Text Classification)
The objective of this task is to correctly classify a given term in the financial domain according to its prototypical hypernym in a list of available hypernyms:
* Bonds
* Forward
* Funds
* Future
* MMIs (Money Market Instruments)
* Option
* Stocks
* Swap
* Equity Index
* Credit Index
* Securities restrictions
* Parametric schedules
* Debt pricing and yields
* Credit Events
* Stock Corporation
* Central Securities Depository
* Regulatory Agency
This kind-based approach relies on identifying the closest hypernym semantically to the given term (even if they possess common properties with other hypernyms).
#### Data Description
The data is a scraped list of term definitions from the FIBO ontology website where each definition has been mapped to its closest hypernym from the proposed labels.
For multi-sentence definitions, we applied sentence-splitting by punctuation delimiters. We also applied lowercase transformation on all input data.
#### Data Instances
The dataset contains a label representing the hypernym of the given definition.
```json
{
'label': 'bonds',
'definition': 'callable convertible bond is a kind of callable bond, convertible bond.'
}
```
#### Data Fields
**label**: Can be one of the 17 predefined hypernyms.
**definition**: Financial term definition relating to a concept or object in the financial domain.
#### Data Splits
The data contains training data with **317101** entries.
#### Test set metrics
The representational learning model is evaluated on a representative test set with 20% of the entries. The test set is scored based on the following metrics:
* Average Accuracy
* Mean Rank (position of the correct label in a set of 5 model predictions)
We evaluate FinISH according to these metrics, where it outperforms other state-of-the-art sentence embeddings methods in this task.
* Average Accuracy: **0.73**
* Mean Rank: **1.61**
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
git clone https://github.com/huggingface/transformers.git
pip install -q ./transformers
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
import torch
model = SentenceTransformer('yseop/roberta-base-finance-hypernym-identification')
# Our corpus containing the list of hypernym labels
hypernyms = ['Bonds',
\t\t\t'Forward',
\t\t\t'Funds',
\t\t\t'Future',
\t\t\t'MMIs',
\t\t\t'Option',
\t\t\t'Stocks',
\t\t\t'Swap',
\t\t\t'Equity Index',
\t\t\t'Credit Index',
\t\t\t'Securities restrictions',
\t\t\t'Parametric schedules',
\t\t\t'Debt pricing and yields',
\t\t\t'Credit Events',
\t\t\t'Stock Corporation',
\t\t\t'Central Securities Depository',
\t\t\t'Regulatory Agency']
hypernym_embeddings = model.encode(hypernyms, convert_to_tensor=True)
# Query sentences are financial terms to match to the predefined labels
queries = ['Convertible bond', 'weighted average coupon', 'Restriction 144-A']
# Find the closest 5 hypernyms of the corpus for each query sentence based on cosine similarity
top_k = min(5, len(hypernyms))
for query in queries:
query_embedding = model.encode(query, convert_to_tensor=True)
# We use cosine-similarity and torch.topk to find the highest 5 scores
cos_scores = util.pytorch_cos_sim(query_embedding, hypernym_embeddings)[0]
top_results = torch.topk(cos_scores, k=top_k)
print("\
\
======================\
\
")
print("Query:", query)
print("\
Top 5 most similar hypernyms:")
for score, idx in zip(top_results[0], top_results[1]):
print(hypernyms[idx], "(Score: {:.4f})".format(score))
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Query sentences are financial terms to match to the predefined labels
queries = ['Convertible bond', 'weighted average coupon', 'Restriction 144-A']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('yseop/roberta-base-finance-hypernym-identification')
model = AutoModel.from_pretrained('yseop/roberta-base-finance-hypernym-identification')
# Tokenize sentences
encoded_input = tokenizer(queries, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
query_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Query embeddings:")
print(query_embeddings)
```
**Created by:** [Yseop](https://www.yseop.com/) | Pioneer in Natural Language Generation (NLG) technology. Scaling human expertise through Natural Language Generation.
|
{"pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "inference": false}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,264 |
TechxGenus/gemma-1.1-7b-it-GPTQ
|
TechxGenus
|
text-generation
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | 2024-04-08T07:45:16Z |
2024-04-08T08:31:22+00:00
| 6 | 1 |
---
library_name: transformers
license: gemma
widget:
- messages:
- role: user
content: How does the brain work?
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
GPTQ quantized version of gemma-1.1-7b-it model.
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the latest 7B instruct version of the Gemma model. Here you can find other models in the Gemma family:
| | Base | Instruct |
|----|----------------------------------------------------|----------------------------------------------------------------------|
| 2B | [gemma-2b](https://huggingface.co/google/gemma-2b) | [gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) |
| 7B | [gemma-7b](https://huggingface.co/google/gemma-7b) | [**gemma-1.1-7b-it**](https://huggingface.co/google/gemma-1.1-7b-it) |
**Release Notes**
This is Gemma 1.1 7B (IT), an update over the original instruction-tuned Gemma release.
Gemma 1.1 was trained using a novel RLHF method, leading to substantial gains on quality, coding capabilities, factuality, instruction following and multi-turn conversation quality. We also fixed a bug in multi-turn conversations, and made sure that model responses don't always start with `"Sure,"`.
We believe this release represents an improvement for most use cases, but we encourage users to test in their particular applications. The previous model [will continue to be available in the same repo](https://huggingface.co/google/gemma-7b-it). We appreciate the enthusiastic adoption of Gemma, and we continue to welcome all feedback from the community.
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
#### Running the model on a CPU
As explained below, we recommend `torch.bfloat16` as the default dtype. You can use [a different precision](#precisions) if necessary.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
quantization_config=quantization_config
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
quantization_config=quantization_config
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
#### Running the model in JAX / Flax
Use the `flax` branch of the repository:
```python
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxGemmaForCausalLM
model_id = "google/gemma-1.1-7b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "left"
model, params = FlaxGemmaForCausalLM.from_pretrained(
model_id,
dtype=jnp.bfloat16,
revision="flax",
_do_init=False,
)
inputs = tokenizer("Valencia and Málaga are", return_tensors="np", padding=True)
output = model.generate(**inputs, params=params, max_new_tokens=20, do_sample=False)
output_text = tokenizer.batch_decode(output.sequences, skip_special_tokens=True)
```
[Check this notebook](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/jax_gemma.ipynb) for a comprehensive walkthrough on how to parallelize JAX inference.
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-1.1-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
### Fine-tuning
You can find some fine-tuning scripts under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt them to this model, simply change the model-id to `google/gemma-1.1-7b-it`.
We provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on the English quotes dataset
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safely in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
The pre-trained base models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 2B | Gemma PT 7B |
| ------------------------------ | ------------- | ----------- | ----------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23.0 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | ----------- |
| **Average** | | **44.9** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 1.0
| Benchmark | Metric | Gemma 1.0 IT 2B | Gemma 1.0 IT 7B |
| ------------------------ | ------------- | --------------- | --------------- |
| [RealToxicity][realtox] | average | 6.86 | 7.90 |
| [BOLD][bold] | | 45.57 | 49.08 |
| [CrowS-Pairs][crows] | top-1 | 45.82 | 51.33 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig][bbq] | top-1 | 54.62 | 71.99 |
| [Winogender][winogender] | top-1 | 51.25 | 54.17 |
| [TruthfulQA][truthfulqa] | | 44.84 | 31.81 |
| [Winobias 1_2][winobias] | | 56.12 | 59.09 |
| [Winobias 2_2][winobias] | | 91.10 | 92.23 |
| [Toxigen][toxigen] | | 29.77 | 39.59 |
| ------------------------ | ------------- | --------------- | --------------- |
#### Gemma 1.1
| Benchmark | Metric | Gemma 1.1 IT 2B | Gemma 1.1 IT 7B |
| ------------------------ | ------------- | --------------- | --------------- |
| [RealToxicity][realtox] | average | 7.03 | 8.04 |
| [BOLD][bold] | | 47.76 | |
| [CrowS-Pairs][crows] | top-1 | 45.89 | 49.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 58.97 | 86.06 |
| [BBQ Disambig][bbq] | top-1 | 53.90 | 85.08 |
| [Winogender][winogender] | top-1 | 50.14 | 57.64 |
| [TruthfulQA][truthfulqa] | | 44.24 | 45.34 |
| [Winobias 1_2][winobias] | | 55.93 | 59.22 |
| [Winobias 2_2][winobias] | | 89.46 | 89.2 |
| [Toxigen][toxigen] | | 29.64 | 38.75 |
| ------------------------ | ------------- | --------------- | --------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
| null |
Non_BioNLP
|
GPTQ quantized version of gemma-1.1-7b-it model.
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the latest 7B instruct version of the Gemma model. Here you can find other models in the Gemma family:
| | Base | Instruct |
|----|----------------------------------------------------|----------------------------------------------------------------------|
| 2B | [gemma-2b](https://huggingface.co/google/gemma-2b) | [gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) |
| 7B | [gemma-7b](https://huggingface.co/google/gemma-7b) | [**gemma-1.1-7b-it**](https://huggingface.co/google/gemma-1.1-7b-it) |
**Release Notes**
This is Gemma 1.1 7B (IT), an update over the original instruction-tuned Gemma release.
Gemma 1.1 was trained using a novel RLHF method, leading to substantial gains on quality, coding capabilities, factuality, instruction following and multi-turn conversation quality. We also fixed a bug in multi-turn conversations, and made sure that model responses don't always start with `"Sure,"`.
We believe this release represents an improvement for most use cases, but we encourage users to test in their particular applications. The previous model [will continue to be available in the same repo](https://huggingface.co/google/gemma-7b-it). We appreciate the enthusiastic adoption of Gemma, and we continue to welcome all feedback from the community.
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
#### Running the model on a CPU
As explained below, we recommend `torch.bfloat16` as the default dtype. You can use [a different precision](#precisions) if necessary.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
quantization_config=quantization_config
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
quantization_config=quantization_config
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
#### Running the model in JAX / Flax
Use the `flax` branch of the repository:
```python
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxGemmaForCausalLM
model_id = "google/gemma-1.1-7b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "left"
model, params = FlaxGemmaForCausalLM.from_pretrained(
model_id,
dtype=jnp.bfloat16,
revision="flax",
_do_init=False,
)
inputs = tokenizer("Valencia and Málaga are", return_tensors="np", padding=True)
output = model.generate(**inputs, params=params, max_new_tokens=20, do_sample=False)
output_text = tokenizer.batch_decode(output.sequences, skip_special_tokens=True)
```
[Check this notebook](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/jax_gemma.ipynb) for a comprehensive walkthrough on how to parallelize JAX inference.
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-1.1-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
### Fine-tuning
You can find some fine-tuning scripts under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt them to this model, simply change the model-id to `google/gemma-1.1-7b-it`.
We provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on the English quotes dataset
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safely in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
The pre-trained base models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 2B | Gemma PT 7B |
| ------------------------------ | ------------- | ----------- | ----------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23.0 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | ----------- |
| **Average** | | **44.9** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 1.0
| Benchmark | Metric | Gemma 1.0 IT 2B | Gemma 1.0 IT 7B |
| ------------------------ | ------------- | --------------- | --------------- |
| [RealToxicity][realtox] | average | 6.86 | 7.90 |
| [BOLD][bold] | | 45.57 | 49.08 |
| [CrowS-Pairs][crows] | top-1 | 45.82 | 51.33 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig][bbq] | top-1 | 54.62 | 71.99 |
| [Winogender][winogender] | top-1 | 51.25 | 54.17 |
| [TruthfulQA][truthfulqa] | | 44.84 | 31.81 |
| [Winobias 1_2][winobias] | | 56.12 | 59.09 |
| [Winobias 2_2][winobias] | | 91.10 | 92.23 |
| [Toxigen][toxigen] | | 29.77 | 39.59 |
| ------------------------ | ------------- | --------------- | --------------- |
#### Gemma 1.1
| Benchmark | Metric | Gemma 1.1 IT 2B | Gemma 1.1 IT 7B |
| ------------------------ | ------------- | --------------- | --------------- |
| [RealToxicity][realtox] | average | 7.03 | 8.04 |
| [BOLD][bold] | | 47.76 | |
| [CrowS-Pairs][crows] | top-1 | 45.89 | 49.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 58.97 | 86.06 |
| [BBQ Disambig][bbq] | top-1 | 53.90 | 85.08 |
| [Winogender][winogender] | top-1 | 50.14 | 57.64 |
| [TruthfulQA][truthfulqa] | | 44.24 | 45.34 |
| [Winobias 1_2][winobias] | | 55.93 | 59.22 |
| [Winobias 2_2][winobias] | | 89.46 | 89.2 |
| [Toxigen][toxigen] | | 29.64 | 38.75 |
| ------------------------ | ------------- | --------------- | --------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
|
{"library_name": "transformers", "license": "gemma", "widget": [{"messages": [{"role": "user", "content": "How does the brain work?"}]}], "inference": {"parameters": {"max_new_tokens": 200}}, "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 45,265 |
philschmid/distilbert-base-multilingual-cased-sentiment-2
|
philschmid
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2022-01-24T15:08:50+00:00
| 2,313 | 4 |
---
datasets:
- amazon_reviews_multi
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-multilingual-cased-sentiment-2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: all_languages
metrics:
- type: accuracy
value: 0.7475666666666667
name: Accuracy
- type: f1
value: 0.7475666666666667
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiment-2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6067
- Accuracy: 0.7476
- F1: 0.7476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.6885 | 0.53 | 5000 | 0.6532 | 0.7217 | 0.7217 |
| 0.6411 | 1.07 | 10000 | 0.6348 | 0.7319 | 0.7319 |
| 0.6057 | 1.6 | 15000 | 0.6186 | 0.7387 | 0.7387 |
| 0.5844 | 2.13 | 20000 | 0.6236 | 0.7449 | 0.7449 |
| 0.549 | 2.67 | 25000 | 0.6067 | 0.7476 | 0.7476 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiment-2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6067
- Accuracy: 0.7476
- F1: 0.7476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.6885 | 0.53 | 5000 | 0.6532 | 0.7217 | 0.7217 |
| 0.6411 | 1.07 | 10000 | 0.6348 | 0.7319 | 0.7319 |
| 0.6057 | 1.6 | 15000 | 0.6186 | 0.7387 | 0.7387 |
| 0.5844 | 2.13 | 20000 | 0.6236 | 0.7449 | 0.7449 |
| 0.549 | 2.67 | 25000 | 0.6067 | 0.7476 | 0.7476 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"datasets": ["amazon_reviews_multi"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-multilingual-cased-sentiment-2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "all_languages"}, "metrics": [{"type": "accuracy", "value": 0.7475666666666667, "name": "Accuracy"}, {"type": "f1", "value": 0.7475666666666667, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,266 |
ninyx/Mistral-7B-Instruct-v0.2-advisegpt-v0.6
|
ninyx
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | 2024-05-25T13:45:55Z |
2024-05-26T19:10:45+00:00
| 6 | 0 |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
library_name: peft
license: apache-2.0
metrics:
- bleu
- rouge
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-Instruct-v0.2-advisegpt-v0.6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2-advisegpt-v0.6
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0767
- Bleu: {'bleu': 0.9584832765902116, 'precisions': [0.9778312591422885, 0.9625878953932084, 0.9518774970032065, 0.9430684559898991], 'brevity_penalty': 0.9997177244264667, 'length_ratio': 0.9997177642587203, 'translation_length': 1289338, 'reference_length': 1289702}
- Rouge: {'rouge1': 0.9761023152523122, 'rouge2': 0.9590922549283836, 'rougeL': 0.9747297976860183, 'rougeLsum': 0.9758442544146716}
- Exact Match: {'exact_match': 0.0}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | Exact Match |
|:-------------:|:------:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------:|:--------------------:|
| 0.067 | 0.9998 | 809 | 0.0945 | {'bleu': 0.9492918853166353, 'precisions': [0.9733554685833311, 0.9543042005762523, 0.9412361771045687, 0.9307382413966919], 'brevity_penalty': 0.9994904502180469, 'length_ratio': 0.9994905799944483, 'translation_length': 1289045, 'reference_length': 1289702} | {'rouge1': 0.9712558044405124, 'rouge2': 0.9500703853191179, 'rougeL': 0.9690578078497468, 'rougeLsum': 0.9708044674114953} | {'exact_match': 0.0} |
| 0.0527 | 1.9995 | 1618 | 0.0779 | {'bleu': 0.9568445996007577, 'precisions': [0.977026202258449, 0.961055539100332, 0.9498195483213825, 0.9405540074014527], 'brevity_penalty': 0.9998193217903225, 'length_ratio': 0.9998193381106644, 'translation_length': 1289469, 'reference_length': 1289702} | {'rouge1': 0.9753094821779227, 'rouge2': 0.9574822736836266, 'rougeL': 0.9737984768450723, 'rougeLsum': 0.9750220632065946} | {'exact_match': 0.0} |
| 0.0471 | 2.9993 | 2427 | 0.0767 | {'bleu': 0.9584832765902116, 'precisions': [0.9778312591422885, 0.9625878953932084, 0.9518774970032065, 0.9430684559898991], 'brevity_penalty': 0.9997177244264667, 'length_ratio': 0.9997177642587203, 'translation_length': 1289338, 'reference_length': 1289702} | {'rouge1': 0.9761023152523122, 'rouge2': 0.9590922549283836, 'rougeL': 0.9747297976860183, 'rougeLsum': 0.9758442544146716} | {'exact_match': 0.0} |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2-advisegpt-v0.6
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0767
- Bleu: {'bleu': 0.9584832765902116, 'precisions': [0.9778312591422885, 0.9625878953932084, 0.9518774970032065, 0.9430684559898991], 'brevity_penalty': 0.9997177244264667, 'length_ratio': 0.9997177642587203, 'translation_length': 1289338, 'reference_length': 1289702}
- Rouge: {'rouge1': 0.9761023152523122, 'rouge2': 0.9590922549283836, 'rougeL': 0.9747297976860183, 'rougeLsum': 0.9758442544146716}
- Exact Match: {'exact_match': 0.0}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | Exact Match |
|:-------------:|:------:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------:|:--------------------:|
| 0.067 | 0.9998 | 809 | 0.0945 | {'bleu': 0.9492918853166353, 'precisions': [0.9733554685833311, 0.9543042005762523, 0.9412361771045687, 0.9307382413966919], 'brevity_penalty': 0.9994904502180469, 'length_ratio': 0.9994905799944483, 'translation_length': 1289045, 'reference_length': 1289702} | {'rouge1': 0.9712558044405124, 'rouge2': 0.9500703853191179, 'rougeL': 0.9690578078497468, 'rougeLsum': 0.9708044674114953} | {'exact_match': 0.0} |
| 0.0527 | 1.9995 | 1618 | 0.0779 | {'bleu': 0.9568445996007577, 'precisions': [0.977026202258449, 0.961055539100332, 0.9498195483213825, 0.9405540074014527], 'brevity_penalty': 0.9998193217903225, 'length_ratio': 0.9998193381106644, 'translation_length': 1289469, 'reference_length': 1289702} | {'rouge1': 0.9753094821779227, 'rouge2': 0.9574822736836266, 'rougeL': 0.9737984768450723, 'rougeLsum': 0.9750220632065946} | {'exact_match': 0.0} |
| 0.0471 | 2.9993 | 2427 | 0.0767 | {'bleu': 0.9584832765902116, 'precisions': [0.9778312591422885, 0.9625878953932084, 0.9518774970032065, 0.9430684559898991], 'brevity_penalty': 0.9997177244264667, 'length_ratio': 0.9997177642587203, 'translation_length': 1289338, 'reference_length': 1289702} | {'rouge1': 0.9761023152523122, 'rouge2': 0.9590922549283836, 'rougeL': 0.9747297976860183, 'rougeLsum': 0.9758442544146716} | {'exact_match': 0.0} |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "datasets": ["generator"], "library_name": "peft", "license": "apache-2.0", "metrics": ["bleu", "rouge"], "tags": ["trl", "sft", "generated_from_trainer"], "model-index": [{"name": "Mistral-7B-Instruct-v0.2-advisegpt-v0.6", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,267 |
nagayama0706/translation_model
|
nagayama0706
|
translation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"google-bert/bert-base-uncased",
"rifkat/uztext-3Gb-BPE-Roberta",
"translation",
"base_model:google-bert/bert-base-uncased",
"base_model:merge:google-bert/bert-base-uncased",
"base_model:rifkat/uztext-3Gb-BPE-Roberta",
"base_model:merge:rifkat/uztext-3Gb-BPE-Roberta",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-15T08:23:50Z |
2024-04-16T05:13:53+00:00
| 5 | 0 |
---
base_model:
- google-bert/bert-base-uncased
- rifkat/uztext-3Gb-BPE-Roberta
license: apache-2.0
pipeline_tag: translation
tags:
- merge
- mergekit
- lazymergekit
- google-bert/bert-base-uncased
- rifkat/uztext-3Gb-BPE-Roberta
---
# translation_model
translation_model is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
* [rifkat/uztext-3Gb-BPE-Roberta](https://huggingface.co/rifkat/uztext-3Gb-BPE-Roberta)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: google-bert/bert-base-uncased
layer_range: [0, 32]
- model: rifkat/uztext-3Gb-BPE-Roberta
layer_range: [0, 32]
merge_method: slerp
base_model: google-bert/bert-base-uncased
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "nagayama0706/translation_model"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
| null |
Non_BioNLP
|
# translation_model
translation_model is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
* [rifkat/uztext-3Gb-BPE-Roberta](https://huggingface.co/rifkat/uztext-3Gb-BPE-Roberta)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: google-bert/bert-base-uncased
layer_range: [0, 32]
- model: rifkat/uztext-3Gb-BPE-Roberta
layer_range: [0, 32]
merge_method: slerp
base_model: google-bert/bert-base-uncased
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "nagayama0706/translation_model"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"base_model": ["google-bert/bert-base-uncased", "rifkat/uztext-3Gb-BPE-Roberta"], "license": "apache-2.0", "pipeline_tag": "translation", "tags": ["merge", "mergekit", "lazymergekit", "google-bert/bert-base-uncased", "rifkat/uztext-3Gb-BPE-Roberta"]}
|
task
|
[
"TRANSLATION"
] | 45,268 |
TransferGraph/CAMeL-Lab_bert-base-arabic-camelbert-da-sentiment-finetuned-lora-tweet_eval_irony
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment",
"base_model:adapter:CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-02-27T17:31:30Z |
2024-02-27T17:31:33+00:00
| 30 | 0 |
---
base_model: CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: CAMeL-Lab_bert-base-arabic-camelbert-da-sentiment-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: irony
split: validation
args: irony
metrics:
- type: accuracy
value: 0.5528795811518324
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CAMeL-Lab_bert-base-arabic-camelbert-da-sentiment-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.5393 | None | 0 |
| 0.5476 | 0.6958 | 0 |
| 0.5654 | 0.6874 | 1 |
| 0.5749 | 0.6757 | 2 |
| 0.5623 | 0.6645 | 3 |
| 0.5644 | 0.6536 | 4 |
| 0.5623 | 0.6511 | 5 |
| 0.5581 | 0.6509 | 6 |
| 0.5529 | 0.6454 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CAMeL-Lab_bert-base-arabic-camelbert-da-sentiment-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.5393 | None | 0 |
| 0.5476 | 0.6958 | 0 |
| 0.5654 | 0.6874 | 1 |
| 0.5749 | 0.6757 | 2 |
| 0.5623 | 0.6645 | 3 |
| 0.5644 | 0.6536 | 4 |
| 0.5623 | 0.6511 | 5 |
| 0.5581 | 0.6509 | 6 |
| 0.5529 | 0.6454 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "CAMeL-Lab_bert-base-arabic-camelbert-da-sentiment-finetuned-lora-tweet_eval_irony", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "irony", "split": "validation", "args": "irony"}, "metrics": [{"type": "accuracy", "value": 0.5528795811518324, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,269 |
TransferGraph/emrecan_bert-base-multilingual-cased-snli_tr-finetuned-lora-tweet_eval_hate
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:emrecan/bert-base-multilingual-cased-snli_tr",
"base_model:adapter:emrecan/bert-base-multilingual-cased-snli_tr",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-02-29T13:52:28Z |
2024-02-29T13:52:33+00:00
| 1 | 0 |
---
base_model: emrecan/bert-base-multilingual-cased-snli_tr
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: emrecan_bert-base-multilingual-cased-snli_tr-finetuned-lora-tweet_eval_hate
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: hate
split: validation
args: hate
metrics:
- type: accuracy
value: 0.711
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emrecan_bert-base-multilingual-cased-snli_tr-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [emrecan/bert-base-multilingual-cased-snli_tr](https://huggingface.co/emrecan/bert-base-multilingual-cased-snli_tr) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.432 | None | 0 |
| 0.679 | 0.6121 | 0 |
| 0.705 | 0.5216 | 1 |
| 0.707 | 0.4787 | 2 |
| 0.711 | 0.4623 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emrecan_bert-base-multilingual-cased-snli_tr-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [emrecan/bert-base-multilingual-cased-snli_tr](https://huggingface.co/emrecan/bert-base-multilingual-cased-snli_tr) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.432 | None | 0 |
| 0.679 | 0.6121 | 0 |
| 0.705 | 0.5216 | 1 |
| 0.707 | 0.4787 | 2 |
| 0.711 | 0.4623 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "emrecan/bert-base-multilingual-cased-snli_tr", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "emrecan_bert-base-multilingual-cased-snli_tr-finetuned-lora-tweet_eval_hate", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "hate", "split": "validation", "args": "hate"}, "metrics": [{"type": "accuracy", "value": 0.711, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,270 |
jinghan/albert-base-v2-finetuned-wnli
|
jinghan
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-02T00:16:41Z |
2022-08-02T00:33:36+00:00
| 17 | 0 |
---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: albert-base-v2-finetuned-wnli
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: wnli
split: train
args: wnli
metrics:
- type: accuracy
value: 0.5633802816901409
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-wnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6981
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 0.6954 | 0.4930 |
| No log | 2.0 | 20 | 0.6981 | 0.5634 |
| No log | 3.0 | 30 | 0.7036 | 0.4225 |
| No log | 4.0 | 40 | 0.7062 | 0.3944 |
| No log | 5.0 | 50 | 0.7035 | 0.4225 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-wnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6981
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 0.6954 | 0.4930 |
| No log | 2.0 | 20 | 0.6981 | 0.5634 |
| No log | 3.0 | 30 | 0.7036 | 0.4225 |
| No log | 4.0 | 40 | 0.7062 | 0.3944 |
| No log | 5.0 | 50 | 0.7035 | 0.4225 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "albert-base-v2-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "wnli", "split": "train", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,271 |
fine-tuned/jinaai_jina-embeddings-v2-base-en-15072024-5xy1-webapp
|
fine-tuned
|
feature-extraction
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Academic",
"Research",
"Papers",
"Information",
"System",
"custom_code",
"en",
"dataset:fine-tuned/jinaai_jina-embeddings-v2-base-en-15072024-5xy1-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-15T19:59:48Z |
2024-07-15T20:00:06+00:00
| 78 | 0 |
---
datasets:
- fine-tuned/jinaai_jina-embeddings-v2-base-en-15072024-5xy1-webapp
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Academic
- Research
- Papers
- Information
- System
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
information retrieval system for academic research papers
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/jinaai_jina-embeddings-v2-base-en-15072024-5xy1-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| null |
Non_BioNLP
|
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
information retrieval system for academic research papers
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/jinaai_jina-embeddings-v2-base-en-15072024-5xy1-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
{"datasets": ["fine-tuned/jinaai_jina-embeddings-v2-base-en-15072024-5xy1-webapp", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Academic", "Research", "Papers", "Information", "System"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,272 |
anhtuansh/gte-multilingual-base-Matryoshka-4e-9k
|
anhtuansh
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:8259",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:anhtuansh/gte-multilingual-base-Matryoshka-3e-9k",
"base_model:finetune:anhtuansh/gte-multilingual-base-Matryoshka-3e-9k",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-24T08:18:04Z |
2024-12-24T08:18:39+00:00
| 7 | 0 |
---
base_model: anhtuansh/gte-multilingual-base-Matryoshka-3e-9k
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:8259
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'theo quy_định tại điều 35 nghị_định số 201 / 2013 / nđ - cp thì
thời_hạn giải_quyết thủ_tục hành_chính về cấp giấy_phép thăm_dò , khai_thác nước
dưới đất như sau : 1 . tiếp_nhận và kiểm_tra hồ_sơ : trong thời_hạn mười ( 10
) ngày làm_việc , kể từ ngày nhận hồ_sơ , cơ_quan tiếp_nhận hồ_sơ có trách_nhiệm
xem_xét , kiểm_tra hồ_sơ . trường_hợp hồ_sơ không hợp_lệ , cơ_quan tiếp_nhận hồ_sơ
thông_báo cho tổ_chức , cá_nhân đề_nghị cấp phép để bổ_sung , hoàn_thiện hồ_sơ
theo quy_định . trường_hợp hồ_sơ sau khi đã bổ_sung mà vẫn không đáp_ứng yêu_cầu
theo quy_định thì cơ_quan tiếp_nhận hồ_sơ trả lại hồ_sơ và thông_báo rõ lý_do
cho tổ_chức , cá_nhân đề_nghị cấp phép . 2 . thẩm_định_đề_án , báo_cáo thăm_dò
, khai_thác , sử_dụng tài_nguyên nước , xả nước_thải vào nguồn nước trong hồ_sơ
đề_nghị cấp phép ( sau đây gọi chung là đề_án , báo_cáo ) : a ) trong thời_hạn
ba_mươi ( 30 ) ngày làm_việc , kể từ ngày nhận đủ hồ_sơ hợp_lệ theo quy_định tại
khoản 1 điều này , cơ_quan tiếp_nhận hồ_sơ có trách_nhiệm thẩm_định_đề_án , báo_cáo
; nếu cần_thiết thì kiểm_tra thực_tế hiện_trường , lập hội_đồng thẩm_định_đề_án
, báo_cáo . trường_hợp đủ điều_kiện cấp phép , cơ_quan tiếp_nhận hồ_sơ trình cơ_quan
có thẩm_quyền cấp giấy_phép ; trường_hợp không đủ điều_kiện để cấp phép thì trả
lại hồ_sơ cho tổ_chức , cá_nhân đề_nghị cấp phép và thông_báo lý_do không cấp
phép ; b ) trường_hợp phải bổ_sung , chỉnh_sửa để hoàn_thiện đề_án , báo_cáo thì
cơ_quan tiếp_nhận hồ_sơ gửi văn_bản thông_báo cho tổ_chức , cá_nhân đề_nghị cấp
phép nêu rõ những nội_dung cần bổ_sung , hoàn_thiện đề_án , báo_cáo . thời_gian
bổ_sung , hoàn_thiện hoặc lập lại đề_án , báo_cáo không tính vào thời_gian thẩm_định_đề_án
, báo_cáo . thời_gian thẩm_định sau khi đề_án , báo_cáo được bổ_sung hoàn_chỉnh
là hai mươi ( 20 ) ngày làm_việc ; c ) trường_hợp phải lập lại đề_án , báo_cáo
, cơ_quan tiếp_nhận hồ_sơ gửi văn_bản thông_báo cho tổ_chức , cá_nhân đề_nghị
cấp phép nêu rõ những nội_dung đề_án , báo_cáo chưa đạt yêu_cầu , phải làm lại
và trả lại hồ_sơ đề_nghị cấp phép . 3 . trả kết_quả giải_quyết hồ_sơ_cấp phéptrong
thời_hạn năm ( 05 ) ngày làm_việc , kể từ ngày nhận được giấy_phép của cơ_quan
có thẩm_quyền , cơ_quan tiếp_nhận hồ_sơ thông_báo cho tổ_chức , cá_nhân đề_nghị
cấp phép để thực_hiện nghĩa_vụ tài_chính và nhận giấy_phép .'
sentences:
- ai có thẩm_quyền giải_quyết tố_cáo hành_vi vi_phạm_pháp_luật trong thực_hiện nhiệm_vụ
, công_vụ của cán_bộ , công_chức , viên_chức ?
- thời_hạn giải_quyết thủ_tục hành_chính về cấp giấy_phép thăm_dò , giấy_phép khai_thác
nước dưới đất ?
- tôi có_thể đăng_ký ngành , nghề kinh_doanh không có trong hệ_thống ngành kinh_tế
việt_nam không ?
- source_sentence: khoản 2 điều 2 thông_tư 30 quy_định , đoàn khám bệnh , chữa bệnh_nhân_đạo
là một nhóm nhân_viên y_tế trong nước , nước_ngoài do cá_nhân , tổ_chức trong
nước , nước_ngoài tổ_chức để khám bệnh , chữa bệnh_nhân_đạo cho nhân_dân .
sentences:
- tàu_thủy lưu_trú du_lịch có tiêu_chí xếp_hạng bắt_buộc không ?
- tôi muốn xin cấp lại sổ thuyền_viên thì cần những hồ_sơ gì ?
- đoàn khám bệnh , chữa bệnh_nhân_đạo là gì ?
- source_sentence: 'thành_phần hồ_sơ thực_hiện tthc sửa_đổi , bổ_sung / cấp lại giấy
chứng_nhận lưu_hành tự_do ( cfs ) đối_với hàng_hóa xuất_khẩu thuộc phạm_vi quản_lý
của bộ nông_nghiệp và phát_triển nông_thôn ( quy_định tại quyết_định số 1312 /
qđ - bnn - qlcl ngày 22 / 4 / 2019 về việc công_bố thủ_tục hành_chính được thay_thế
, tthc bị bãi_bỏ lĩnh_vực quản_lý chất_lượng nông_lâm_sản và thủy_sản thuộc phạm_vi
chức_năng quản_lý của bộ nông_nghiệp và phát_triển nông_thôn ) : - văn_bản đề_nghị
sửa_đổi , bổ_sung / cấp lại cfs của thương_nhân : 01 bản_chính . - các giấy_tờ
liên_quan đến việc sửa_đổi , bổ_sung / cấp lại cfs.'
sentences:
- kính gửi cục quản_lý chất_lượng nông_lâm_sản và thuỷ_sản , công_ty tôi đã được
quý cục cấp giấy chứng_nhận lưu_hành tự_do cfs , nay tôi muốn sửa_đổi một_số thông_tin
trên giấy cfs , vậy hồ_sơ đề_nghị sửa_đổi cần những gì ?
- thời_gian để được cấp quyết_định chỉ_định tổ_chức đánh_giá sự phù_hợp lĩnh_vực
sản_phẩm , hàng_hóa_vật_liệu xây_dựng đối_với trường_hợp thay_đổi , bổ_sung phạm_vi
, lĩnh_vực được chỉ_định
- hồ_sơ đăng_ký dự_tuyển lao_động của người lao_động gồm những gì ?
- source_sentence: 'hồ_sơ đề_nghị gia_hạn giấy_phép cung_cấp dịch_vụ trò_chơi điện_tử
g1 trên mạng bao_gồm : - đơn đề_nghị gia_hạn giấy_phép bao_gồm : tên , địa_chỉ
của doanh_nghiệp ; số , nơi cấp , ngày cấp giấy chứng_nhận đăng_ký doanh_nghiệp
; số , ngày cấp , ngày hiệu_lực của giấy_phép đã được cấp ; lý_do đề_nghị gia_hạn
giấy_phép ; cam_kết của doanh_nghiệp về các nội_dung kê_khai trong đơn đề_nghị
.'
sentences:
- thành_phần hồ_sơ
- nhà đầu_tư phải nộp bao_nhiêu bộ hồ_sơ đề_nghị thủ_tướng chính_phủ cho phép đầu_tư
gián_tiếp ra nước_ngoài đối_với các trường_hợp đầu_tư khác quy_định tại điều 9
nghị_định 135 / 2015 / nđ - cp ?
- phần vốn của thành_viên chưa góp vốn trong thời_hạn 90 ngày , kể từ ngày được
cấp giấy chứng_nhận đăng_ký doanh_nghiệp trong công_ty tnhh hai thành_viên trở
lên được xử_lý như thế_nào ? trường_hợp có người nhận mua lại phần vốn chưa góp
này thì công_ty có phải đăng_ký thay_đổi thành_viên với cơ_quan đăng_ký kinh_doanh
không ? hồ_sơ , thủ_tục được quy_định như thế_nào ?
- source_sentence: quản_lý nhập_khẩu hàng_hóa bị điều_tra , phục_vụ công_tác điều_tra
, áp_dụng biện_pháp phòng_vệ thương_mại .
sentences:
- mục_đích của thủ_tục khai_báo là gì ?
- đơn_vị chúng_tôi đã được cấp chứng_chỉ năng_lực hoạt_động xây_dựng nhưng hiện_nay
chúng_tôi thay_đổi người đại_diện pháp_luật của đơn_vị . vậy chúng_tôi có phải
làm thủ_tục nào để thay_đổi người đại_diện theo pháp_luật của chúng_tôi trên chứng_chỉ
?
- trường_hợp nào thì được cấp lại giấy xác_nhận đăng_ký công_cụ hỗ_trợ , thủ_tục
cấp lại giấy xác_nhận đăng_ký công_cụ hỗ_trợ như thế_nào ?
model-index:
- name: SentenceTransformer based on anhtuansh/gte-multilingual-base-Matryoshka-3e-9k
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.5849673202614379
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7440087145969498
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7962962962962963
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8529411764705882
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5849673202614379
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.24800290486565
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15925925925925924
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08529411764705883
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5849673202614379
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7440087145969498
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7962962962962963
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8529411764705882
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7195299486597906
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6768323996265172
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6811278169033975
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5784313725490197
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7440087145969498
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7973856209150327
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8518518518518519
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5784313725490197
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2480029048656499
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15947712418300652
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08518518518518517
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5784313725490197
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7440087145969498
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7973856209150327
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8518518518518519
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7158400533177708
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6722442680776016
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6765528283602505
name: Cosine Map@100
---
# SentenceTransformer based on anhtuansh/gte-multilingual-base-Matryoshka-3e-9k
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [anhtuansh/gte-multilingual-base-Matryoshka-3e-9k](https://huggingface.co/anhtuansh/gte-multilingual-base-Matryoshka-3e-9k) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [anhtuansh/gte-multilingual-base-Matryoshka-3e-9k](https://huggingface.co/anhtuansh/gte-multilingual-base-Matryoshka-3e-9k) <!-- at revision 817b31173e34e178290588270e33690fbb6fb2ee -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("anhtuansh/gte-multilingual-base-Matryoshka-4e-9k")
# Run inference
sentences = [
'quản_lý nhập_khẩu hàng_hóa bị điều_tra , phục_vụ công_tác điều_tra , áp_dụng biện_pháp phòng_vệ thương_mại .',
'mục_đích của thủ_tục khai_báo là gì ?',
'đơn_vị chúng_tôi đã được cấp chứng_chỉ năng_lực hoạt_động xây_dựng nhưng hiện_nay chúng_tôi thay_đổi người đại_diện pháp_luật của đơn_vị . vậy chúng_tôi có phải làm thủ_tục nào để thay_đổi người đại_diện theo pháp_luật của chúng_tôi trên chứng_chỉ ?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768` and `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 |
|:--------------------|:-----------|:-----------|
| cosine_accuracy@1 | 0.585 | 0.5784 |
| cosine_accuracy@3 | 0.744 | 0.744 |
| cosine_accuracy@5 | 0.7963 | 0.7974 |
| cosine_accuracy@10 | 0.8529 | 0.8519 |
| cosine_precision@1 | 0.585 | 0.5784 |
| cosine_precision@3 | 0.248 | 0.248 |
| cosine_precision@5 | 0.1593 | 0.1595 |
| cosine_precision@10 | 0.0853 | 0.0852 |
| cosine_recall@1 | 0.585 | 0.5784 |
| cosine_recall@3 | 0.744 | 0.744 |
| cosine_recall@5 | 0.7963 | 0.7974 |
| cosine_recall@10 | 0.8529 | 0.8519 |
| **cosine_ndcg@10** | **0.7195** | **0.7158** |
| cosine_mrr@10 | 0.6768 | 0.6722 |
| cosine_map@100 | 0.6811 | 0.6766 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 8,259 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 303.9 tokens</li><li>max: 3209 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 56.17 tokens</li><li>max: 578 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>điểm a , mục 2 phần ii thông_tư số 04 / 1999 / tt - bca ( c13 ) ngày 29 / 4 / 1999 của bộ công_an hướng_dẫn một_số quy_định của nghị_định số 05 / 1999 / nđ - cp ngày 03 / 01 / 1999 của chính_phủ về chứng_minh nhân_dân quy_định các trường_hợp phải đổi cmnd như sau : - quá thời_hạn sử_dụng 15 năm kể từ ngày cấp ; - cmnd rách , nát , không rõ ảnh hoặc một trong các thông_tin đã ghi trên cmnd ; - thay_đổi họ , tên , chữ đệm , ngày , tháng , năm sinh . những thay_đổi này phải có quyết_định của cơ_quan có thẩm_quyền ; - những người đã được cấp giấy cmnd nhưng chuyển nơi đăng_ký hộ_khẩu thường_trú ngoài phạm_vi tỉnh , thành_phố trực_thuộc trung_ương . trường_hợp chuyển nơi thường_trú trong phạm_vi tỉnh , thành_phố mà công_dân có yêu_cầu thì được đổi lại cmnd ; - thay_đổi đặc_điểm nhận_dạng là những trường_hợp đã qua phẫu_thuật thẩm_mỹ , chỉnh_hình hoặc vì lý_do khác đã làm thay_đổi hình_ảnh hoặc đặc_điểm nhận_dạng của họ .</code> | <code>công_dân phải làm thủ_tục đổi chứng_minh nhân_dân khi nào ?</code> |
| <code>việc thực_hiện thủ_tục tặng cờ thi_đua cấp bộ , ban , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương theo đợt hoặc chuyên_đề được tiến_hành như sau : <br> bước 1 . vụ , phòng , ban thi_đua – khen_thưởng các bộ , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương tiếp_nhận đề_nghị khen_thưởng của các đơn_vị thực thuộc . <br> bước 2 . thẩm_định hồ_sơ , xin ý_kiến các cơ_quan liên_quan , báo_cáo hội_đồng thi_đua khen_thưởng cùng cấp , tổng_hợp trình bộ_trưởng , thủ_trưởng đơn_vị , chủ_tịch ubnd tỉnh , thành_phố quyết_định khen_thưởng . <br> bước 3 . khi có quyết_định của bộ_trưởng , thủ_trưởng đơn_vị , chủ_tịch ubnd tỉnh , thành_phố trực_thuộc trung_ương ; vụ , phòng , ban thi_đua – khen_thưởng các bộ , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương thông_báo quyết_định , viết bằng , đóng_dấu và cấp_phát cho đơn_vị trình khen . <br> bước 4 . các trường_hợp không được khen_thưởng ( không đúng đối_tượng , không đủ tiêu_chuẩn , không đủ hồ_sơ...</code> | <code>đề_nghị cho biết trình_tự thực_hiện thủ_tục tặng cờ thi_đua cấp bộ , ban , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương theo đợt hoặc chuyên_đề</code> |
| <code>thời_gian phục_vụ tại_ngũ của hạ_sĩ_quan binh_sĩ được quy_định tại điều 21 luật nvqs năm 2015 , cụ_thể như sau : “ điều 21 . thời_hạn phục_vụ tại_ngũ của hạ_sĩ_quan , binh sĩ1 . thời_hạn phục_vụ tại_ngũ trong thời_bình của hạ_sĩ_quan , binh_sĩ là 24 tháng . 2 . bộ_trưởng bộ quốc_phòng được quyết_định kéo_dài thời_hạn phục_vụ tại_ngũ của hạ_sĩ_quan , binh_sĩ nhưng không quá 06 tháng trong trường_hợp sau đây : a ) để bảo_đảm_nhiệm_vụ sẵn_sàng chiến_đấu ; b ) đang thực_hiện nhiệm_vụ phòng , chống thiên_tai , dịch_bệnh , cứu_hộ , cứu nạn . 3 . thời_hạn phục_vụ của hạ_sĩ_quan , binh_sĩ trong tình_trạng chiến_tranh hoặc tình_trạng khẩn_cấp về quốc_phòng được thực_hiện theo lệnh tổng_động_viên hoặc động_viên cục_bộ . ”</code> | <code>quy_định thời_gian phục_vụ tại_ngũ của hạ_sĩ_quan binh_sĩ như thế_nào ?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
### Evaluation Dataset
#### json
* Dataset: json
* Size: 918 evaluation samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 918 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 307.44 tokens</li><li>max: 3463 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 55.94 tokens</li><li>max: 383 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>theo quy_định tại khoản 9 điều 1 nghị_định số 161 / 2018 / nđ - cpngày 29 / 11 / 2018 của chính_phủ sửa_đổi , bổ_sung một_số quy_định về tuyển_dụng công_chức , viên_chức , nâng ngạch công_chức , thăng_hạng viên_chức và thực_hiện chế_độ hợp_đồng một_số loại công_việc trong cơ_quan hành_chính nhà_nước , đơn_vị sự_nghiệp công_lập thì đối_tượng và điểm ưu_tiên trong thi_tuyển hoặc xét tuyển công_chức : <br> + anh_hùng lực_lượng vũ_trang , anh_hùng lao_động , thương_binh , người hưởng chính_sách như thương_binh , thương_binh loại b : được cộng 7,5 điểm vào kết_quả điểm thi tại vòng 2 ; <br> + người dân_tộc_thiểu_số , sĩ_quan quân_đội , sĩ_quan công_an , quân_nhân chuyên_nghiệp , người làm công_tác cơ_yếu chuyển ngành , con liệt_sĩ , con thương_binh , con bệnh_binh , con của người hưởng chính_sách như thương_binh , con của thương_binh loại b , con của người hoạt_động cách_mạng trước tổng_khởi_nghĩa ( từ ngày 19 / 8 / 1945 trở về trước ) , con_đẻ của người hoạt_động kháng_chiến bị nhiễm chất_độc_hó...</code> | <code>đề_nghị cho tôi được biết đối_tượng được hưởng ưu_tiên trong tuyển_dụng công_chức ?</code> |
| <code>1 . khi phát_hiện tổ_chức , cá_nhân kê_khai hồ_sơ , thông_báo không_trung_thực hoặc vi_phạm_quy_định tại điều 8 nghị_định số 23 / 2019 / nđ - cp , cơ_quan tiếp_nhận thông_báo yêu_cầu tạm dừng hoạt_động triển_lãm bằng văn_bản ( mẫu_số 03 tại phụ_lục ban_hành kèm theo nghị_định số 23 / 2019 / nđ - cp ) . 2 . tổ_chức , cá_nhân phải dừng hoạt_động triển_lãm ngay khi nhận được văn_bản của cơ_quan có thẩm_quyền ; kịp_thời_khắc_phục hậu_quả , đề_xuất phương_án tiếp_tục tổ_chức triển_lãm gửi cơ_quan có thẩm_quyền xem_xét , quyết_định . 3 . kết_quả xem_xét , quyết_định của cơ_quan có thẩm_quyền phải được thể_hiện bằng văn_bản và gửi cho tổ_chức , cá_nhân biết để thực_hiện . thời_hạn gửi văn_bản cho tổ_chức , cá_nhân là 03 ngày , kể từ ngày cơ_quan có thẩm_quyền nhận được văn_bản đề_xuất phương_án tiếp_tục tổ_chức triển_lãm của tổ_chức , cá_nhân .</code> | <code>những lý_do nào khiến hoạt_động triển_lãm bị tạm dừng ?</code> |
| <code>theo quy_định tại khoản 1 điều 33 luật quản_lý , sử_dụng vũ_khí , vật_liệu nổ và công_cụ hỗ_trợ và điểm a khoản 4 điều 3 thông_tư số 16 / 2018 / tt - bca ngày 15 / 5 / 2018 của bộ công_an quy_định chi_tiết thi_hành một_số điều của luật quản_lý , sử_dụng vũ_khí , vật_liệu nổ và công_cụ hỗ_trợ thì thủ_tục đề_nghị cấp giấy_phép vận_chuyển vũ_khí quân_dụng đối_với cơ_quan , tổ_chức ở trung_ương không thuộc phạm_vi quản_lý của bộ quốc_phòng được thực_hiện như sau : a ) hồ_sơ đề_nghị bao_gồm : văn_bản đề_nghị nêu rõ số_lượng , chủng_loại , nguồn_gốc xuất_xứ của vũ_khí cần vận_chuyển ; nơi đi , nơi đến , thời_gian và tuyến đường vận_chuyển ; họ và tên , địa_chỉ của người chịu trách_nhiệm vận_chuyển , người điều_khiển phương_tiện ; biển kiểm_soát của phương_tiện ; giấy giới_thiệu kèm theo bản_sao thẻ căn_cước công_dân , chứng_minh nhân_dân , hộ_chiếu hoặc chứng_minh công_an nhân_dân của người đến liên_hệ ; b ) hồ_sơ lập thành 01 bộ và nộp tại cục cảnh_sát qlhc về ttxh ; c ) trong thời_hạn 05 n...</code> | <code>thủ_tục cấp giấy_phép vận_chuyển vũ_khí quân_dụng đối_với cơ_quan , tổ_chức ở trung_ương không thuộc phạm_vi quản_lý của bộ quốc_phòng thực_hiện như thế_nào ?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:---------------:|:----------------------:|:----------------------:|
| 0 | 0 | - | - | 0.7140 | 0.7079 |
| 0.0048 | 10 | 0.0006 | - | - | - |
| 0.0097 | 20 | 0.0029 | - | - | - |
| 0.0145 | 30 | 0.0004 | - | - | - |
| 0.0194 | 40 | 0.0003 | - | - | - |
| 0.0242 | 50 | 0.0001 | - | - | - |
| 0.0291 | 60 | 0.0002 | - | - | - |
| 0.0339 | 70 | 0.1243 | - | - | - |
| 0.0387 | 80 | 0.001 | - | - | - |
| 0.0436 | 90 | 0.0001 | - | - | - |
| 0.0484 | 100 | 0.0002 | - | - | - |
| 0.0533 | 110 | 0.0001 | - | - | - |
| 0.0581 | 120 | 0.0004 | - | - | - |
| 0.0630 | 130 | 0.3342 | - | - | - |
| 0.0678 | 140 | 0.0 | - | - | - |
| 0.0726 | 150 | 0.0002 | - | - | - |
| 0.0775 | 160 | 0.0002 | - | - | - |
| 0.0823 | 170 | 0.0001 | - | - | - |
| 0.0872 | 180 | 0.0003 | - | - | - |
| 0.0920 | 190 | 0.0001 | - | - | - |
| 0.0969 | 200 | 0.0001 | - | - | - |
| 0.1017 | 210 | 0.0 | - | - | - |
| 0.1065 | 220 | 0.0001 | - | - | - |
| 0.1114 | 230 | 0.0 | - | - | - |
| 0.1162 | 240 | 0.0001 | - | - | - |
| 0.1211 | 250 | 0.0 | - | - | - |
| 0.1259 | 260 | 0.0 | - | - | - |
| 0.1308 | 270 | 0.0003 | - | - | - |
| 0.1356 | 280 | 0.0001 | - | - | - |
| 0.1404 | 290 | 0.0 | - | - | - |
| 0.1453 | 300 | 0.0001 | - | - | - |
| 0.1501 | 310 | 0.0 | - | - | - |
| 0.1550 | 320 | 0.0 | - | - | - |
| 0.1598 | 330 | 0.0001 | - | - | - |
| 0.1646 | 340 | 0.0001 | - | - | - |
| 0.1695 | 350 | 0.0 | - | - | - |
| 0.1743 | 360 | 0.0003 | - | - | - |
| 0.1792 | 370 | 0.0 | - | - | - |
| 0.1840 | 380 | 0.0001 | - | - | - |
| 0.1889 | 390 | 0.0001 | - | - | - |
| 0.1937 | 400 | 0.1912 | - | - | - |
| 0.1985 | 410 | 0.0002 | - | - | - |
| 0.2034 | 420 | 0.0001 | - | - | - |
| 0.2082 | 430 | 0.0001 | - | - | - |
| 0.2131 | 440 | 0.0001 | - | - | - |
| 0.2179 | 450 | 0.0 | - | - | - |
| 0.2228 | 460 | 0.0001 | - | - | - |
| 0.2276 | 470 | 0.0002 | - | - | - |
| 0.2324 | 480 | 0.0001 | - | - | - |
| 0.2373 | 490 | 0.0 | - | - | - |
| 0.2421 | 500 | 0.0 | - | - | - |
| 0.2470 | 510 | 0.0 | - | - | - |
| 0.2518 | 520 | 0.0001 | - | - | - |
| 0.2567 | 530 | 0.0001 | - | - | - |
| 0.2615 | 540 | 0.0 | - | - | - |
| 0.2663 | 550 | 0.0 | - | - | - |
| 0.2712 | 560 | 0.0 | - | - | - |
| 0.2760 | 570 | 0.0 | - | - | - |
| 0.2809 | 580 | 0.0 | - | - | - |
| 0.2857 | 590 | 0.0 | - | - | - |
| 0.2906 | 600 | 0.0 | - | - | - |
| 0.2954 | 610 | 0.0 | - | - | - |
| 0.3002 | 620 | 0.0001 | - | - | - |
| 0.3051 | 630 | 0.0001 | - | - | - |
| 0.3099 | 640 | 0.0 | - | - | - |
| 0.3148 | 650 | 0.0 | - | - | - |
| 0.3196 | 660 | 0.0 | - | - | - |
| 0.3245 | 670 | 0.0003 | - | - | - |
| 0.3293 | 680 | 0.0 | - | - | - |
| 0.3341 | 690 | 0.0 | - | - | - |
| 0.3390 | 700 | 0.0 | - | - | - |
| 0.3438 | 710 | 0.0001 | - | - | - |
| 0.3487 | 720 | 0.0 | - | - | - |
| 0.3535 | 730 | 0.0 | - | - | - |
| 0.3584 | 740 | 0.0001 | - | - | - |
| 0.3632 | 750 | 0.0 | - | - | - |
| 0.3680 | 760 | 0.0 | - | - | - |
| 0.3729 | 770 | 0.0 | - | - | - |
| 0.3777 | 780 | 0.0001 | - | - | - |
| 0.3826 | 790 | 0.0 | - | - | - |
| 0.3874 | 800 | 0.0001 | - | - | - |
| 0.3923 | 810 | 0.0001 | - | - | - |
| 0.3971 | 820 | 0.0001 | - | - | - |
| 0.4019 | 830 | 0.0001 | - | - | - |
| 0.4068 | 840 | 0.0 | - | - | - |
| 0.4116 | 850 | 0.0001 | - | - | - |
| 0.4165 | 860 | 0.0001 | - | - | - |
| 0.4213 | 870 | 0.0 | - | - | - |
| 0.4262 | 880 | 0.0001 | - | - | - |
| 0.4310 | 890 | 0.0001 | - | - | - |
| 0.4358 | 900 | 0.0 | - | - | - |
| 0.4407 | 910 | 0.0 | - | - | - |
| 0.4455 | 920 | 0.0001 | - | - | - |
| 0.4504 | 930 | 0.0 | - | - | - |
| 0.4552 | 940 | 0.0 | - | - | - |
| 0.4600 | 950 | 0.0001 | - | - | - |
| 0.4649 | 960 | 0.0 | - | - | - |
| 0.4697 | 970 | 0.0 | - | - | - |
| 0.4746 | 980 | 0.0001 | - | - | - |
| 0.4794 | 990 | 0.0002 | - | - | - |
| 0.4843 | 1000 | 0.0 | - | - | - |
| 0.4891 | 1010 | 0.0002 | - | - | - |
| 0.4939 | 1020 | 0.0 | - | - | - |
| 0.4988 | 1030 | 0.0001 | - | - | - |
| 0.5036 | 1040 | 0.0001 | - | - | - |
| 0.5085 | 1050 | 0.0001 | - | - | - |
| 0.5133 | 1060 | 0.0001 | - | - | - |
| 0.5182 | 1070 | 0.0 | - | - | - |
| 0.5230 | 1080 | 0.0001 | - | - | - |
| 0.5278 | 1090 | 0.0001 | - | - | - |
| 0.5327 | 1100 | 0.0001 | - | - | - |
| 0.5375 | 1110 | 0.0 | - | - | - |
| 0.5424 | 1120 | 0.0001 | - | - | - |
| 0.5472 | 1130 | 0.0 | - | - | - |
| 0.5521 | 1140 | 0.0 | - | - | - |
| 0.5569 | 1150 | 0.0001 | - | - | - |
| 0.5617 | 1160 | 0.0 | - | - | - |
| 0.5666 | 1170 | 0.0001 | - | - | - |
| 0.5714 | 1180 | 0.0001 | - | - | - |
| 0.5763 | 1190 | 0.0002 | - | - | - |
| 0.5811 | 1200 | 0.0001 | - | - | - |
| 0.5860 | 1210 | 0.0004 | - | - | - |
| 0.5908 | 1220 | 0.0001 | - | - | - |
| 0.5956 | 1230 | 0.0 | - | - | - |
| 0.6005 | 1240 | 0.0002 | - | - | - |
| 0.6053 | 1250 | 0.0001 | - | - | - |
| 0.6102 | 1260 | 0.0 | - | - | - |
| 0.6150 | 1270 | 0.0 | - | - | - |
| 0.6199 | 1280 | 0.0001 | - | - | - |
| 0.6247 | 1290 | 0.0 | - | - | - |
| 0.6295 | 1300 | 0.0 | - | - | - |
| 0.6344 | 1310 | 0.0 | - | - | - |
| 0.6392 | 1320 | 0.0001 | - | - | - |
| 0.6441 | 1330 | 0.0001 | - | - | - |
| 0.6489 | 1340 | 0.0001 | - | - | - |
| 0.6538 | 1350 | 0.0 | - | - | - |
| 0.6586 | 1360 | 0.0 | - | - | - |
| 0.6634 | 1370 | 0.0001 | - | - | - |
| 0.6683 | 1380 | 0.0 | - | - | - |
| 0.6731 | 1390 | 0.0001 | - | - | - |
| 0.6780 | 1400 | 0.0 | - | - | - |
| 0.6828 | 1410 | 0.0001 | - | - | - |
| 0.6877 | 1420 | 0.0018 | - | - | - |
| 0.6925 | 1430 | 0.0001 | - | - | - |
| 0.6973 | 1440 | 0.0001 | - | - | - |
| 0.7022 | 1450 | 0.0 | - | - | - |
| 0.7070 | 1460 | 0.0001 | - | - | - |
| 0.7119 | 1470 | 0.0005 | - | - | - |
| 0.7167 | 1480 | 0.0001 | - | - | - |
| 0.7215 | 1490 | 0.0 | - | - | - |
| 0.7264 | 1500 | 0.0 | - | - | - |
| 0.7312 | 1510 | 0.0001 | - | - | - |
| 0.7361 | 1520 | 0.0 | - | - | - |
| 0.7409 | 1530 | 0.0001 | - | - | - |
| 0.7458 | 1540 | 0.0001 | - | - | - |
| 0.7506 | 1550 | 0.0001 | - | - | - |
| 0.7554 | 1560 | 0.0003 | - | - | - |
| 0.7603 | 1570 | 0.0001 | - | - | - |
| 0.7651 | 1580 | 0.0001 | - | - | - |
| 0.7700 | 1590 | 0.0 | - | - | - |
| 0.7748 | 1600 | 0.0003 | - | - | - |
| 0.7797 | 1610 | 0.0003 | - | - | - |
| 0.7845 | 1620 | 0.0 | - | - | - |
| 0.7893 | 1630 | 0.0001 | - | - | - |
| 0.7942 | 1640 | 0.0001 | - | - | - |
| 0.7990 | 1650 | 0.0005 | - | - | - |
| 0.8039 | 1660 | 0.0001 | - | - | - |
| 0.8087 | 1670 | 0.0001 | - | - | - |
| 0.8136 | 1680 | 0.0001 | - | - | - |
| 0.8184 | 1690 | 0.0002 | - | - | - |
| 0.8232 | 1700 | 0.0001 | - | - | - |
| 0.8281 | 1710 | 0.0001 | - | - | - |
| 0.8329 | 1720 | 0.0002 | - | - | - |
| 0.8378 | 1730 | 0.0003 | - | - | - |
| 0.8426 | 1740 | 0.0001 | - | - | - |
| 0.8475 | 1750 | 0.0029 | - | - | - |
| 0.8523 | 1760 | 0.0 | - | - | - |
| 0.8571 | 1770 | 0.0001 | - | - | - |
| 0.8620 | 1780 | 0.0001 | - | - | - |
| 0.8668 | 1790 | 0.0 | - | - | - |
| 0.8717 | 1800 | 0.0003 | - | - | - |
| 0.8765 | 1810 | 0.0 | - | - | - |
| 0.8814 | 1820 | 0.0001 | - | - | - |
| 0.8862 | 1830 | 0.0001 | - | - | - |
| 0.8910 | 1840 | 0.0007 | - | - | - |
| 0.8959 | 1850 | 0.0001 | - | - | - |
| 0.9007 | 1860 | 0.0001 | - | - | - |
| 0.9056 | 1870 | 0.0001 | - | - | - |
| 0.9104 | 1880 | 0.0002 | - | - | - |
| 0.9153 | 1890 | 0.0002 | - | - | - |
| 0.9201 | 1900 | 0.0001 | - | - | - |
| 0.9249 | 1910 | 0.0001 | - | - | - |
| 0.9298 | 1920 | 0.0001 | - | - | - |
| 0.9346 | 1930 | 0.0009 | - | - | - |
| 0.9395 | 1940 | 0.0004 | - | - | - |
| 0.9443 | 1950 | 0.0001 | - | - | - |
| 0.9492 | 1960 | 0.0008 | - | - | - |
| 0.9540 | 1970 | 0.0002 | - | - | - |
| 0.9588 | 1980 | 0.0001 | - | - | - |
| 0.9637 | 1990 | 0.0014 | - | - | - |
| 0.9685 | 2000 | 0.0004 | - | - | - |
| 0.9734 | 2010 | 0.0 | - | - | - |
| 0.9782 | 2020 | 0.0004 | - | - | - |
| 0.9831 | 2030 | 0.0001 | - | - | - |
| 0.9879 | 2040 | 0.0002 | - | - | - |
| 0.9927 | 2050 | 0.0001 | - | - | - |
| 0.9976 | 2060 | 0.0013 | - | - | - |
| **1.0** | **2065** | **-** | **0.0263** | **0.7195** | **0.7158** |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.4.0
- Accelerate: 0.29.3
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on anhtuansh/gte-multilingual-base-Matryoshka-3e-9k
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [anhtuansh/gte-multilingual-base-Matryoshka-3e-9k](https://huggingface.co/anhtuansh/gte-multilingual-base-Matryoshka-3e-9k) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [anhtuansh/gte-multilingual-base-Matryoshka-3e-9k](https://huggingface.co/anhtuansh/gte-multilingual-base-Matryoshka-3e-9k) <!-- at revision 817b31173e34e178290588270e33690fbb6fb2ee -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("anhtuansh/gte-multilingual-base-Matryoshka-4e-9k")
# Run inference
sentences = [
'quản_lý nhập_khẩu hàng_hóa bị điều_tra , phục_vụ công_tác điều_tra , áp_dụng biện_pháp phòng_vệ thương_mại .',
'mục_đích của thủ_tục khai_báo là gì ?',
'đơn_vị chúng_tôi đã được cấp chứng_chỉ năng_lực hoạt_động xây_dựng nhưng hiện_nay chúng_tôi thay_đổi người đại_diện pháp_luật của đơn_vị . vậy chúng_tôi có phải làm thủ_tục nào để thay_đổi người đại_diện theo pháp_luật của chúng_tôi trên chứng_chỉ ?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768` and `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 |
|:--------------------|:-----------|:-----------|
| cosine_accuracy@1 | 0.585 | 0.5784 |
| cosine_accuracy@3 | 0.744 | 0.744 |
| cosine_accuracy@5 | 0.7963 | 0.7974 |
| cosine_accuracy@10 | 0.8529 | 0.8519 |
| cosine_precision@1 | 0.585 | 0.5784 |
| cosine_precision@3 | 0.248 | 0.248 |
| cosine_precision@5 | 0.1593 | 0.1595 |
| cosine_precision@10 | 0.0853 | 0.0852 |
| cosine_recall@1 | 0.585 | 0.5784 |
| cosine_recall@3 | 0.744 | 0.744 |
| cosine_recall@5 | 0.7963 | 0.7974 |
| cosine_recall@10 | 0.8529 | 0.8519 |
| **cosine_ndcg@10** | **0.7195** | **0.7158** |
| cosine_mrr@10 | 0.6768 | 0.6722 |
| cosine_map@100 | 0.6811 | 0.6766 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 8,259 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 303.9 tokens</li><li>max: 3209 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 56.17 tokens</li><li>max: 578 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>điểm a , mục 2 phần ii thông_tư số 04 / 1999 / tt - bca ( c13 ) ngày 29 / 4 / 1999 của bộ công_an hướng_dẫn một_số quy_định của nghị_định số 05 / 1999 / nđ - cp ngày 03 / 01 / 1999 của chính_phủ về chứng_minh nhân_dân quy_định các trường_hợp phải đổi cmnd như sau : - quá thời_hạn sử_dụng 15 năm kể từ ngày cấp ; - cmnd rách , nát , không rõ ảnh hoặc một trong các thông_tin đã ghi trên cmnd ; - thay_đổi họ , tên , chữ đệm , ngày , tháng , năm sinh . những thay_đổi này phải có quyết_định của cơ_quan có thẩm_quyền ; - những người đã được cấp giấy cmnd nhưng chuyển nơi đăng_ký hộ_khẩu thường_trú ngoài phạm_vi tỉnh , thành_phố trực_thuộc trung_ương . trường_hợp chuyển nơi thường_trú trong phạm_vi tỉnh , thành_phố mà công_dân có yêu_cầu thì được đổi lại cmnd ; - thay_đổi đặc_điểm nhận_dạng là những trường_hợp đã qua phẫu_thuật thẩm_mỹ , chỉnh_hình hoặc vì lý_do khác đã làm thay_đổi hình_ảnh hoặc đặc_điểm nhận_dạng của họ .</code> | <code>công_dân phải làm thủ_tục đổi chứng_minh nhân_dân khi nào ?</code> |
| <code>việc thực_hiện thủ_tục tặng cờ thi_đua cấp bộ , ban , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương theo đợt hoặc chuyên_đề được tiến_hành như sau : <br> bước 1 . vụ , phòng , ban thi_đua – khen_thưởng các bộ , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương tiếp_nhận đề_nghị khen_thưởng của các đơn_vị thực thuộc . <br> bước 2 . thẩm_định hồ_sơ , xin ý_kiến các cơ_quan liên_quan , báo_cáo hội_đồng thi_đua khen_thưởng cùng cấp , tổng_hợp trình bộ_trưởng , thủ_trưởng đơn_vị , chủ_tịch ubnd tỉnh , thành_phố quyết_định khen_thưởng . <br> bước 3 . khi có quyết_định của bộ_trưởng , thủ_trưởng đơn_vị , chủ_tịch ubnd tỉnh , thành_phố trực_thuộc trung_ương ; vụ , phòng , ban thi_đua – khen_thưởng các bộ , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương thông_báo quyết_định , viết bằng , đóng_dấu và cấp_phát cho đơn_vị trình khen . <br> bước 4 . các trường_hợp không được khen_thưởng ( không đúng đối_tượng , không đủ tiêu_chuẩn , không đủ hồ_sơ...</code> | <code>đề_nghị cho biết trình_tự thực_hiện thủ_tục tặng cờ thi_đua cấp bộ , ban , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương theo đợt hoặc chuyên_đề</code> |
| <code>thời_gian phục_vụ tại_ngũ của hạ_sĩ_quan binh_sĩ được quy_định tại điều 21 luật nvqs năm 2015 , cụ_thể như sau : “ điều 21 . thời_hạn phục_vụ tại_ngũ của hạ_sĩ_quan , binh sĩ1 . thời_hạn phục_vụ tại_ngũ trong thời_bình của hạ_sĩ_quan , binh_sĩ là 24 tháng . 2 . bộ_trưởng bộ quốc_phòng được quyết_định kéo_dài thời_hạn phục_vụ tại_ngũ của hạ_sĩ_quan , binh_sĩ nhưng không quá 06 tháng trong trường_hợp sau đây : a ) để bảo_đảm_nhiệm_vụ sẵn_sàng chiến_đấu ; b ) đang thực_hiện nhiệm_vụ phòng , chống thiên_tai , dịch_bệnh , cứu_hộ , cứu nạn . 3 . thời_hạn phục_vụ của hạ_sĩ_quan , binh_sĩ trong tình_trạng chiến_tranh hoặc tình_trạng khẩn_cấp về quốc_phòng được thực_hiện theo lệnh tổng_động_viên hoặc động_viên cục_bộ . ”</code> | <code>quy_định thời_gian phục_vụ tại_ngũ của hạ_sĩ_quan binh_sĩ như thế_nào ?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
### Evaluation Dataset
#### json
* Dataset: json
* Size: 918 evaluation samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 918 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 307.44 tokens</li><li>max: 3463 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 55.94 tokens</li><li>max: 383 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>theo quy_định tại khoản 9 điều 1 nghị_định số 161 / 2018 / nđ - cpngày 29 / 11 / 2018 của chính_phủ sửa_đổi , bổ_sung một_số quy_định về tuyển_dụng công_chức , viên_chức , nâng ngạch công_chức , thăng_hạng viên_chức và thực_hiện chế_độ hợp_đồng một_số loại công_việc trong cơ_quan hành_chính nhà_nước , đơn_vị sự_nghiệp công_lập thì đối_tượng và điểm ưu_tiên trong thi_tuyển hoặc xét tuyển công_chức : <br> + anh_hùng lực_lượng vũ_trang , anh_hùng lao_động , thương_binh , người hưởng chính_sách như thương_binh , thương_binh loại b : được cộng 7,5 điểm vào kết_quả điểm thi tại vòng 2 ; <br> + người dân_tộc_thiểu_số , sĩ_quan quân_đội , sĩ_quan công_an , quân_nhân chuyên_nghiệp , người làm công_tác cơ_yếu chuyển ngành , con liệt_sĩ , con thương_binh , con bệnh_binh , con của người hưởng chính_sách như thương_binh , con của thương_binh loại b , con của người hoạt_động cách_mạng trước tổng_khởi_nghĩa ( từ ngày 19 / 8 / 1945 trở về trước ) , con_đẻ của người hoạt_động kháng_chiến bị nhiễm chất_độc_hó...</code> | <code>đề_nghị cho tôi được biết đối_tượng được hưởng ưu_tiên trong tuyển_dụng công_chức ?</code> |
| <code>1 . khi phát_hiện tổ_chức , cá_nhân kê_khai hồ_sơ , thông_báo không_trung_thực hoặc vi_phạm_quy_định tại điều 8 nghị_định số 23 / 2019 / nđ - cp , cơ_quan tiếp_nhận thông_báo yêu_cầu tạm dừng hoạt_động triển_lãm bằng văn_bản ( mẫu_số 03 tại phụ_lục ban_hành kèm theo nghị_định số 23 / 2019 / nđ - cp ) . 2 . tổ_chức , cá_nhân phải dừng hoạt_động triển_lãm ngay khi nhận được văn_bản của cơ_quan có thẩm_quyền ; kịp_thời_khắc_phục hậu_quả , đề_xuất phương_án tiếp_tục tổ_chức triển_lãm gửi cơ_quan có thẩm_quyền xem_xét , quyết_định . 3 . kết_quả xem_xét , quyết_định của cơ_quan có thẩm_quyền phải được thể_hiện bằng văn_bản và gửi cho tổ_chức , cá_nhân biết để thực_hiện . thời_hạn gửi văn_bản cho tổ_chức , cá_nhân là 03 ngày , kể từ ngày cơ_quan có thẩm_quyền nhận được văn_bản đề_xuất phương_án tiếp_tục tổ_chức triển_lãm của tổ_chức , cá_nhân .</code> | <code>những lý_do nào khiến hoạt_động triển_lãm bị tạm dừng ?</code> |
| <code>theo quy_định tại khoản 1 điều 33 luật quản_lý , sử_dụng vũ_khí , vật_liệu nổ và công_cụ hỗ_trợ và điểm a khoản 4 điều 3 thông_tư số 16 / 2018 / tt - bca ngày 15 / 5 / 2018 của bộ công_an quy_định chi_tiết thi_hành một_số điều của luật quản_lý , sử_dụng vũ_khí , vật_liệu nổ và công_cụ hỗ_trợ thì thủ_tục đề_nghị cấp giấy_phép vận_chuyển vũ_khí quân_dụng đối_với cơ_quan , tổ_chức ở trung_ương không thuộc phạm_vi quản_lý của bộ quốc_phòng được thực_hiện như sau : a ) hồ_sơ đề_nghị bao_gồm : văn_bản đề_nghị nêu rõ số_lượng , chủng_loại , nguồn_gốc xuất_xứ của vũ_khí cần vận_chuyển ; nơi đi , nơi đến , thời_gian và tuyến đường vận_chuyển ; họ và tên , địa_chỉ của người chịu trách_nhiệm vận_chuyển , người điều_khiển phương_tiện ; biển kiểm_soát của phương_tiện ; giấy giới_thiệu kèm theo bản_sao thẻ căn_cước công_dân , chứng_minh nhân_dân , hộ_chiếu hoặc chứng_minh công_an nhân_dân của người đến liên_hệ ; b ) hồ_sơ lập thành 01 bộ và nộp tại cục cảnh_sát qlhc về ttxh ; c ) trong thời_hạn 05 n...</code> | <code>thủ_tục cấp giấy_phép vận_chuyển vũ_khí quân_dụng đối_với cơ_quan , tổ_chức ở trung_ương không thuộc phạm_vi quản_lý của bộ quốc_phòng thực_hiện như thế_nào ?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:---------------:|:----------------------:|:----------------------:|
| 0 | 0 | - | - | 0.7140 | 0.7079 |
| 0.0048 | 10 | 0.0006 | - | - | - |
| 0.0097 | 20 | 0.0029 | - | - | - |
| 0.0145 | 30 | 0.0004 | - | - | - |
| 0.0194 | 40 | 0.0003 | - | - | - |
| 0.0242 | 50 | 0.0001 | - | - | - |
| 0.0291 | 60 | 0.0002 | - | - | - |
| 0.0339 | 70 | 0.1243 | - | - | - |
| 0.0387 | 80 | 0.001 | - | - | - |
| 0.0436 | 90 | 0.0001 | - | - | - |
| 0.0484 | 100 | 0.0002 | - | - | - |
| 0.0533 | 110 | 0.0001 | - | - | - |
| 0.0581 | 120 | 0.0004 | - | - | - |
| 0.0630 | 130 | 0.3342 | - | - | - |
| 0.0678 | 140 | 0.0 | - | - | - |
| 0.0726 | 150 | 0.0002 | - | - | - |
| 0.0775 | 160 | 0.0002 | - | - | - |
| 0.0823 | 170 | 0.0001 | - | - | - |
| 0.0872 | 180 | 0.0003 | - | - | - |
| 0.0920 | 190 | 0.0001 | - | - | - |
| 0.0969 | 200 | 0.0001 | - | - | - |
| 0.1017 | 210 | 0.0 | - | - | - |
| 0.1065 | 220 | 0.0001 | - | - | - |
| 0.1114 | 230 | 0.0 | - | - | - |
| 0.1162 | 240 | 0.0001 | - | - | - |
| 0.1211 | 250 | 0.0 | - | - | - |
| 0.1259 | 260 | 0.0 | - | - | - |
| 0.1308 | 270 | 0.0003 | - | - | - |
| 0.1356 | 280 | 0.0001 | - | - | - |
| 0.1404 | 290 | 0.0 | - | - | - |
| 0.1453 | 300 | 0.0001 | - | - | - |
| 0.1501 | 310 | 0.0 | - | - | - |
| 0.1550 | 320 | 0.0 | - | - | - |
| 0.1598 | 330 | 0.0001 | - | - | - |
| 0.1646 | 340 | 0.0001 | - | - | - |
| 0.1695 | 350 | 0.0 | - | - | - |
| 0.1743 | 360 | 0.0003 | - | - | - |
| 0.1792 | 370 | 0.0 | - | - | - |
| 0.1840 | 380 | 0.0001 | - | - | - |
| 0.1889 | 390 | 0.0001 | - | - | - |
| 0.1937 | 400 | 0.1912 | - | - | - |
| 0.1985 | 410 | 0.0002 | - | - | - |
| 0.2034 | 420 | 0.0001 | - | - | - |
| 0.2082 | 430 | 0.0001 | - | - | - |
| 0.2131 | 440 | 0.0001 | - | - | - |
| 0.2179 | 450 | 0.0 | - | - | - |
| 0.2228 | 460 | 0.0001 | - | - | - |
| 0.2276 | 470 | 0.0002 | - | - | - |
| 0.2324 | 480 | 0.0001 | - | - | - |
| 0.2373 | 490 | 0.0 | - | - | - |
| 0.2421 | 500 | 0.0 | - | - | - |
| 0.2470 | 510 | 0.0 | - | - | - |
| 0.2518 | 520 | 0.0001 | - | - | - |
| 0.2567 | 530 | 0.0001 | - | - | - |
| 0.2615 | 540 | 0.0 | - | - | - |
| 0.2663 | 550 | 0.0 | - | - | - |
| 0.2712 | 560 | 0.0 | - | - | - |
| 0.2760 | 570 | 0.0 | - | - | - |
| 0.2809 | 580 | 0.0 | - | - | - |
| 0.2857 | 590 | 0.0 | - | - | - |
| 0.2906 | 600 | 0.0 | - | - | - |
| 0.2954 | 610 | 0.0 | - | - | - |
| 0.3002 | 620 | 0.0001 | - | - | - |
| 0.3051 | 630 | 0.0001 | - | - | - |
| 0.3099 | 640 | 0.0 | - | - | - |
| 0.3148 | 650 | 0.0 | - | - | - |
| 0.3196 | 660 | 0.0 | - | - | - |
| 0.3245 | 670 | 0.0003 | - | - | - |
| 0.3293 | 680 | 0.0 | - | - | - |
| 0.3341 | 690 | 0.0 | - | - | - |
| 0.3390 | 700 | 0.0 | - | - | - |
| 0.3438 | 710 | 0.0001 | - | - | - |
| 0.3487 | 720 | 0.0 | - | - | - |
| 0.3535 | 730 | 0.0 | - | - | - |
| 0.3584 | 740 | 0.0001 | - | - | - |
| 0.3632 | 750 | 0.0 | - | - | - |
| 0.3680 | 760 | 0.0 | - | - | - |
| 0.3729 | 770 | 0.0 | - | - | - |
| 0.3777 | 780 | 0.0001 | - | - | - |
| 0.3826 | 790 | 0.0 | - | - | - |
| 0.3874 | 800 | 0.0001 | - | - | - |
| 0.3923 | 810 | 0.0001 | - | - | - |
| 0.3971 | 820 | 0.0001 | - | - | - |
| 0.4019 | 830 | 0.0001 | - | - | - |
| 0.4068 | 840 | 0.0 | - | - | - |
| 0.4116 | 850 | 0.0001 | - | - | - |
| 0.4165 | 860 | 0.0001 | - | - | - |
| 0.4213 | 870 | 0.0 | - | - | - |
| 0.4262 | 880 | 0.0001 | - | - | - |
| 0.4310 | 890 | 0.0001 | - | - | - |
| 0.4358 | 900 | 0.0 | - | - | - |
| 0.4407 | 910 | 0.0 | - | - | - |
| 0.4455 | 920 | 0.0001 | - | - | - |
| 0.4504 | 930 | 0.0 | - | - | - |
| 0.4552 | 940 | 0.0 | - | - | - |
| 0.4600 | 950 | 0.0001 | - | - | - |
| 0.4649 | 960 | 0.0 | - | - | - |
| 0.4697 | 970 | 0.0 | - | - | - |
| 0.4746 | 980 | 0.0001 | - | - | - |
| 0.4794 | 990 | 0.0002 | - | - | - |
| 0.4843 | 1000 | 0.0 | - | - | - |
| 0.4891 | 1010 | 0.0002 | - | - | - |
| 0.4939 | 1020 | 0.0 | - | - | - |
| 0.4988 | 1030 | 0.0001 | - | - | - |
| 0.5036 | 1040 | 0.0001 | - | - | - |
| 0.5085 | 1050 | 0.0001 | - | - | - |
| 0.5133 | 1060 | 0.0001 | - | - | - |
| 0.5182 | 1070 | 0.0 | - | - | - |
| 0.5230 | 1080 | 0.0001 | - | - | - |
| 0.5278 | 1090 | 0.0001 | - | - | - |
| 0.5327 | 1100 | 0.0001 | - | - | - |
| 0.5375 | 1110 | 0.0 | - | - | - |
| 0.5424 | 1120 | 0.0001 | - | - | - |
| 0.5472 | 1130 | 0.0 | - | - | - |
| 0.5521 | 1140 | 0.0 | - | - | - |
| 0.5569 | 1150 | 0.0001 | - | - | - |
| 0.5617 | 1160 | 0.0 | - | - | - |
| 0.5666 | 1170 | 0.0001 | - | - | - |
| 0.5714 | 1180 | 0.0001 | - | - | - |
| 0.5763 | 1190 | 0.0002 | - | - | - |
| 0.5811 | 1200 | 0.0001 | - | - | - |
| 0.5860 | 1210 | 0.0004 | - | - | - |
| 0.5908 | 1220 | 0.0001 | - | - | - |
| 0.5956 | 1230 | 0.0 | - | - | - |
| 0.6005 | 1240 | 0.0002 | - | - | - |
| 0.6053 | 1250 | 0.0001 | - | - | - |
| 0.6102 | 1260 | 0.0 | - | - | - |
| 0.6150 | 1270 | 0.0 | - | - | - |
| 0.6199 | 1280 | 0.0001 | - | - | - |
| 0.6247 | 1290 | 0.0 | - | - | - |
| 0.6295 | 1300 | 0.0 | - | - | - |
| 0.6344 | 1310 | 0.0 | - | - | - |
| 0.6392 | 1320 | 0.0001 | - | - | - |
| 0.6441 | 1330 | 0.0001 | - | - | - |
| 0.6489 | 1340 | 0.0001 | - | - | - |
| 0.6538 | 1350 | 0.0 | - | - | - |
| 0.6586 | 1360 | 0.0 | - | - | - |
| 0.6634 | 1370 | 0.0001 | - | - | - |
| 0.6683 | 1380 | 0.0 | - | - | - |
| 0.6731 | 1390 | 0.0001 | - | - | - |
| 0.6780 | 1400 | 0.0 | - | - | - |
| 0.6828 | 1410 | 0.0001 | - | - | - |
| 0.6877 | 1420 | 0.0018 | - | - | - |
| 0.6925 | 1430 | 0.0001 | - | - | - |
| 0.6973 | 1440 | 0.0001 | - | - | - |
| 0.7022 | 1450 | 0.0 | - | - | - |
| 0.7070 | 1460 | 0.0001 | - | - | - |
| 0.7119 | 1470 | 0.0005 | - | - | - |
| 0.7167 | 1480 | 0.0001 | - | - | - |
| 0.7215 | 1490 | 0.0 | - | - | - |
| 0.7264 | 1500 | 0.0 | - | - | - |
| 0.7312 | 1510 | 0.0001 | - | - | - |
| 0.7361 | 1520 | 0.0 | - | - | - |
| 0.7409 | 1530 | 0.0001 | - | - | - |
| 0.7458 | 1540 | 0.0001 | - | - | - |
| 0.7506 | 1550 | 0.0001 | - | - | - |
| 0.7554 | 1560 | 0.0003 | - | - | - |
| 0.7603 | 1570 | 0.0001 | - | - | - |
| 0.7651 | 1580 | 0.0001 | - | - | - |
| 0.7700 | 1590 | 0.0 | - | - | - |
| 0.7748 | 1600 | 0.0003 | - | - | - |
| 0.7797 | 1610 | 0.0003 | - | - | - |
| 0.7845 | 1620 | 0.0 | - | - | - |
| 0.7893 | 1630 | 0.0001 | - | - | - |
| 0.7942 | 1640 | 0.0001 | - | - | - |
| 0.7990 | 1650 | 0.0005 | - | - | - |
| 0.8039 | 1660 | 0.0001 | - | - | - |
| 0.8087 | 1670 | 0.0001 | - | - | - |
| 0.8136 | 1680 | 0.0001 | - | - | - |
| 0.8184 | 1690 | 0.0002 | - | - | - |
| 0.8232 | 1700 | 0.0001 | - | - | - |
| 0.8281 | 1710 | 0.0001 | - | - | - |
| 0.8329 | 1720 | 0.0002 | - | - | - |
| 0.8378 | 1730 | 0.0003 | - | - | - |
| 0.8426 | 1740 | 0.0001 | - | - | - |
| 0.8475 | 1750 | 0.0029 | - | - | - |
| 0.8523 | 1760 | 0.0 | - | - | - |
| 0.8571 | 1770 | 0.0001 | - | - | - |
| 0.8620 | 1780 | 0.0001 | - | - | - |
| 0.8668 | 1790 | 0.0 | - | - | - |
| 0.8717 | 1800 | 0.0003 | - | - | - |
| 0.8765 | 1810 | 0.0 | - | - | - |
| 0.8814 | 1820 | 0.0001 | - | - | - |
| 0.8862 | 1830 | 0.0001 | - | - | - |
| 0.8910 | 1840 | 0.0007 | - | - | - |
| 0.8959 | 1850 | 0.0001 | - | - | - |
| 0.9007 | 1860 | 0.0001 | - | - | - |
| 0.9056 | 1870 | 0.0001 | - | - | - |
| 0.9104 | 1880 | 0.0002 | - | - | - |
| 0.9153 | 1890 | 0.0002 | - | - | - |
| 0.9201 | 1900 | 0.0001 | - | - | - |
| 0.9249 | 1910 | 0.0001 | - | - | - |
| 0.9298 | 1920 | 0.0001 | - | - | - |
| 0.9346 | 1930 | 0.0009 | - | - | - |
| 0.9395 | 1940 | 0.0004 | - | - | - |
| 0.9443 | 1950 | 0.0001 | - | - | - |
| 0.9492 | 1960 | 0.0008 | - | - | - |
| 0.9540 | 1970 | 0.0002 | - | - | - |
| 0.9588 | 1980 | 0.0001 | - | - | - |
| 0.9637 | 1990 | 0.0014 | - | - | - |
| 0.9685 | 2000 | 0.0004 | - | - | - |
| 0.9734 | 2010 | 0.0 | - | - | - |
| 0.9782 | 2020 | 0.0004 | - | - | - |
| 0.9831 | 2030 | 0.0001 | - | - | - |
| 0.9879 | 2040 | 0.0002 | - | - | - |
| 0.9927 | 2050 | 0.0001 | - | - | - |
| 0.9976 | 2060 | 0.0013 | - | - | - |
| **1.0** | **2065** | **-** | **0.0263** | **0.7195** | **0.7158** |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.4.0
- Accelerate: 0.29.3
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "anhtuansh/gte-multilingual-base-Matryoshka-3e-9k", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:8259", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "theo quy_định tại điều 35 nghị_định số 201 / 2013 / nđ - cp thì thời_hạn giải_quyết thủ_tục hành_chính về cấp giấy_phép thăm_dò , khai_thác nước dưới đất như sau : 1 . tiếp_nhận và kiểm_tra hồ_sơ : trong thời_hạn mười ( 10 ) ngày làm_việc , kể từ ngày nhận hồ_sơ , cơ_quan tiếp_nhận hồ_sơ có trách_nhiệm xem_xét , kiểm_tra hồ_sơ . trường_hợp hồ_sơ không hợp_lệ , cơ_quan tiếp_nhận hồ_sơ thông_báo cho tổ_chức , cá_nhân đề_nghị cấp phép để bổ_sung , hoàn_thiện hồ_sơ theo quy_định . trường_hợp hồ_sơ sau khi đã bổ_sung mà vẫn không đáp_ứng yêu_cầu theo quy_định thì cơ_quan tiếp_nhận hồ_sơ trả lại hồ_sơ và thông_báo rõ lý_do cho tổ_chức , cá_nhân đề_nghị cấp phép . 2 . thẩm_định_đề_án , báo_cáo thăm_dò , khai_thác , sử_dụng tài_nguyên nước , xả nước_thải vào nguồn nước trong hồ_sơ đề_nghị cấp phép ( sau đây gọi chung là đề_án , báo_cáo ) : a ) trong thời_hạn ba_mươi ( 30 ) ngày làm_việc , kể từ ngày nhận đủ hồ_sơ hợp_lệ theo quy_định tại khoản 1 điều này , cơ_quan tiếp_nhận hồ_sơ có trách_nhiệm thẩm_định_đề_án , báo_cáo ; nếu cần_thiết thì kiểm_tra thực_tế hiện_trường , lập hội_đồng thẩm_định_đề_án , báo_cáo . trường_hợp đủ điều_kiện cấp phép , cơ_quan tiếp_nhận hồ_sơ trình cơ_quan có thẩm_quyền cấp giấy_phép ; trường_hợp không đủ điều_kiện để cấp phép thì trả lại hồ_sơ cho tổ_chức , cá_nhân đề_nghị cấp phép và thông_báo lý_do không cấp phép ; b ) trường_hợp phải bổ_sung , chỉnh_sửa để hoàn_thiện đề_án , báo_cáo thì cơ_quan tiếp_nhận hồ_sơ gửi văn_bản thông_báo cho tổ_chức , cá_nhân đề_nghị cấp phép nêu rõ những nội_dung cần bổ_sung , hoàn_thiện đề_án , báo_cáo . thời_gian bổ_sung , hoàn_thiện hoặc lập lại đề_án , báo_cáo không tính vào thời_gian thẩm_định_đề_án , báo_cáo . thời_gian thẩm_định sau khi đề_án , báo_cáo được bổ_sung hoàn_chỉnh là hai mươi ( 20 ) ngày làm_việc ; c ) trường_hợp phải lập lại đề_án , báo_cáo , cơ_quan tiếp_nhận hồ_sơ gửi văn_bản thông_báo cho tổ_chức , cá_nhân đề_nghị cấp phép nêu rõ những nội_dung đề_án , báo_cáo chưa đạt yêu_cầu , phải làm lại và trả lại hồ_sơ đề_nghị cấp phép . 3 . trả kết_quả giải_quyết hồ_sơ_cấp phéptrong thời_hạn năm ( 05 ) ngày làm_việc , kể từ ngày nhận được giấy_phép của cơ_quan có thẩm_quyền , cơ_quan tiếp_nhận hồ_sơ thông_báo cho tổ_chức , cá_nhân đề_nghị cấp phép để thực_hiện nghĩa_vụ tài_chính và nhận giấy_phép .", "sentences": ["ai có thẩm_quyền giải_quyết tố_cáo hành_vi vi_phạm_pháp_luật trong thực_hiện nhiệm_vụ , công_vụ của cán_bộ , công_chức , viên_chức ?", "thời_hạn giải_quyết thủ_tục hành_chính về cấp giấy_phép thăm_dò , giấy_phép khai_thác nước dưới đất ?", "tôi có_thể đăng_ký ngành , nghề kinh_doanh không có trong hệ_thống ngành kinh_tế việt_nam không ?"]}, {"source_sentence": "khoản 2 điều 2 thông_tư 30 quy_định , đoàn khám bệnh , chữa bệnh_nhân_đạo là một nhóm nhân_viên y_tế trong nước , nước_ngoài do cá_nhân , tổ_chức trong nước , nước_ngoài tổ_chức để khám bệnh , chữa bệnh_nhân_đạo cho nhân_dân .", "sentences": ["tàu_thủy lưu_trú du_lịch có tiêu_chí xếp_hạng bắt_buộc không ?", "tôi muốn xin cấp lại sổ thuyền_viên thì cần những hồ_sơ gì ?", "đoàn khám bệnh , chữa bệnh_nhân_đạo là gì ?"]}, {"source_sentence": "thành_phần hồ_sơ thực_hiện tthc sửa_đổi , bổ_sung / cấp lại giấy chứng_nhận lưu_hành tự_do ( cfs ) đối_với hàng_hóa xuất_khẩu thuộc phạm_vi quản_lý của bộ nông_nghiệp và phát_triển nông_thôn ( quy_định tại quyết_định số 1312 / qđ - bnn - qlcl ngày 22 / 4 / 2019 về việc công_bố thủ_tục hành_chính được thay_thế , tthc bị bãi_bỏ lĩnh_vực quản_lý chất_lượng nông_lâm_sản và thủy_sản thuộc phạm_vi chức_năng quản_lý của bộ nông_nghiệp và phát_triển nông_thôn ) : - văn_bản đề_nghị sửa_đổi , bổ_sung / cấp lại cfs của thương_nhân : 01 bản_chính . - các giấy_tờ liên_quan đến việc sửa_đổi , bổ_sung / cấp lại cfs.", "sentences": ["kính gửi cục quản_lý chất_lượng nông_lâm_sản và thuỷ_sản , công_ty tôi đã được quý cục cấp giấy chứng_nhận lưu_hành tự_do cfs , nay tôi muốn sửa_đổi một_số thông_tin trên giấy cfs , vậy hồ_sơ đề_nghị sửa_đổi cần những gì ?", "thời_gian để được cấp quyết_định chỉ_định tổ_chức đánh_giá sự phù_hợp lĩnh_vực sản_phẩm , hàng_hóa_vật_liệu xây_dựng đối_với trường_hợp thay_đổi , bổ_sung phạm_vi , lĩnh_vực được chỉ_định", "hồ_sơ đăng_ký dự_tuyển lao_động của người lao_động gồm những gì ?"]}, {"source_sentence": "hồ_sơ đề_nghị gia_hạn giấy_phép cung_cấp dịch_vụ trò_chơi điện_tử g1 trên mạng bao_gồm : - đơn đề_nghị gia_hạn giấy_phép bao_gồm : tên , địa_chỉ của doanh_nghiệp ; số , nơi cấp , ngày cấp giấy chứng_nhận đăng_ký doanh_nghiệp ; số , ngày cấp , ngày hiệu_lực của giấy_phép đã được cấp ; lý_do đề_nghị gia_hạn giấy_phép ; cam_kết của doanh_nghiệp về các nội_dung kê_khai trong đơn đề_nghị .", "sentences": ["thành_phần hồ_sơ", "nhà đầu_tư phải nộp bao_nhiêu bộ hồ_sơ đề_nghị thủ_tướng chính_phủ cho phép đầu_tư gián_tiếp ra nước_ngoài đối_với các trường_hợp đầu_tư khác quy_định tại điều 9 nghị_định 135 / 2015 / nđ - cp ?", "phần vốn của thành_viên chưa góp vốn trong thời_hạn 90 ngày , kể từ ngày được cấp giấy chứng_nhận đăng_ký doanh_nghiệp trong công_ty tnhh hai thành_viên trở lên được xử_lý như thế_nào ? trường_hợp có người nhận mua lại phần vốn chưa góp này thì công_ty có phải đăng_ký thay_đổi thành_viên với cơ_quan đăng_ký kinh_doanh không ? hồ_sơ , thủ_tục được quy_định như thế_nào ?"]}, {"source_sentence": "quản_lý nhập_khẩu hàng_hóa bị điều_tra , phục_vụ công_tác điều_tra , áp_dụng biện_pháp phòng_vệ thương_mại .", "sentences": ["mục_đích của thủ_tục khai_báo là gì ?", "đơn_vị chúng_tôi đã được cấp chứng_chỉ năng_lực hoạt_động xây_dựng nhưng hiện_nay chúng_tôi thay_đổi người đại_diện pháp_luật của đơn_vị . vậy chúng_tôi có phải làm thủ_tục nào để thay_đổi người đại_diện theo pháp_luật của chúng_tôi trên chứng_chỉ ?", "trường_hợp nào thì được cấp lại giấy xác_nhận đăng_ký công_cụ hỗ_trợ , thủ_tục cấp lại giấy xác_nhận đăng_ký công_cụ hỗ_trợ như thế_nào ?"]}], "model-index": [{"name": "SentenceTransformer based on anhtuansh/gte-multilingual-base-Matryoshka-3e-9k", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5849673202614379, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7440087145969498, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.7962962962962963, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8529411764705882, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5849673202614379, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.24800290486565, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.15925925925925924, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08529411764705883, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5849673202614379, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7440087145969498, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.7962962962962963, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8529411764705882, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7195299486597906, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6768323996265172, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6811278169033975, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5784313725490197, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7440087145969498, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.7973856209150327, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8518518518518519, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5784313725490197, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2480029048656499, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.15947712418300652, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08518518518518517, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5784313725490197, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7440087145969498, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.7973856209150327, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8518518518518519, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7158400533177708, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6722442680776016, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6765528283602505, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,273 |
knowledgator/gliclass-llama-1.3B-v1.0
|
knowledgator
|
zero-shot-classification
|
[
"onnx",
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"license:apache-2.0",
"region:us"
] | 2024-09-06T06:51:38Z |
2024-09-26T14:17:00+00:00
| 39 | 1 |
---
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
language:
- en
license: apache-2.0
metrics:
- f1
pipeline_tag: zero-shot-classification
tags:
- text classification
- zero-shot
- small language models
- RAG
- sentiment analysis
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done at a single forward path.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic data and can be used in commercial applications.
This version of the model utilize the [LLM2Vec](https://github.com/McGill-NLP/llm2vec/tree/main/llm2vec) approach for converting modern decoders to bi-directional encoder. It brings the following benefits:
* Enhanced performance and generalization capabilities;
* Support for Flash Attention;
* Extended context window.
### How to use:
First of all, you need to install GLiClass library:
```bash
pip install gliclass
```
Than you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-llama-1.3B-v1.0")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-llama-1.3B-v1.0")
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0] #because we have one text
for result in results:
print(result["label"], "=>", result["score"])
```
### Benchmarks:
While the model is some how comparable to DeBERTa version in zero-shot setting, it demonstrates state-of-the-art performance in few-shot setting.

### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG).
| null |
Non_BioNLP
|
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done at a single forward path.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic data and can be used in commercial applications.
This version of the model utilize the [LLM2Vec](https://github.com/McGill-NLP/llm2vec/tree/main/llm2vec) approach for converting modern decoders to bi-directional encoder. It brings the following benefits:
* Enhanced performance and generalization capabilities;
* Support for Flash Attention;
* Extended context window.
### How to use:
First of all, you need to install GLiClass library:
```bash
pip install gliclass
```
Than you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-llama-1.3B-v1.0")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-llama-1.3B-v1.0")
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0] #because we have one text
for result in results:
print(result["label"], "=>", result["score"])
```
### Benchmarks:
While the model is some how comparable to DeBERTa version in zero-shot setting, it demonstrates state-of-the-art performance in few-shot setting.

### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG).
|
{"datasets": ["MoritzLaurer/synthetic_zeroshot_mixtral_v0.1"], "language": ["en"], "license": "apache-2.0", "metrics": ["f1"], "pipeline_tag": "zero-shot-classification", "tags": ["text classification", "zero-shot", "small language models", "RAG", "sentiment analysis"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,274 |
jhjlab/distilbert-base-uncased-finetuned-emotion
|
jhjlab
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-15T05:25:56Z |
2023-08-15T05:45:32+00:00
| 8 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.9255
name: Accuracy
- type: f1
value: 0.9254982386149765
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2121
- Accuracy: 0.9255
- F1: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8127 | 1.0 | 250 | 0.3058 | 0.9095 | 0.9088 |
| 0.2419 | 2.0 | 500 | 0.2121 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2121
- Accuracy: 0.9255
- F1: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8127 | 1.0 | 250 | 0.3058 | 0.9095 | 0.9088 |
| 0.2419 | 2.0 | 500 | 0.2121 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9255, "name": "Accuracy"}, {"type": "f1", "value": 0.9254982386149765, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,275 |
espnet/owls_4B_180K
|
espnet
|
automatic-speech-recognition
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"speech-translation",
"multilingual",
"dataset:owsm_v3.1",
"arxiv:2502.10373",
"license:cc-by-4.0",
"region:us"
] | 2025-02-14T00:30:44Z |
2025-03-10T07:21:58+00:00
| 215 | 4 |
---
datasets:
- owsm_v3.1
language: multilingual
license: cc-by-4.0
metrics:
- wer
- bleu
tags:
- espnet
- audio
- automatic-speech-recognition
- speech-translation
---
## OWLS: Open Whisper-style Large-scale neural model Suite
[Paper](https://arxiv.org/abs/2502.10373)
OWLS is a suite of Whisper-style models, designed to help researchers understand the scaling properties of speech models.
OWLS models range from 0.25B to 18B parameters, and are trained on up to 360K hours of data.
OWLS models are developed using [ESPnet](https://github.com/espnet/espnet), and support multilingual Speech Recognition and Translation.
It is part of the [OWSM](https://www.wavlab.org/activities/2024/owsm/) project, which aims to develop fully open speech foundation models using publicly available data and open-source toolkits.
The model in this repo has 4.66B parameters in total and is trained on 180k hours of public speech data.
Specifically, it supports the following speech-to-text tasks:
- Speech recognition
- Any-to-any-language speech translation
- Utterance-level alignment
- Long-form transcription
- Language identification
## Use this model
You can use this model in your projects with the following code:
```python
# make sure espnet is installed: pip install espnet
from espnet2.bin.s2t_inference import Speech2Text
model = Speech2Text.from_pretrained(
"espnet/owls_4B_180K"
)
speech, rate = soundfile.read("speech.wav")
speech = librosa.resample(speech, orig_sr=rate, target_sr=16000) # make sure 16k sampling rate
text, *_ = model(speech)[0]
```
## Citations
```
@article{chen2025owls,
title={OWLS: Scaling Laws for Multilingual Speech Recognition and Translation Models},
author={Chen, William and Tian, Jinchuan and Peng, Yifan and Yan, Brian and Yang, Chao-Han Huck and Watanabe, Shinji},
journal={arXiv preprint arXiv:2502.10373},
year={2025}
}
```
| null |
Non_BioNLP
|
## OWLS: Open Whisper-style Large-scale neural model Suite
[Paper](https://arxiv.org/abs/2502.10373)
OWLS is a suite of Whisper-style models, designed to help researchers understand the scaling properties of speech models.
OWLS models range from 0.25B to 18B parameters, and are trained on up to 360K hours of data.
OWLS models are developed using [ESPnet](https://github.com/espnet/espnet), and support multilingual Speech Recognition and Translation.
It is part of the [OWSM](https://www.wavlab.org/activities/2024/owsm/) project, which aims to develop fully open speech foundation models using publicly available data and open-source toolkits.
The model in this repo has 4.66B parameters in total and is trained on 180k hours of public speech data.
Specifically, it supports the following speech-to-text tasks:
- Speech recognition
- Any-to-any-language speech translation
- Utterance-level alignment
- Long-form transcription
- Language identification
## Use this model
You can use this model in your projects with the following code:
```python
# make sure espnet is installed: pip install espnet
from espnet2.bin.s2t_inference import Speech2Text
model = Speech2Text.from_pretrained(
"espnet/owls_4B_180K"
)
speech, rate = soundfile.read("speech.wav")
speech = librosa.resample(speech, orig_sr=rate, target_sr=16000) # make sure 16k sampling rate
text, *_ = model(speech)[0]
```
## Citations
```
@article{chen2025owls,
title={OWLS: Scaling Laws for Multilingual Speech Recognition and Translation Models},
author={Chen, William and Tian, Jinchuan and Peng, Yifan and Yan, Brian and Yang, Chao-Han Huck and Watanabe, Shinji},
journal={arXiv preprint arXiv:2502.10373},
year={2025}
}
```
|
{"datasets": ["owsm_v3.1"], "language": "multilingual", "license": "cc-by-4.0", "metrics": ["wer", "bleu"], "tags": ["espnet", "audio", "automatic-speech-recognition", "speech-translation"]}
|
task
|
[
"TRANSLATION"
] | 45,276 |
vubacktracking/mamba_text_classification
|
vubacktracking
| null |
[
"transformers",
"pytorch",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | 2024-04-28T09:30:40Z |
2024-04-29T01:13:06+00:00
| 24 | 1 |
---
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: mamba_text_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mamba for Text Classification
This model was trained from scratch on IMDB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1901
- Accuracy: 0.9536
It achieves the following results on the evaluation set:
- Loss: 0.1981
- Accuracy: 0.94
## Model description
Mamba model for text classification
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0205 | 0.1 | 625 | 0.2462 | 0.928 |
| 0.671 | 0.2 | 1250 | 0.1958 | 0.9408 |
| 0.5961 | 0.3 | 1875 | 0.2661 | 0.9344 |
| 0.0167 | 0.4 | 2500 | 0.2171 | 0.9412 |
| 0.0007 | 0.5 | 3125 | 0.2095 | 0.9448 |
| 2.6807 | 0.6 | 3750 | 0.1888 | 0.9492 |
| 0.0155 | 0.7 | 4375 | 0.2249 | 0.95 |
| 0.0021 | 0.8 | 5000 | 0.1991 | 0.9528 |
| 0.0134 | 0.9 | 5625 | 0.1920 | 0.9524 |
| 0.1525 | 1.0 | 6250 | 0.1901 | 0.9536 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mamba for Text Classification
This model was trained from scratch on IMDB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1901
- Accuracy: 0.9536
It achieves the following results on the evaluation set:
- Loss: 0.1981
- Accuracy: 0.94
## Model description
Mamba model for text classification
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0205 | 0.1 | 625 | 0.2462 | 0.928 |
| 0.671 | 0.2 | 1250 | 0.1958 | 0.9408 |
| 0.5961 | 0.3 | 1875 | 0.2661 | 0.9344 |
| 0.0167 | 0.4 | 2500 | 0.2171 | 0.9412 |
| 0.0007 | 0.5 | 3125 | 0.2095 | 0.9448 |
| 2.6807 | 0.6 | 3750 | 0.1888 | 0.9492 |
| 0.0155 | 0.7 | 4375 | 0.2249 | 0.95 |
| 0.0021 | 0.8 | 5000 | 0.1991 | 0.9528 |
| 0.0134 | 0.9 | 5625 | 0.1920 | 0.9524 |
| 0.1525 | 1.0 | 6250 | 0.1901 | 0.9536 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mamba_text_classification", "results": []}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,278 |
Triangle104/granite-3.1-2b-base-Q5_K_M-GGUF
|
Triangle104
| null |
[
"transformers",
"gguf",
"language",
"granite-3.1",
"llama-cpp",
"gguf-my-repo",
"base_model:ibm-granite/granite-3.1-2b-base",
"base_model:quantized:ibm-granite/granite-3.1-2b-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-12-26T11:54:33Z |
2024-12-26T11:55:10+00:00
| 5 | 0 |
---
base_model: ibm-granite/granite-3.1-2b-base
library_name: transformers
license: apache-2.0
tags:
- language
- granite-3.1
- llama-cpp
- gguf-my-repo
---
# Triangle104/granite-3.1-2b-base-Q5_K_M-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3.1-2b-base`](https://huggingface.co/ibm-granite/granite-3.1-2b-base) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.1-2b-base) for more details on the model.
---
Model details:
-
Granite-3.1-2B-Base extends the context length of Granite-3.0-2B-Base
from 4K to 128K using a progressive training strategy by increasing the
supported context length in increments while adjusting RoPE theta until
the model has successfully adapted to desired length of 128K. This
long-context pre-training stage was performed using approximately 500B
tokens.
Developers: Granite Team, IBM
GitHub Repository: ibm-granite/granite-3.1-language-models
Website: Granite Docs
Paper: Granite 3.1 Language Models (coming soon)
Release Date: December 18th, 2024
License: Apache 2.0
Supported Languages:
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech,
Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.1
models for languages beyond these 12 languages.
Intended Use:
Prominent use cases of LLMs in text-to-text generation include
summarization, text classification, extraction, question-answering, and
other long-context tasks. All Granite Base models are able to handle
these tasks as they were trained on a large amount of data from various
domains. Moreover, they can serve as baseline to create specialized
models for specific application scenarios.
Generation:
This is a simple example of how to use Granite-3.1-2B-Base model.
Install the following libraries:
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
Then, copy the code snippet below to run the example.
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "auto"
model_path = "ibm-granite/granite-3.1-2b-base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "Where is the Thomas J. Watson Research Center located?"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens,
max_length=4000)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
Model Architecture:
Granite-3.1-2B-Base is based on a decoder-only dense transformer
architecture. Core components of this architecture are: GQA and RoPE,
MLP with SwiGLU, RMSNorm, and shared input/output embeddings.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/granite-3.1-2b-base-Q5_K_M-GGUF --hf-file granite-3.1-2b-base-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/granite-3.1-2b-base-Q5_K_M-GGUF --hf-file granite-3.1-2b-base-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/granite-3.1-2b-base-Q5_K_M-GGUF --hf-file granite-3.1-2b-base-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/granite-3.1-2b-base-Q5_K_M-GGUF --hf-file granite-3.1-2b-base-q5_k_m.gguf -c 2048
```
| null |
Non_BioNLP
|
# Triangle104/granite-3.1-2b-base-Q5_K_M-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3.1-2b-base`](https://huggingface.co/ibm-granite/granite-3.1-2b-base) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.1-2b-base) for more details on the model.
---
Model details:
-
Granite-3.1-2B-Base extends the context length of Granite-3.0-2B-Base
from 4K to 128K using a progressive training strategy by increasing the
supported context length in increments while adjusting RoPE theta until
the model has successfully adapted to desired length of 128K. This
long-context pre-training stage was performed using approximately 500B
tokens.
Developers: Granite Team, IBM
GitHub Repository: ibm-granite/granite-3.1-language-models
Website: Granite Docs
Paper: Granite 3.1 Language Models (coming soon)
Release Date: December 18th, 2024
License: Apache 2.0
Supported Languages:
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech,
Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.1
models for languages beyond these 12 languages.
Intended Use:
Prominent use cases of LLMs in text-to-text generation include
summarization, text classification, extraction, question-answering, and
other long-context tasks. All Granite Base models are able to handle
these tasks as they were trained on a large amount of data from various
domains. Moreover, they can serve as baseline to create specialized
models for specific application scenarios.
Generation:
This is a simple example of how to use Granite-3.1-2B-Base model.
Install the following libraries:
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
Then, copy the code snippet below to run the example.
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "auto"
model_path = "ibm-granite/granite-3.1-2b-base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "Where is the Thomas J. Watson Research Center located?"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens,
max_length=4000)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
Model Architecture:
Granite-3.1-2B-Base is based on a decoder-only dense transformer
architecture. Core components of this architecture are: GQA and RoPE,
MLP with SwiGLU, RMSNorm, and shared input/output embeddings.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/granite-3.1-2b-base-Q5_K_M-GGUF --hf-file granite-3.1-2b-base-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/granite-3.1-2b-base-Q5_K_M-GGUF --hf-file granite-3.1-2b-base-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/granite-3.1-2b-base-Q5_K_M-GGUF --hf-file granite-3.1-2b-base-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/granite-3.1-2b-base-Q5_K_M-GGUF --hf-file granite-3.1-2b-base-q5_k_m.gguf -c 2048
```
|
{"base_model": "ibm-granite/granite-3.1-2b-base", "library_name": "transformers", "license": "apache-2.0", "tags": ["language", "granite-3.1", "llama-cpp", "gguf-my-repo"]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | 45,279 |
softcatala/translate-jpn-cat
|
softcatala
|
translation
|
[
"opennmt",
"translation",
"ca",
"jp",
"license:mit",
"region:us"
] | 2024-08-20T08:20:03Z |
2024-08-20T08:34:43+00:00
| 9 | 2 |
---
language:
- ca
- jp
library_name: opennmt
license: mit
metrics:
- bleu
tags:
- translation
inference: false
---
### Introduction
Japanese - Catalan translation model based on OpenNMT. These are the same models that we have in production at https://www.softcatala.org/traductor/.
### Usage
```bash
pip3 install ctranslate2 pyonmttok
```
Simple translation using Python:
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download
model_dir = snapshot_download(repo_id="softcatala/translate-jpn-cat", revision="main")
tokenizer=pyonmttok.Tokenizer(mode="none", sp_model_path = model_dir + "/sp_m.model")
tokenized=tokenizer.tokenize("こんにちは")
import ctranslate2
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
Hola món!
```
## Benchmarks
| testset | BLEU |
|---------------------------------------|-------|
| test dataset (from train/dev/test) | 24.9 |
| Flores200 dataset | 17.8 |
## Additional information
* https://github.com/Softcatala/nmt-models
* https://github.com/Softcatala/parallel-catalan-corpus
| null |
Non_BioNLP
|
### Introduction
Japanese - Catalan translation model based on OpenNMT. These are the same models that we have in production at https://www.softcatala.org/traductor/.
### Usage
```bash
pip3 install ctranslate2 pyonmttok
```
Simple translation using Python:
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download
model_dir = snapshot_download(repo_id="softcatala/translate-jpn-cat", revision="main")
tokenizer=pyonmttok.Tokenizer(mode="none", sp_model_path = model_dir + "/sp_m.model")
tokenized=tokenizer.tokenize("こんにちは")
import ctranslate2
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
Hola món!
```
## Benchmarks
| testset | BLEU |
|---------------------------------------|-------|
| test dataset (from train/dev/test) | 24.9 |
| Flores200 dataset | 17.8 |
## Additional information
* https://github.com/Softcatala/nmt-models
* https://github.com/Softcatala/parallel-catalan-corpus
|
{"language": ["ca", "jp"], "library_name": "opennmt", "license": "mit", "metrics": ["bleu"], "tags": ["translation"], "inference": false}
|
task
|
[
"TRANSLATION"
] | 45,280 |
research-dump/bge-base-en-v1.5_wikinews_masked_wikinews_masked
|
research-dump
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"region:us"
] | 2025-02-07T05:30:10Z |
2025-02-07T05:39:48+00:00
| 6 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: '###Instruction: Multi-class classification, answer with one of the labels:
[delete, keep, speedy delete, comment] : ###Input: Portal:Georgia: Portal:Georgia
It seems the day of the disambiguation portal has been and gone. No further use
as we can and do use mainspace disambig pages now. BRS (Talk) (Contribs) 13:57,
4 May 2014 (UTC) [ reply ] Comments Votes Remove Somehow we''ve recently fallen
out of the very-long-standing habit of putting these deletion nums under sections
by date. I suspect this is a matter of not having a tool that helps with the
technical operation of adding noms (I hope eventually to have such assistance
for creating, closing, and archiving various kinds of noms, keeping in mind there
are technical idiosyncrasies to particular kinds of noms). -- Pi zero ( talk
) 14:29, 4 May 2014 (UTC) [ reply ] [MASK] There''s a framework in place for migrating
to subpages (which would semi-automate all the opening and closing and archiving
and {{ votings }} and things), but that''s something for the talk page. Date sections
make less sense when there''s very few deletion proposals each week. Microchip08
( talk ) 07:49, 6 May 2014 (UTC) [ reply ] They''re handy for keeping track of
when noms took place, which is useful all around. And taking it all to the talk
page seems needlessly bureaucratic given, as you say, the low traffic involved. --
Pi zero ( talk ) 12:40, 6 May 2014 (UTC) [ reply ] [MASK] ; it does no harm, especially
as the mainspace Georgia disambiguates between categories and not these portals.
This only really needs to go if one of the Georgia portals gets deleted; at the
moment it''s serving a useful purpose. Microchip08 ( talk ) 07:49, 6 May 2014
(UTC) [ reply ] [MASK] Portals as an entire class of pages on en.wn are deprecated
in favor of categories. -- Pi zero ( talk ) 12:40, 6 May 2014 (UTC) [ reply ]
The above discussion is preserved as an archive. Please do not modify it. Subsequent
comments should be made on the appropriate discussion page (such as this page''s
talk page , admin action alerts or the talk page of the nominated article). No
further edits should be made to this page. ###Output: '
- text: '###Instruction: Multi-class classification, answer with one of the labels:
[delete, keep, speedy delete, comment] : ###Input: User:Azhar sabri: I''m not
sure why this DR was opened as there was a speedy deletion request on the page,
which I''ve now actioned. BRS (Talk) (Contribs) 16:40, 13 April 2014 (UTC) [ reply
] User:Azhar sabri In my opinion, this seems like inappropriate use of the userspace
based on my observations found here and here where there seemed to be a widespread
"spam" attack of sorts. I''ve tried to warn the user to stick to the missions
of English Wikinews, but I''m not sure if it''s an appropriate attempt at communication
or not given the circumstances, so I would like the community here to review that
too. TeleComNasSprVen ( talk ) 20:20, 6 March 2014 (UTC) [ reply ] Votes The above
discussion is preserved as an archive. Please do not modify it. Subsequent comments
should be made on the appropriate discussion page (such as this page''s talk page
, admin action alerts or the talk page of the nominated article). No further edits
should be made to this page. ###Output: '
- text: '###Instruction: Multi-class classification, answer with one of the labels:
[delete, keep, speedy delete, comment] : ###Input: Interviews: Interviews WN:Story
preparation/Office hours:Frank Schulenberg Last edited in December 2009. Highly
unlikely to be published now. WN:Story preparation/Wikinews Interviews: Jim McGreevey
Last edited in September 2007. Highly unlikely to be published now. These have
been put together because they are unpublished interviews from several years ago.
-- Green Giant ( talk ) 17:54, 16 April 2020 (UTC) [ reply ] Comments Votes The
above discussion is preserved as an archive. Please do not modify it. Subsequent
comments should be made on the appropriate discussion page (such as this page''s
talk page , admin action alerts or the talk page of the nominated article). No
further edits should be made to this page. ###Output: '
- text: '###Instruction: Multi-class classification, answer with one of the labels:
[delete, keep, speedy delete, comment] : ###Input: AZOperator: Files uploaded
by AZOperator ( t · c · b ) for use in US: Judge enters ''not guilty'' for Florida
school shooting suspect, audio/video forced release with FBI timeline File:Nikolas
Cruz campus movements.png File:Suspect Movement in Building 12.png It is my opinion,
these images do not qualify for a fair use exception with regards to reusing copyrighted
images. The uploader has stated they were found on Commons , however, if they
were there, I cannot locate them now. The aspect of the story they are used to
illustrate is by no means ''breaking news'' furthermore it would be easy to create
free versions by anyone with a graphics program, a free satellite photo, and a
few hours of spare time. As best as I can determine, the images are the work of
the Sun-Sentinel as seen in this article . Thank you for your time, -- SVT Cobra
19:26, 17 March 2018 (UTC) [ reply ] Comments Hello SVTCobra. "a few hours of
spare time" is a wonderful description of one of the resources that journalistic
work requires. :-) Concur that the images may be reproduced by hand to convey
the same information (or even more) in a free way. -- Gryllida ( talk ) 02:41,
19 March 2018 (UTC) [ reply ] Votes Remove as argued. Article works well without
these images and shouldn''t contain non-free images. -- Gwyndon ( talk ) 02:33,
19 March 2018 (UTC) [ reply ] Remove per my own nomination. -- SVT Cobra 14:09,
22 March 2018 (UTC) [ reply ] Remove per nomination and because the article has
been deleted. Green Giant ( talk ) 22:05, 10 April 2018 (UTC) [ reply ] Remove
as per nominator. The article does not exist. Harsh Rathod Poke me! 04:46, 25
June 2018 (UTC) [ reply ] The above discussion is preserved as an archive. Please
do not modify it. Subsequent comments should be made on the appropriate discussion
page (such as this page''s talk page , admin action alerts or the talk page of
the nominated article). No further edits should be made to this page. ###Output: '
- text: '###Instruction: Multi-class classification, answer with one of the labels:
[delete, keep, speedy delete, comment] : ###Input: Userspace drafts: Userspace
drafts I’ve come across a large number of draft articles in userspace. Most seem
to be very stale but do not fit into a speedy deletion criterion. I’m listing
them here in batches for community scrutiny. The first batch are all in anonymous
IP userspace: User:194.19.60.4/Norwegian Broadcasting (P3) broadcast story about
Muhammed as pig Last edit was February 2006. User:194.247.230.196/Wikipedia is
islamophobic Last edit was March 2006. User:202.69.163.8/Philippines Last edit
was March 2008. User:210.211.194.223/HAS HUMANITY A FUTURE Last edit was February
2006. User:41.204.73.49/Cameroon 2011 presidential election Last edit was October
2011. User:66.65.46.8/Reaction to Danish Cartoons Last edit was February 2006.
Thank you for reading this far. -- Green Giant ( talk ) 18:14, 30 July 2021 (UTC)
[ reply ] Comments Votes [MASK] - as nominator. -- Green Giant ( talk ) 18:14,
30 July 2021 (UTC) [ reply ] No objections in two weeks. The result was to [MASK]
these pages. -- Cromium ( talk ) 01:26, 13 August 2021 (UTC) [ reply ] The above
discussion is preserved as an archive . Please do not modify it. Subsequent comments
should be made on the appropriate discussion page, such as the current discussion
page. No further edits should be made to this discussion. ###Output: '
inference: true
---
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 4 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Category:Trucks: Category:Trucks [ edit ] Ehhh, long time, no use of DR. I don\'t expect killing an encyclopedic category like this to be a difficult decision. -- Brian McNeil / talk 19:41, 3 October 2012 (UTC) [ reply ] Comments [ edit ] Just for the truckers in this DR . -- Brian McNeil / talk 22:20, 3 October 2012 (UTC) [ reply ] I\'m \'bundling\' Portal:Trucks/Wikipedia with this, despite the fact that it\'s been set up in a bizarre way that doesn\'t even use the category (i.e. suggests a cut\'n\'paste by someone with no knowledge of DPL). -- Brian McNeil / talk 22:34, 3 October 2012 (UTC) [ reply ] I\'ve no objection to the bundling. Agree it suggests dearth of DPL knowledge. -- Pi zero ( talk ) 22:59, 3 October 2012 (UTC) [ reply ] Hello I creator portal\xa0; w:Portal:Trucks an french category fr:Catégorie:Camion . FrankyLeRoutier ( talk ) 15:57, 11 October 2012 (UTC) [ reply ] Votes [ edit ] [MASK]! [MASK]! [MASK]! 100% encyclopedic category. Who ever saw a "Truck news" section in a paper — other than one that comes in a plain brown wrapper. :P -- Brian McNeil / talk 19:41, 3 October 2012 (UTC) [ reply ] Remove Yup. Encyclopedic through and through. -- Pi zero ( talk ) 20:05, 3 October 2012 (UTC) [ reply ] Remove Confusing. -- LauraHale ( talk ) 03:59, 23 October 2012 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page\'s talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: '</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Wikinews:Story preparation/North Korean leader and dictator Kim Jong Il dies at age XX: Wikinews:Story preparation/North Korean leader and dictator Kim Jong Il dies at age XX [ edit ] He died in December 2011, a story that Wikinews didn\'t cover despite having this (unreferenced and clearly forgotten) draft sitting around. As the window of opportunity for this story has been missed, there\'s no point in keeping it around. Bencherlite ( talk ) 12:27, 2 July 2012 (UTC) [ reply ] Comments [ edit ] [MASK] Actually we did cover his death... just not using the prepped article. See North Korean leader Kim Jong-il dead . -- Patrick M (TUFKAAP) ( talk ) 15:40, 5 July 2012 (UTC) [ reply ] Shame that nobody added that article to Category:Kim Jong-il , which is what I checked. Would someone care to do the honours to save me adding a protected edit request? The fact that the death was covered without using the "prepared" story strengthens my view about the general uselessness of such items. Bencherlite ( talk ) 11:28, 11 July 2012 (UTC) [ reply ] Votes [ edit ] Please vote using [MASK] , Remove , Neutral , or [MASK] followed by signature speedy candidate -- Brian McNeil / talk 14:45, 2 July 2012 (UTC) [ reply ] speedy candidate Ooops to missing the story but clearly no longer relevant. -- LauraHale ( talk ) 22:16, 2 July 2012 (UTC) [ reply ] Remove Unsourced drafts are reasonably useless, and trying to compete with the mainstream for big events is largely pointless with the real strength being the wealth of under-reported events that can be reported better. As such, I\'d have been inclined to [MASK] this even if the author(s) hadn\'t forgotten about it and allowed it to stale . Blood Red Sandman (Talk) (Contribs) 17:21, 3 July 2012 (UTC) [ reply ] How much of Category:Prepared stories do you feel like nuking? Most of the stories in there appear to be unsourced drafts, or useless, or both e.g. Wikinews:Story preparation/(Island country) evacuated, vanishes beneath sea , Wikinews:Story preparation/Former French President Jacques Chirac dies , Wikinews:Story preparation/Wikimedia founder Jimmy Wales dies aged XXX . I\'ll tag a few others as "abandoned" and see what happens. Bencherlite ( talk ) 10:49, 4 July 2012 (UTC) [ reply ] We\'ve been mulling over, for some time now, housecleaning the story-preparation section. On the only modern occasions I recall where prepared story templates were actually used, they prevented us from publishing because they had been prepared without sourcing. We may want to consider overhauling the whole concept of the section to make it compatible with review (perhaps with something similar to the "script review" concept we\'ve been using for Crtew\'s weekly audio briefs). -- Pi zero ( talk ) 12:35, 4 July 2012 (UTC) [ reply ] I\'ve tagged over thirty "story preparation" pages as {{ abandoned }}, some dating back over five years. Most of the pre-"prepared" obituaries could be deleted without fuss if we had a new proposed deletion criterion that said something like "5. Prepared stories for future events (including a person\'s death) with inadequate references for existing facts". Thoughts? Bencherlite ( talk ) 14:00, 4 July 2012 (UTC) [ reply ] I could quite happily go with a PD criteria on unsourced prepwork. Probably need an SD on "prepared, unused, event now past". -- Brian McNeil / talk 14:10, 4 July 2012 (UTC) [ reply ] Yes to PD and SD. -- Pi zero ( talk ) 17:07, 4 July 2012 (UTC) [ reply ] I\'m uncomfortable frying long-untouched interview ideas (and, by extension, possible features/investigations) as {{ abandoned }}; two days\' notice seems a tad cruel on things that may or may not be revived. I have just fryed one that was my own from god knows how long ago. Events that have passed seem suitable, and a speedy option seems sensible (I\'ve nommed a pile of such before and they were deleted without controversy). I\'ll make exact suggestions for WN:PROD and WN:SD additions in a minute on the Water Cooler; in answer to the question, I\'d prune a lot of the prepped stuff. Blood Red Sandman (Talk) (Contribs) 19:28, 4 July 2012 (UTC) [ reply ] Remove Prepped stories were good in the pre flagged revs days. Not so much now. Granted they could be useful again as long as they\'re at least backed some sourced information. -- Patrick M (TUFKAAP) ( talk ) 15:36, 5 July 2012 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page\'s talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: '</li><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Template:Citequote: Template:Citequote [ edit ] This is a Wikipedia-style template that doe not fit our workflow, and its existence can only serve to occasionally confuse some newcomer. -- Pi zero ( talk ) Remove as nominator. -- Pi zero ( talk ) 11:37, 25 April 2012 (UTC) [ reply ] Remove Totally Wikipedia-ish. Why did it even end up here? Diego Grez ( talk ) 22:14, 26 April 2012 (UTC) [ reply ] Remove Not something we really have a need for. Blood Red Sandman (Talk) (Contribs) 11:26, 29 April 2012 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page's talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: "</li></ul> |
| 2 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Template:WhatAmerica?: However, there clearly is a consensus that if we are to have this template, we aren\'t to use it in its present form. . Template:WhatAmerica? [ edit ] Bad taste? Probably. This was meant as a rather pointed joke and, seemingly, didn\'t come across that way. However , this is a recurring problem in that United States citizens self-identify as "Americans". With Wikinews intended for an international audience, the point being made rather bluntly by this template needs taken on-board by contributors within the United States. -- Brian McNeil / talk 02:26, 4 March 2012 (UTC) [ reply ] Remove Do [MASK] the template on the grounds of taste as mentioned by the nominator. I\'m all for a separate discussion on this point as a matter of style. Where do we start that? Crtew ( talk ) 02:47, 4 March 2012 (UTC) [ reply ] [MASK] About this template: it is opinion disguised as editing, it is a misrepresentation of Wikinews’ style guide (which may be further grounds for [MASK]), and it is condescending in tone, visual style and substance -- Crtew ( talk ) 16:46, 4 March 2012 (UTC) [ reply ] [MASK] About the term American : One can at the same time hold an international mindset, refer to a people by what they call themselves, and self-identify as an "American" without contradiction. The premise of the template and the self-nomination above is that it wrong to "self-identify" as an American and by doing so a writer doesn\'t have the "proper" mindset. Self-identification : Just because U.S. citizens call themselves "Americans" doesn\'t mean they don\'t recognize that others in the western hemisphere also use the term. That\'s not an issue. The template does not show any kind of understanding for the meaning of the term as it is used in the culture. Self-determination is also a concept that should also be reflected in policy. For example, most would never think of calling Tibetans "Chinese" out of respect for their rights. Like it or not, people who live in the United States use the term "America" and call themselves "Americans," and that\'s not going to change anytime soon. America is even part of the country\'s name (U.S.A.). The term USians , which has been used frequently at Wikinews, is an imaginary term and nobody inside the United States or around the world uses it. Is it the proper use of Wikinews to invent a term and impose it on an entire people? Is Wikinews displaying a counter-bias by raising concerns over the term "American"? Furthermore, I\'ve also seen the term Amerika used at Wikinews. This can be derogatory depending on its source, target and usage, and it similarly should never appear in editing comments. I\'ll raise this in the style discussion later, but if you scan the Wikinews style guide , you will find the term "American" several times. Either editors should change the Wikinews style guide or refrain from bringing their own issues into critiques. -- Crtew ( talk ) 16:46, 4 March 2012 (UTC) [ reply ] [MASK] I\'ve several comments. We are an international publication and, verily, shouldn\'t insult the rest of the supercontinent by calling the US "America". There\'s more to it than that, a whole horribly unhealthy US-centrist mindset. I note that BRS\'s criticism of the article was not limited to the use of the word, but was in fact about omission of significant facts merely because they weren\'t about USians — learning to avoid such bias is part of learning to be a Wikinewsie, and learning not to call USians "Americans" is part of learning to avoid such bias. The template is therefore slightly off-target as it focuses solely on the word usage, while what is likely needed is a template addressing a somewhat wider point; it would be advisable to succinctly mention the word along the way. Since the template is single-author, and "author request" is a speedy-deletion criterion, and the author is an admin, the author in this case can simply [MASK] it. -- Pi zero ( talk ) 12:34, 4 March 2012 (UTC) [ reply ] [MASK] Crtew\'s argument is difficult to follow. So far as I can determine, xyr argument is that it\'s okay for him to use something that some people find insulting becuase other people do it, but if anyone dare respond, that is condescending and unacceptible. I find it difficult to believe someone so clearly intelligent could make such a fallacious argument; so, what am I missing? (It\'s probably something quite obvious, being me.). I also find the concept that Wikinews invented the informal term \'USian\' to be hilarious, as equally do I find the notion that one should not use a(n instantly recognisable) German word in an edit summary. (It is also used in Czech, Danish, Dutch, Hungarian, Lithuanian, Swahili, Turkish, and a few others according to Wiktionary. It is also the title of a song poking mild fun at symbols of US culture.) I find it extraordinary that using an alternative based on the name of a country is \'inflicting\' but using the name of two continents to mean a country is not. I also believe, but do not know, that USian originated amongst USians. Too unknown for mainspace, but instantly udnerstandable outside of it. Equally, the template as-written is more inflamatory than useful and needs modified (and shrunk. It\'s huge.). That is not in itself a reason to [MASK] but it is an issue that prevents it being used as-is. Much though I did smirk at the wording, it isn\'t going to have the desired effect. Where is this meant to be used? On contributors talk pages? That needs figured out. What nobody here seems to be discussing is that the use of \'America\' is a relatively minor issue. It\'s an annoyance and a pet hate, and I do point it out when I see it, but it\'s just a question of finding something less ambiguous. I wouldn\'t use it to mean the American nations collectively, either; same reason. Ambiguity. It\'s a simple case of word-replacing and any templated message should be understated to reflect that. Accordingly, musch downsizing required. Another thing nobody has mentioned is that we all use \'America\' to mean the US. We do. I do. Often without realising it. It\'s usually pretty clear and (apart from maybe a mild dig) I don\'t mind it in discussions etc. Mainspace should strive to be absolutely clear and the easiest thing to do is avoid the word entirely. If we are to take the recent article as an example, there were a lot of other nationalities involved. If one of them was, say, Brazilian then we\'d have real confusion on our hands. Given how ignored the other nationalities were, such a situation wouldn\'t surprise me. We must always remember that the the US anexing of the word \'America\' to mean a single nation is not malicious, and neither are individual uses. It\'s not the best inspiration to cite, that article; it had much more serious problems when I reviewed. I would usually replace \'America\' when editing; if that\'s the only problem with an article, it will tend to get passed. There\'s also a bit of ignorance going on here, which is much more easy to sort out. (We are all ignorant about many things, and WN has a steep learning curve, so there\'s nothing wrong with this. I always hesitate around using \'ignorance\' because of the negative connotations; I\'m not saying it as a put-down.) The only namespace that matters as far as careful phrasing goes is mainspace. When a sense of humour is totally banned I will replace the Main Page with porn, leave, and never return. If we\'re going down the self-identification route, I note there are Amerikans here who self-identify as USians. I also cannot think of any other ambiguous self-identification. The evolution of the word\'s use is, frankly, unfortunate. This is too ungainly for a DR. We should have this on the water cooler, and return here only if we can\'t agree a way to sort this template. Blood Red Sandman (Talk) (Contribs) 17:45, 4 March 2012 (UTC) [ reply ] [MASK] with five minutes work, I\'ve traced use of "USian" as far back as 1987 — the year I first got Internet access. Invented? By-and-large all words are; but this one most-certainly is not a Wikinews invention since this project didn\'t come into existence for another seventeen years. Moving forward a little under two years, I\'m going to excerpt from a discussion that precisely sums up the problems of those outside the United States with attitudes emanating from within the country: [...]\n> > > As for Henry, as a non-citizen\n> > Henry is a citizen of Canada. I am a citizen of Australia.\n> > I fail to see what\n> > the relevence is. The net is not a U.S. dominion.\n> You picked out an insignificant part of my entire paragraph, quoted it\n> out of context, and blew it up far beyond anything I actually said.\nI\'m sorry if you\'re upset, but I\'m tired of the USAcentric attitude that\npervades the net. As an Australian living in, and materially contributing\nto, the United States I\'m exquisitely aware of the casual parochialism\nof many USians. Casual prejudice is prejudice still.\nI\'m also tired of people bashing Henry, or supporting the people who bash\nhim by taking their position. He\'s one of the clearest heads on the net, for\nall his faults. And he has them.\nBut if you must criticise, there are far more valid places to start from\nthan the irrelevant point of his nationality. You\'re just reinforcing\nthe USian attitudes towards foreigners... attitudes that harm the USA, by\nthe way, as much if not more than the foreigners in question.\nIt\'s insignificant to you, but not to me. The fact that it\'s insignificant\nto you is itself part of the problem. Oh well, you probably never thought\nof it as a problem. I trust you will now.\nCan we put this behind us now? — Peter da Silva April 3, 1989. I\'ve quoted this in it\'s entirety, because I\'ve participated in, and seen, this self-same discussion played-out for pretty much the entire 20+ years since Peter\'s complaint as above. It is a deep-seated point of irritation that originates within the United States. The 1987 use? By an \'American\'. An \'American\' somewhat uncomfortable with the claiming of citizenship of the supercontinent. This, perhaps is the crux of the matter. You are saying that Mexicans, Canadians, Columbians, etc., etc. are not \'American citizens\'; they most certainly are! What they are not, is Citizens of the United States. As to the Style Guide? "American" is used frequently as the generally-accepted name of what is " American English " (less-so for "American" date formats), and that\'s somewhere we do not want to have people from this side of the Atlantic taking the discussion and giving their frank and forthright views on the topic. Sorry for the length of this. To call \'throwaway\', imprecise, use of America and/or American Cultural Imperialism is likely putting the case in-extremis; but, reading that Wikipedia article after the above might make it more clear why a great number of uses of \'America[n]\' are close to, if not an outright breach of WN:NPOV . -- Brian McNeil / talk 06:32, 5 March 2012 (UTC) [ reply ] [MASK] More on USian : Alright, I acknowledge I was inaccurate above and that the term "USian" has apparently been around for many years. My intended point, though, was that hardly anybody uses it. My Google search brought up "Urban Dictionary," which is not a reliable source, and others that had the same problem. The Wiktionary entry designates "USian" as "rare". Personally, I have heard exactly zero people use this word outside of Wikinews. The use of a term that is out of the mainstream is not likely to bring credibility but rather marginalization. Crtew ( talk ) 20:24, 6 March 2012 (UTC) [ reply ] I find all this talk about the provenance and prevalence (try saying that three times fast) of "USian" a jarring nonsequitur. It makes me wonder what issue you think is being discussed. The [MASK] about Tibet also invites the term nonsequitur and suggests parties talking past each other. Much of what you write below probably can\'t be properly understood without identifying the miscommunication; in fact, my sense is that until we sort out the miscommunication, all attempts to discuss the original issue (cultural insensitivity) will go sideways. -- Pi zero ( talk ) 21:56, 6 March 2012 (UTC) [ reply ] [MASK] About geography and naming : By calling U.S. citizens “USians,” people are not acknowledging the reason why people there adopted “Americans” in the first place. The "United States" is different than other countries in the western hemisphere or countries around the world in that the first part of its name doesn\'t refer to a place but to its bond as a nation. The "... of America" part, is the place in the full name that is meant to point to a place. Canadians, Mexicans, Brazilians don\'t share this naming situation. The use of the place was decided around the founding of the nation when the country\'s people were emerging from a revolution and beginning to form a nation. So the United States is not "annexing" the term "America" in the present tense. My impression is that the use of the word "USians" is a form of complaint or apology that is directed at the power and influence that the U.S.A. has displayed in more recent history. The vast majority of people around the world use the term "Americans" for U.S. citizens, and there was a history of its use before the argument of cultural imperialism”. Crtew ( talk ) 20:24, 6 March 2012 (UTC) [ reply ] The preposition "of" acknowledges that people who live in the states do not believe they are the only people who occupy or have a claim to "America". Crtew ( talk ) 20:41, 6 March 2012 (UTC) [ reply ] The term USian also has NPOV problems. Crtew ( talk ) 20:56, 6 March 2012 (UTC) [ reply ] [MASK] Or let’s be consistent : “UKians”? Crtew ( talk ) 20:24, 6 March 2012 (UTC) [ reply ] I am not seriously proposing this. Crtew ( talk ) 20:41, 6 March 2012 (UTC) [ reply ] [MASK] Style : I reject the notion in the template that the term "America" for a place or "American" for a people is an insult for reasons stated at the top and above. At the same time, I would agree that it is not precise. "U.S." is the widely accepted adjective. I would tend to prefer the use of U.S. citizen(s) in place of Americans, but I can see using "American(s)" in clear situations (e.g., "Americans turned out to vote today to choose which candidate will be their next president", when the story is clearly about, hypothetically, the U.S. elections.). Alternatively, I could see someone writing "U.S. Americans" for international consumption. Crtew ( talk ) 20:24, 6 March 2012 (UTC) [ reply ] As for the noun, the use of U.S. or U.S.A. is standard and would be the best usage. But when there is obviously no ambiguity, everybody knows that the U.S. is the place mentioned, I see no problem with saying America. Crtew ( talk ) 20:41, 6 March 2012 (UTC) [ reply ] [MASK] Were there no ambiguity that "America" referred to the United States of America then this link wikipedia:America would be a redirect to wikipedia:United States . (I know, Wikipedia not considered a reliable source). So, returning to the " Wikinews articles must be written for an international audience", it must be considered that the audience may include people who could not find the US on a map, but could find the wikipedia:Americas on a map. -- Brian McNeil / talk 22:10, 6 March 2012 (UTC) [ reply ] Just to be absolutely clear here, nobody is proposing using USian within articles! The discussion here seems to have been misinterpreted; my input is intended to, \'pointedly\', drive home the point that "America[n]", on it\'s own, should be avoided. If this is not a given in a journalism course, I\'m surprised. If students aspire to write for, say, The Washington Post , TIME , or seek employment outwith the US, then they will encounter this problem with the use of the word(s) "America[n]". That self-same \'problem\' exists here on Wikinews . -- Brian McNeil / talk 01:45, 7 March 2012 (UTC) [ reply ] [MASK] The template became an issue because a person trying to submit an article used the term "American(s)" and also the stories selected by that person appeared to the editor to be selected primarily for a U.S. audience. I agree with the second part of the editor\'s evaluation; story selection should be undertaken with a global audience in mind. The above discussion only addressed the use of "Americans" as problematic. A template was then placed on the collaboration page about the use of the term "American(s)". The template, in my view, was offensive, but it also stretched Wikinews policy into areas that doesn\'t seem well documented. We need to have a style discussion elsewhere. But as a further point of clarification, the person who wrote the article was summarizing an Associated Press article that used "American(s)" in the article. Moreover, the item "American" is listed in the Associated Press Stylebook as an "acceptable description" for U.S. citizens. Crtew ( talk ) 16:34, 12 March 2012 (UTC) [ reply ] [MASK] Wikipedia search : A search on "American" brings up a disambiguation page, while a search on "Americans" brings up a page that equates "American" with U.S. citizenship. Crtew ( talk ) 16:57, 12 March 2012 (UTC) [ reply ] Wikinews is not Wikipedia . Wikinews is also not the Associated Press, though I\'m sorry to hear they\'ve made that mistake. The template is somewhat acerbic in making its point, but calling it "offensive" puts me in mind of the US radical Christian right claiming religious persecution when told they can\'t use the government to impose their religion on everyone else. I only brought up Wikipedia because it was mentioned above. When I did that search on WP:American, I was actually taken to the United States Project page. Crtew ( talk ) 18:51, 12 March 2012 (UTC) [ reply ] I\'m still boggling over the bit a while back about Tibet. The closest thing to an equivalent situation re Tibet would be if a contingent of Tibetans were insisting that Tibetans, and Tibetans alone, should be identified by the unadorned term "Asian". The point was about self-determination. Each people should decide what they are called. We\'re having a policy discussion. A lopsided one, with you taking one position and experienced Wikinewsies taking another. Afaics, having a policy discussion seems to have been the main reason this deletion nomination was made (as I\'ve pointed out, the nominator could have speedy-deleted the template at any time under a bog-standard speedy-deletion criterion). Ok, let\'s [MASK] it! Crtew ( talk ) 18:51, 12 March 2012 (UTC) [ reply ] Please don\'t take this as a fundamental disagreement with editing at Wikniews. I actually agree with the vast number of edits made and see how the process leads to a better product. This is an issue that doesn\'t require the experience at Wikinews that others do. Let\'s open this up as a style discussion. Crtew ( talk ) 19:52, 12 March 2012 (UTC) [ reply ] Pardon me for being blunt, Crtew. You haven\'t got a leg to stand on, with this. You\'re arguing in favor of not bothering to be precise, and of doing so in a culturally insensitive way. -- Pi zero ( talk ) 17:39, 12 March 2012 (UTC) [ reply ] That\'s not what I\'ve been saying. The term "U.S." with "citizen (or variant)" is specific but can be clunky in use as writers don\'t always refer to a person but a collective noun and other nationalities ("the British") don\'t necessarily have to take the citizen (or variant) as a qaulifier. Most people understand what "American(s)" refers to in cases where there can be no ambiguity. It would be confusing to write about the "American position," for example when talking about a meeting between members of the Organization of American States. However, in a story about a meeting between U.S. and Chinese officials, it wouldn\'t necessarily be wrong or confusing to write the "American position." I didn\'t say it would be confusing, although sometimes it would be. I said it would be imprecise and culturally insensitive . In the hypothetical situation you describe, saying "American position" instead of "US position" would be completely gratuitous; the only (almost certainly — hypotheticals are treacherous) reason to prefer "American position" would be morbid cultural imperialism. -- Pi zero ( talk ) 19:54, 12 March 2012 (UTC) [ reply ] The w:American disambiguation page on Wikipedia does not list U.S. Citizen as first, instead conceding the point my template crudely made: An American is, " A person or attribute of one of the nations of the Americas ". -- Brian McNeil / talk 18:00, 12 March 2012 (UTC) [ reply ] I see that as making my point too! An "American" can refer to people in the hemisphere but also to people from the United States. Both can be true at the same time. To the vast majority of people this is not a contradiction or an insult. To make it an insult is to gloss over history, culture, and linguistic uses of the word in context. Crtew ( talk ) 18:51, 12 March 2012 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page\'s talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: '</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Category:Art: Category:Art Art="The arts"=totally redundant to Category:Culture and entertainment . The usage note suggests this is an internal cat, like, say, category:musicians , but I don\'t see it as useful as such. Note that "art" as most often referred to by the word is dealt with by Category:Visual art . BRS (Talk) (Contribs) 19:20, 29 April 2014 (UTC) [ reply ] Comments [MASK] There are some articles in it, despite the usage note about being internal. I suppose that\'s because it wasn\'t clear where else to put those articles. It would only be useful as a internal category if it had a slew of subcats that were thereby prevented from cluttering up Category:Culture and entertainment ; but that doesn\'t appear to be the case. I\'d like to see those articles moved to subcats before it\'s deleted. Otherwise I\'m happy to see the last of it. -- Pi zero ( talk ) 19:57, 29 April 2014 (UTC) [ reply ] [MASK] Seems to me we need a more coherent plan for this part of the category hierarchy. Right now, "Art" has subcats "Architecture", "Galleries", "Graphic art", and "Visual art". But here\'s some of what w:Portal:Visual arts has to say — note, that\'s "arts" rather than "art", and mishandling that distinction may also be part of our problem here: Visual arts are a class of art forms focusing on the creation of works which are primarily visual in nature, such as painting, drawing, illustration, architecture, photography, graphic design, printmaking, and filmmaking. Those that involve moulding or modeling, such as sculpture, public art, and ceramics, are more narrowly referred to as plastic arts. The visual arts are distinguished from the performing arts, language arts, culinary arts and other such classes of artwork, but those boundaries are not well defined. Many artistic endeavors combine aspects of visual arts with one or more non-visual art forms, such as music or spoken word. The part about "not well defined" might suggest we should be rethinking at what level we have our cats. On the other hand, this also raises the possibility we\'d have "Visual art s " in place of the current "Art", with several subcats including "Architecture", "Graphic art s ", "Galeries". The current "Visual art" would be replaced by some other subcat of Visual arts. There could be siblings to "Visual arts" for, say, "Performing arts". Perhaps a subcat of "Visual art" for the plastic arts — and at that point I get really alarmed. "Plastic arts"??? Seriously? Ugh. That\'s a very unnatural name; only an expert in the taxonomy of this stuff would think that involved anything not made from this stuff . I\'m gonna give this some more thought. I\'d really like this resolved; I don\'t fancy deleting a category every once in a while, leaving at least as much of a mess with a somewhat changed shape, and gradually migrating toward losing what organizational information we now have encoded in these cats. -- Pi zero ( talk ) 13:04, 30 April 2014 (UTC) [ reply ] [MASK] Trying to carefully and accurately categorise art is always going to be a pain, because the arts strive to break the boundaries between each other. But, that\'s a problem for the visual art cat and its subcats. For that reason I\'m going to hold off going down this tempting tangent about how to handle it. BRS (Talk) (Contribs) 10:55, 4 May 2014 (UTC) [ reply ] Votes The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page\'s talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: '</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Wikinews interviews specialists on China, Iran, Russia support for al-Assad/fr: Not exactly clear consensus, but "[MASK]" is appropriate I believe. — Mike moral ♪♫ 06:37, 29 September 2014 (UTC) [ reply ] Wikinews interviews specialists on China, Iran, Russia support for al-Assad/fr Here is English Wikinews. We don\'t need any article in French.-- GZWDer ( talk ) 10:44, 29 August 2014 (UTC) [ reply ] [MASK] It\'s tagged with Category:No publish ; so, that\'s its status on en.wn. The article has no French interwiki, suggesting the translation never did make it to fr.wn (though I\'ve not yet tried to look on fr.wn directly). Various published articles in our archives have peripheral subpages of different kinds, which are no-published and eventually should be fully protected as a precaution since they\'re in mainspace; I\'ll protect this one for now, and an admin can later either [MASK] it or remove the deletion tag. -- Pi zero ( talk ) 11:29, 29 August 2014 (UTC) [ reply ] [MASK] This, as noted by Pi zero, isn\'t published. On that basis, I don\'t quite follow the nomination phrasing. The article itself was translated into three other languages, and is a local Featured Article . Just, as already noted, looks as-if translation to French wasn\'t completed fast-enough to stay fresh. My inclination, on that basis (a working page, never published, cannot now be completed), is to see this deleted. I\'m just hoping I\'m being overly-sensitive to word-choice in how I read the nomination. -- Brian McNeil / talk 07:56, 3 September 2014 (UTC) [ reply ] On further consideration and consultation, I suggest we [MASK] it. It could conceivably be of use to someone wishing to use the article as a reference. -- Pi zero ( talk ) 13:39, 11 September 2014 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page\'s talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: '</li></ul> |
| 3 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Aussie Foreign Affairs Minister says no change in USA travel advice: -- Pi zero ( talk ) 11:53, 5 May 2013 (UTC) [ reply ] Aussie Foreign Affairs Minister says no change in USA travel advice [ edit ] Established editor started a review, removed a key verifying source. The review was not completed for 24 hours. It appears almost completely certain any subsequent review will have to wait at least that long before getting another review. Demoralizing as a writer and as a reviewer who is feeling overwhelmed by the student situation, and zero intention of resubmitting this article because I know it will not get reviewed in a way that will pass any timeliness criteria. This is one of those cases where writing feels like being set up for failure. Comments [ edit ] Votes [ edit ] Remove -- LauraHale ( talk ) 01:29, 17 April 2013 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: '</li><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Boston Marathon rocked by 2 explosions near finish line: -- Pi zero ( talk ) 11:52, 5 May 2013 (UTC) [ reply ] Boston Marathon rocked by 2 explosions near finish line [ edit ] Duplicate adding zero to the already-published article. -- Brian McNeil / talk 06:10, 16 April 2013 (UTC) [ reply ] Comments [ edit ] Votes [ edit ] Remove Just needed to get the message across; quit subbing this 'stub' for review. -- Brian McNeil / talk 06:10, 16 April 2013 (UTC) [ reply ] Remove Hopefully it'll be deleted as abandoned before formal DR process can complete, but just as well to have the formal process in parallel. -- Pi zero ( talk ) 12:33, 16 April 2013 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: "</li><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Wikinews:Story preparation/Evidence of bias: Media coverage of Comey memos: Speedy deleted as prepared article where the event happen more than 5 days ago . Any major media coverage on the Comey memo would have been months ago. — mikemoral ( talk ) 07:40, 13 June 2017 (UTC) [ reply ] Wikinews:Story preparation/Evidence of bias: Media coverage of Comey memos Created in November 2016, about seven months ago. Has not developed since then. The article discusses Hilary Clinton and email circus stuff. I don't see the purpose of keeping it other than... prize possession or something. -- George Ho ( talk ) 01:46, 13 June 2017 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: "</li></ul> |
| 0 | <ul><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Template:Nosource: Creating user nominating for speedy deletion under A7. PokestarFan ( talk ) 19:00, 8 May 2017 (UTC) [ reply ] 7. Self-requests by the author with no third-party edit history. Template:Nosource Not being sourced does not, in itself, fulfil a criterion for deletion, speedy or otherwise. As such, this template is superfluous. Microchip08 ( talk ) 07:33, 8 May 2017 (UTC) [ reply ] Comments Votes Remove I'm in agreement with the nominator. -- Pi zero ( talk ) 14:30, 8 May 2017 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: "</li><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: User:KuduIO/'Al-Qaeda inspired' bomb plot stopped by Canadian police: The following discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion. User:KuduIO/'Al-Qaeda inspired' bomb plot stopped by Canadian police Userspaced article which has not been revived. Serves no purpose keeping on-wiki. Comments Votes Remove per nom'.-- •–• 12:53, 8 July 2020 (UTC) [ reply ] Remove per nom. Stale draft. Green Giant ( talk ) 23:41, 23 July 2020 (UTC) [ reply ] Deleted -- Green Giant ( talk ) 11:13, 16 August 2020 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: "</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Abundance of stolen health care records on dark web is causing a price collapse: Abundance of stolen health care records on dark web is causing a price collapse This draft has been abandoned twice (8 - 13 October and 15 - 19 October). Since it does not seem appropriate to put a new "abandoned" tag, I feel it needs discussion to see if there is anything that can be done to refocus it or whether it just needs deletion. -- Green Giant ( talk ) 14:22, 19 October 2020 (UTC) [ reply ] Comments Votes Remove in the absence of a refocus. -- Green Giant ( talk ) 14:22, 19 October 2020 (UTC) [ reply ] Deleted -- Green Giant ( talk ) 19:29, 26 October 2020 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: '</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("research-dump/bge-base-en-v1.5_wikinews_masked_wikinews_masked")
# Run inference
preds = model("###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Interviews: Interviews WN:Story preparation/Office hours:Frank Schulenberg Last edited in December 2009. Highly unlikely to be published now. WN:Story preparation/Wikinews Interviews: Jim McGreevey Last edited in September 2007. Highly unlikely to be published now. These have been put together because they are unpublished interviews from several years ago. -- Green Giant ( talk ) 17:54, 16 April 2020 (UTC) [ reply ] Comments Votes The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page's talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 76 | 420.1587 | 3715 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 4 |
| 1 | 37 |
| 2 | 15 |
| 3 | 7 |
### Training Hyperparameters
- batch_size: (8, 2)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (1e-05, 1e-05)
- head_learning_rate: 5e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: True
- use_amp: True
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0063 | 1 | 0.0865 | - |
| 3.1646 | 500 | 0.0671 | 0.4562 |
### Framework Versions
- Python: 3.12.7
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 4 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Category:Trucks: Category:Trucks [ edit ] Ehhh, long time, no use of DR. I don\'t expect killing an encyclopedic category like this to be a difficult decision. -- Brian McNeil / talk 19:41, 3 October 2012 (UTC) [ reply ] Comments [ edit ] Just for the truckers in this DR . -- Brian McNeil / talk 22:20, 3 October 2012 (UTC) [ reply ] I\'m \'bundling\' Portal:Trucks/Wikipedia with this, despite the fact that it\'s been set up in a bizarre way that doesn\'t even use the category (i.e. suggests a cut\'n\'paste by someone with no knowledge of DPL). -- Brian McNeil / talk 22:34, 3 October 2012 (UTC) [ reply ] I\'ve no objection to the bundling. Agree it suggests dearth of DPL knowledge. -- Pi zero ( talk ) 22:59, 3 October 2012 (UTC) [ reply ] Hello I creator portal\xa0; w:Portal:Trucks an french category fr:Catégorie:Camion . FrankyLeRoutier ( talk ) 15:57, 11 October 2012 (UTC) [ reply ] Votes [ edit ] [MASK]! [MASK]! [MASK]! 100% encyclopedic category. Who ever saw a "Truck news" section in a paper — other than one that comes in a plain brown wrapper. :P -- Brian McNeil / talk 19:41, 3 October 2012 (UTC) [ reply ] Remove Yup. Encyclopedic through and through. -- Pi zero ( talk ) 20:05, 3 October 2012 (UTC) [ reply ] Remove Confusing. -- LauraHale ( talk ) 03:59, 23 October 2012 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page\'s talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: '</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Wikinews:Story preparation/North Korean leader and dictator Kim Jong Il dies at age XX: Wikinews:Story preparation/North Korean leader and dictator Kim Jong Il dies at age XX [ edit ] He died in December 2011, a story that Wikinews didn\'t cover despite having this (unreferenced and clearly forgotten) draft sitting around. As the window of opportunity for this story has been missed, there\'s no point in keeping it around. Bencherlite ( talk ) 12:27, 2 July 2012 (UTC) [ reply ] Comments [ edit ] [MASK] Actually we did cover his death... just not using the prepped article. See North Korean leader Kim Jong-il dead . -- Patrick M (TUFKAAP) ( talk ) 15:40, 5 July 2012 (UTC) [ reply ] Shame that nobody added that article to Category:Kim Jong-il , which is what I checked. Would someone care to do the honours to save me adding a protected edit request? The fact that the death was covered without using the "prepared" story strengthens my view about the general uselessness of such items. Bencherlite ( talk ) 11:28, 11 July 2012 (UTC) [ reply ] Votes [ edit ] Please vote using [MASK] , Remove , Neutral , or [MASK] followed by signature speedy candidate -- Brian McNeil / talk 14:45, 2 July 2012 (UTC) [ reply ] speedy candidate Ooops to missing the story but clearly no longer relevant. -- LauraHale ( talk ) 22:16, 2 July 2012 (UTC) [ reply ] Remove Unsourced drafts are reasonably useless, and trying to compete with the mainstream for big events is largely pointless with the real strength being the wealth of under-reported events that can be reported better. As such, I\'d have been inclined to [MASK] this even if the author(s) hadn\'t forgotten about it and allowed it to stale . Blood Red Sandman (Talk) (Contribs) 17:21, 3 July 2012 (UTC) [ reply ] How much of Category:Prepared stories do you feel like nuking? Most of the stories in there appear to be unsourced drafts, or useless, or both e.g. Wikinews:Story preparation/(Island country) evacuated, vanishes beneath sea , Wikinews:Story preparation/Former French President Jacques Chirac dies , Wikinews:Story preparation/Wikimedia founder Jimmy Wales dies aged XXX . I\'ll tag a few others as "abandoned" and see what happens. Bencherlite ( talk ) 10:49, 4 July 2012 (UTC) [ reply ] We\'ve been mulling over, for some time now, housecleaning the story-preparation section. On the only modern occasions I recall where prepared story templates were actually used, they prevented us from publishing because they had been prepared without sourcing. We may want to consider overhauling the whole concept of the section to make it compatible with review (perhaps with something similar to the "script review" concept we\'ve been using for Crtew\'s weekly audio briefs). -- Pi zero ( talk ) 12:35, 4 July 2012 (UTC) [ reply ] I\'ve tagged over thirty "story preparation" pages as {{ abandoned }}, some dating back over five years. Most of the pre-"prepared" obituaries could be deleted without fuss if we had a new proposed deletion criterion that said something like "5. Prepared stories for future events (including a person\'s death) with inadequate references for existing facts". Thoughts? Bencherlite ( talk ) 14:00, 4 July 2012 (UTC) [ reply ] I could quite happily go with a PD criteria on unsourced prepwork. Probably need an SD on "prepared, unused, event now past". -- Brian McNeil / talk 14:10, 4 July 2012 (UTC) [ reply ] Yes to PD and SD. -- Pi zero ( talk ) 17:07, 4 July 2012 (UTC) [ reply ] I\'m uncomfortable frying long-untouched interview ideas (and, by extension, possible features/investigations) as {{ abandoned }}; two days\' notice seems a tad cruel on things that may or may not be revived. I have just fryed one that was my own from god knows how long ago. Events that have passed seem suitable, and a speedy option seems sensible (I\'ve nommed a pile of such before and they were deleted without controversy). I\'ll make exact suggestions for WN:PROD and WN:SD additions in a minute on the Water Cooler; in answer to the question, I\'d prune a lot of the prepped stuff. Blood Red Sandman (Talk) (Contribs) 19:28, 4 July 2012 (UTC) [ reply ] Remove Prepped stories were good in the pre flagged revs days. Not so much now. Granted they could be useful again as long as they\'re at least backed some sourced information. -- Patrick M (TUFKAAP) ( talk ) 15:36, 5 July 2012 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page\'s talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: '</li><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Template:Citequote: Template:Citequote [ edit ] This is a Wikipedia-style template that doe not fit our workflow, and its existence can only serve to occasionally confuse some newcomer. -- Pi zero ( talk ) Remove as nominator. -- Pi zero ( talk ) 11:37, 25 April 2012 (UTC) [ reply ] Remove Totally Wikipedia-ish. Why did it even end up here? Diego Grez ( talk ) 22:14, 26 April 2012 (UTC) [ reply ] Remove Not something we really have a need for. Blood Red Sandman (Talk) (Contribs) 11:26, 29 April 2012 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page's talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: "</li></ul> |
| 2 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Template:WhatAmerica?: However, there clearly is a consensus that if we are to have this template, we aren\'t to use it in its present form. . Template:WhatAmerica? [ edit ] Bad taste? Probably. This was meant as a rather pointed joke and, seemingly, didn\'t come across that way. However , this is a recurring problem in that United States citizens self-identify as "Americans". With Wikinews intended for an international audience, the point being made rather bluntly by this template needs taken on-board by contributors within the United States. -- Brian McNeil / talk 02:26, 4 March 2012 (UTC) [ reply ] Remove Do [MASK] the template on the grounds of taste as mentioned by the nominator. I\'m all for a separate discussion on this point as a matter of style. Where do we start that? Crtew ( talk ) 02:47, 4 March 2012 (UTC) [ reply ] [MASK] About this template: it is opinion disguised as editing, it is a misrepresentation of Wikinews’ style guide (which may be further grounds for [MASK]), and it is condescending in tone, visual style and substance -- Crtew ( talk ) 16:46, 4 March 2012 (UTC) [ reply ] [MASK] About the term American : One can at the same time hold an international mindset, refer to a people by what they call themselves, and self-identify as an "American" without contradiction. The premise of the template and the self-nomination above is that it wrong to "self-identify" as an American and by doing so a writer doesn\'t have the "proper" mindset. Self-identification : Just because U.S. citizens call themselves "Americans" doesn\'t mean they don\'t recognize that others in the western hemisphere also use the term. That\'s not an issue. The template does not show any kind of understanding for the meaning of the term as it is used in the culture. Self-determination is also a concept that should also be reflected in policy. For example, most would never think of calling Tibetans "Chinese" out of respect for their rights. Like it or not, people who live in the United States use the term "America" and call themselves "Americans," and that\'s not going to change anytime soon. America is even part of the country\'s name (U.S.A.). The term USians , which has been used frequently at Wikinews, is an imaginary term and nobody inside the United States or around the world uses it. Is it the proper use of Wikinews to invent a term and impose it on an entire people? Is Wikinews displaying a counter-bias by raising concerns over the term "American"? Furthermore, I\'ve also seen the term Amerika used at Wikinews. This can be derogatory depending on its source, target and usage, and it similarly should never appear in editing comments. I\'ll raise this in the style discussion later, but if you scan the Wikinews style guide , you will find the term "American" several times. Either editors should change the Wikinews style guide or refrain from bringing their own issues into critiques. -- Crtew ( talk ) 16:46, 4 March 2012 (UTC) [ reply ] [MASK] I\'ve several comments. We are an international publication and, verily, shouldn\'t insult the rest of the supercontinent by calling the US "America". There\'s more to it than that, a whole horribly unhealthy US-centrist mindset. I note that BRS\'s criticism of the article was not limited to the use of the word, but was in fact about omission of significant facts merely because they weren\'t about USians — learning to avoid such bias is part of learning to be a Wikinewsie, and learning not to call USians "Americans" is part of learning to avoid such bias. The template is therefore slightly off-target as it focuses solely on the word usage, while what is likely needed is a template addressing a somewhat wider point; it would be advisable to succinctly mention the word along the way. Since the template is single-author, and "author request" is a speedy-deletion criterion, and the author is an admin, the author in this case can simply [MASK] it. -- Pi zero ( talk ) 12:34, 4 March 2012 (UTC) [ reply ] [MASK] Crtew\'s argument is difficult to follow. So far as I can determine, xyr argument is that it\'s okay for him to use something that some people find insulting becuase other people do it, but if anyone dare respond, that is condescending and unacceptible. I find it difficult to believe someone so clearly intelligent could make such a fallacious argument; so, what am I missing? (It\'s probably something quite obvious, being me.). I also find the concept that Wikinews invented the informal term \'USian\' to be hilarious, as equally do I find the notion that one should not use a(n instantly recognisable) German word in an edit summary. (It is also used in Czech, Danish, Dutch, Hungarian, Lithuanian, Swahili, Turkish, and a few others according to Wiktionary. It is also the title of a song poking mild fun at symbols of US culture.) I find it extraordinary that using an alternative based on the name of a country is \'inflicting\' but using the name of two continents to mean a country is not. I also believe, but do not know, that USian originated amongst USians. Too unknown for mainspace, but instantly udnerstandable outside of it. Equally, the template as-written is more inflamatory than useful and needs modified (and shrunk. It\'s huge.). That is not in itself a reason to [MASK] but it is an issue that prevents it being used as-is. Much though I did smirk at the wording, it isn\'t going to have the desired effect. Where is this meant to be used? On contributors talk pages? That needs figured out. What nobody here seems to be discussing is that the use of \'America\' is a relatively minor issue. It\'s an annoyance and a pet hate, and I do point it out when I see it, but it\'s just a question of finding something less ambiguous. I wouldn\'t use it to mean the American nations collectively, either; same reason. Ambiguity. It\'s a simple case of word-replacing and any templated message should be understated to reflect that. Accordingly, musch downsizing required. Another thing nobody has mentioned is that we all use \'America\' to mean the US. We do. I do. Often without realising it. It\'s usually pretty clear and (apart from maybe a mild dig) I don\'t mind it in discussions etc. Mainspace should strive to be absolutely clear and the easiest thing to do is avoid the word entirely. If we are to take the recent article as an example, there were a lot of other nationalities involved. If one of them was, say, Brazilian then we\'d have real confusion on our hands. Given how ignored the other nationalities were, such a situation wouldn\'t surprise me. We must always remember that the the US anexing of the word \'America\' to mean a single nation is not malicious, and neither are individual uses. It\'s not the best inspiration to cite, that article; it had much more serious problems when I reviewed. I would usually replace \'America\' when editing; if that\'s the only problem with an article, it will tend to get passed. There\'s also a bit of ignorance going on here, which is much more easy to sort out. (We are all ignorant about many things, and WN has a steep learning curve, so there\'s nothing wrong with this. I always hesitate around using \'ignorance\' because of the negative connotations; I\'m not saying it as a put-down.) The only namespace that matters as far as careful phrasing goes is mainspace. When a sense of humour is totally banned I will replace the Main Page with porn, leave, and never return. If we\'re going down the self-identification route, I note there are Amerikans here who self-identify as USians. I also cannot think of any other ambiguous self-identification. The evolution of the word\'s use is, frankly, unfortunate. This is too ungainly for a DR. We should have this on the water cooler, and return here only if we can\'t agree a way to sort this template. Blood Red Sandman (Talk) (Contribs) 17:45, 4 March 2012 (UTC) [ reply ] [MASK] with five minutes work, I\'ve traced use of "USian" as far back as 1987 — the year I first got Internet access. Invented? By-and-large all words are; but this one most-certainly is not a Wikinews invention since this project didn\'t come into existence for another seventeen years. Moving forward a little under two years, I\'m going to excerpt from a discussion that precisely sums up the problems of those outside the United States with attitudes emanating from within the country: [...]\n> > > As for Henry, as a non-citizen\n> > Henry is a citizen of Canada. I am a citizen of Australia.\n> > I fail to see what\n> > the relevence is. The net is not a U.S. dominion.\n> You picked out an insignificant part of my entire paragraph, quoted it\n> out of context, and blew it up far beyond anything I actually said.\nI\'m sorry if you\'re upset, but I\'m tired of the USAcentric attitude that\npervades the net. As an Australian living in, and materially contributing\nto, the United States I\'m exquisitely aware of the casual parochialism\nof many USians. Casual prejudice is prejudice still.\nI\'m also tired of people bashing Henry, or supporting the people who bash\nhim by taking their position. He\'s one of the clearest heads on the net, for\nall his faults. And he has them.\nBut if you must criticise, there are far more valid places to start from\nthan the irrelevant point of his nationality. You\'re just reinforcing\nthe USian attitudes towards foreigners... attitudes that harm the USA, by\nthe way, as much if not more than the foreigners in question.\nIt\'s insignificant to you, but not to me. The fact that it\'s insignificant\nto you is itself part of the problem. Oh well, you probably never thought\nof it as a problem. I trust you will now.\nCan we put this behind us now? — Peter da Silva April 3, 1989. I\'ve quoted this in it\'s entirety, because I\'ve participated in, and seen, this self-same discussion played-out for pretty much the entire 20+ years since Peter\'s complaint as above. It is a deep-seated point of irritation that originates within the United States. The 1987 use? By an \'American\'. An \'American\' somewhat uncomfortable with the claiming of citizenship of the supercontinent. This, perhaps is the crux of the matter. You are saying that Mexicans, Canadians, Columbians, etc., etc. are not \'American citizens\'; they most certainly are! What they are not, is Citizens of the United States. As to the Style Guide? "American" is used frequently as the generally-accepted name of what is " American English " (less-so for "American" date formats), and that\'s somewhere we do not want to have people from this side of the Atlantic taking the discussion and giving their frank and forthright views on the topic. Sorry for the length of this. To call \'throwaway\', imprecise, use of America and/or American Cultural Imperialism is likely putting the case in-extremis; but, reading that Wikipedia article after the above might make it more clear why a great number of uses of \'America[n]\' are close to, if not an outright breach of WN:NPOV . -- Brian McNeil / talk 06:32, 5 March 2012 (UTC) [ reply ] [MASK] More on USian : Alright, I acknowledge I was inaccurate above and that the term "USian" has apparently been around for many years. My intended point, though, was that hardly anybody uses it. My Google search brought up "Urban Dictionary," which is not a reliable source, and others that had the same problem. The Wiktionary entry designates "USian" as "rare". Personally, I have heard exactly zero people use this word outside of Wikinews. The use of a term that is out of the mainstream is not likely to bring credibility but rather marginalization. Crtew ( talk ) 20:24, 6 March 2012 (UTC) [ reply ] I find all this talk about the provenance and prevalence (try saying that three times fast) of "USian" a jarring nonsequitur. It makes me wonder what issue you think is being discussed. The [MASK] about Tibet also invites the term nonsequitur and suggests parties talking past each other. Much of what you write below probably can\'t be properly understood without identifying the miscommunication; in fact, my sense is that until we sort out the miscommunication, all attempts to discuss the original issue (cultural insensitivity) will go sideways. -- Pi zero ( talk ) 21:56, 6 March 2012 (UTC) [ reply ] [MASK] About geography and naming : By calling U.S. citizens “USians,” people are not acknowledging the reason why people there adopted “Americans” in the first place. The "United States" is different than other countries in the western hemisphere or countries around the world in that the first part of its name doesn\'t refer to a place but to its bond as a nation. The "... of America" part, is the place in the full name that is meant to point to a place. Canadians, Mexicans, Brazilians don\'t share this naming situation. The use of the place was decided around the founding of the nation when the country\'s people were emerging from a revolution and beginning to form a nation. So the United States is not "annexing" the term "America" in the present tense. My impression is that the use of the word "USians" is a form of complaint or apology that is directed at the power and influence that the U.S.A. has displayed in more recent history. The vast majority of people around the world use the term "Americans" for U.S. citizens, and there was a history of its use before the argument of cultural imperialism”. Crtew ( talk ) 20:24, 6 March 2012 (UTC) [ reply ] The preposition "of" acknowledges that people who live in the states do not believe they are the only people who occupy or have a claim to "America". Crtew ( talk ) 20:41, 6 March 2012 (UTC) [ reply ] The term USian also has NPOV problems. Crtew ( talk ) 20:56, 6 March 2012 (UTC) [ reply ] [MASK] Or let’s be consistent : “UKians”? Crtew ( talk ) 20:24, 6 March 2012 (UTC) [ reply ] I am not seriously proposing this. Crtew ( talk ) 20:41, 6 March 2012 (UTC) [ reply ] [MASK] Style : I reject the notion in the template that the term "America" for a place or "American" for a people is an insult for reasons stated at the top and above. At the same time, I would agree that it is not precise. "U.S." is the widely accepted adjective. I would tend to prefer the use of U.S. citizen(s) in place of Americans, but I can see using "American(s)" in clear situations (e.g., "Americans turned out to vote today to choose which candidate will be their next president", when the story is clearly about, hypothetically, the U.S. elections.). Alternatively, I could see someone writing "U.S. Americans" for international consumption. Crtew ( talk ) 20:24, 6 March 2012 (UTC) [ reply ] As for the noun, the use of U.S. or U.S.A. is standard and would be the best usage. But when there is obviously no ambiguity, everybody knows that the U.S. is the place mentioned, I see no problem with saying America. Crtew ( talk ) 20:41, 6 March 2012 (UTC) [ reply ] [MASK] Were there no ambiguity that "America" referred to the United States of America then this link wikipedia:America would be a redirect to wikipedia:United States . (I know, Wikipedia not considered a reliable source). So, returning to the " Wikinews articles must be written for an international audience", it must be considered that the audience may include people who could not find the US on a map, but could find the wikipedia:Americas on a map. -- Brian McNeil / talk 22:10, 6 March 2012 (UTC) [ reply ] Just to be absolutely clear here, nobody is proposing using USian within articles! The discussion here seems to have been misinterpreted; my input is intended to, \'pointedly\', drive home the point that "America[n]", on it\'s own, should be avoided. If this is not a given in a journalism course, I\'m surprised. If students aspire to write for, say, The Washington Post , TIME , or seek employment outwith the US, then they will encounter this problem with the use of the word(s) "America[n]". That self-same \'problem\' exists here on Wikinews . -- Brian McNeil / talk 01:45, 7 March 2012 (UTC) [ reply ] [MASK] The template became an issue because a person trying to submit an article used the term "American(s)" and also the stories selected by that person appeared to the editor to be selected primarily for a U.S. audience. I agree with the second part of the editor\'s evaluation; story selection should be undertaken with a global audience in mind. The above discussion only addressed the use of "Americans" as problematic. A template was then placed on the collaboration page about the use of the term "American(s)". The template, in my view, was offensive, but it also stretched Wikinews policy into areas that doesn\'t seem well documented. We need to have a style discussion elsewhere. But as a further point of clarification, the person who wrote the article was summarizing an Associated Press article that used "American(s)" in the article. Moreover, the item "American" is listed in the Associated Press Stylebook as an "acceptable description" for U.S. citizens. Crtew ( talk ) 16:34, 12 March 2012 (UTC) [ reply ] [MASK] Wikipedia search : A search on "American" brings up a disambiguation page, while a search on "Americans" brings up a page that equates "American" with U.S. citizenship. Crtew ( talk ) 16:57, 12 March 2012 (UTC) [ reply ] Wikinews is not Wikipedia . Wikinews is also not the Associated Press, though I\'m sorry to hear they\'ve made that mistake. The template is somewhat acerbic in making its point, but calling it "offensive" puts me in mind of the US radical Christian right claiming religious persecution when told they can\'t use the government to impose their religion on everyone else. I only brought up Wikipedia because it was mentioned above. When I did that search on WP:American, I was actually taken to the United States Project page. Crtew ( talk ) 18:51, 12 March 2012 (UTC) [ reply ] I\'m still boggling over the bit a while back about Tibet. The closest thing to an equivalent situation re Tibet would be if a contingent of Tibetans were insisting that Tibetans, and Tibetans alone, should be identified by the unadorned term "Asian". The point was about self-determination. Each people should decide what they are called. We\'re having a policy discussion. A lopsided one, with you taking one position and experienced Wikinewsies taking another. Afaics, having a policy discussion seems to have been the main reason this deletion nomination was made (as I\'ve pointed out, the nominator could have speedy-deleted the template at any time under a bog-standard speedy-deletion criterion). Ok, let\'s [MASK] it! Crtew ( talk ) 18:51, 12 March 2012 (UTC) [ reply ] Please don\'t take this as a fundamental disagreement with editing at Wikniews. I actually agree with the vast number of edits made and see how the process leads to a better product. This is an issue that doesn\'t require the experience at Wikinews that others do. Let\'s open this up as a style discussion. Crtew ( talk ) 19:52, 12 March 2012 (UTC) [ reply ] Pardon me for being blunt, Crtew. You haven\'t got a leg to stand on, with this. You\'re arguing in favor of not bothering to be precise, and of doing so in a culturally insensitive way. -- Pi zero ( talk ) 17:39, 12 March 2012 (UTC) [ reply ] That\'s not what I\'ve been saying. The term "U.S." with "citizen (or variant)" is specific but can be clunky in use as writers don\'t always refer to a person but a collective noun and other nationalities ("the British") don\'t necessarily have to take the citizen (or variant) as a qaulifier. Most people understand what "American(s)" refers to in cases where there can be no ambiguity. It would be confusing to write about the "American position," for example when talking about a meeting between members of the Organization of American States. However, in a story about a meeting between U.S. and Chinese officials, it wouldn\'t necessarily be wrong or confusing to write the "American position." I didn\'t say it would be confusing, although sometimes it would be. I said it would be imprecise and culturally insensitive . In the hypothetical situation you describe, saying "American position" instead of "US position" would be completely gratuitous; the only (almost certainly — hypotheticals are treacherous) reason to prefer "American position" would be morbid cultural imperialism. -- Pi zero ( talk ) 19:54, 12 March 2012 (UTC) [ reply ] The w:American disambiguation page on Wikipedia does not list U.S. Citizen as first, instead conceding the point my template crudely made: An American is, " A person or attribute of one of the nations of the Americas ". -- Brian McNeil / talk 18:00, 12 March 2012 (UTC) [ reply ] I see that as making my point too! An "American" can refer to people in the hemisphere but also to people from the United States. Both can be true at the same time. To the vast majority of people this is not a contradiction or an insult. To make it an insult is to gloss over history, culture, and linguistic uses of the word in context. Crtew ( talk ) 18:51, 12 March 2012 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page\'s talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: '</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Category:Art: Category:Art Art="The arts"=totally redundant to Category:Culture and entertainment . The usage note suggests this is an internal cat, like, say, category:musicians , but I don\'t see it as useful as such. Note that "art" as most often referred to by the word is dealt with by Category:Visual art . BRS (Talk) (Contribs) 19:20, 29 April 2014 (UTC) [ reply ] Comments [MASK] There are some articles in it, despite the usage note about being internal. I suppose that\'s because it wasn\'t clear where else to put those articles. It would only be useful as a internal category if it had a slew of subcats that were thereby prevented from cluttering up Category:Culture and entertainment ; but that doesn\'t appear to be the case. I\'d like to see those articles moved to subcats before it\'s deleted. Otherwise I\'m happy to see the last of it. -- Pi zero ( talk ) 19:57, 29 April 2014 (UTC) [ reply ] [MASK] Seems to me we need a more coherent plan for this part of the category hierarchy. Right now, "Art" has subcats "Architecture", "Galleries", "Graphic art", and "Visual art". But here\'s some of what w:Portal:Visual arts has to say — note, that\'s "arts" rather than "art", and mishandling that distinction may also be part of our problem here: Visual arts are a class of art forms focusing on the creation of works which are primarily visual in nature, such as painting, drawing, illustration, architecture, photography, graphic design, printmaking, and filmmaking. Those that involve moulding or modeling, such as sculpture, public art, and ceramics, are more narrowly referred to as plastic arts. The visual arts are distinguished from the performing arts, language arts, culinary arts and other such classes of artwork, but those boundaries are not well defined. Many artistic endeavors combine aspects of visual arts with one or more non-visual art forms, such as music or spoken word. The part about "not well defined" might suggest we should be rethinking at what level we have our cats. On the other hand, this also raises the possibility we\'d have "Visual art s " in place of the current "Art", with several subcats including "Architecture", "Graphic art s ", "Galeries". The current "Visual art" would be replaced by some other subcat of Visual arts. There could be siblings to "Visual arts" for, say, "Performing arts". Perhaps a subcat of "Visual art" for the plastic arts — and at that point I get really alarmed. "Plastic arts"??? Seriously? Ugh. That\'s a very unnatural name; only an expert in the taxonomy of this stuff would think that involved anything not made from this stuff . I\'m gonna give this some more thought. I\'d really like this resolved; I don\'t fancy deleting a category every once in a while, leaving at least as much of a mess with a somewhat changed shape, and gradually migrating toward losing what organizational information we now have encoded in these cats. -- Pi zero ( talk ) 13:04, 30 April 2014 (UTC) [ reply ] [MASK] Trying to carefully and accurately categorise art is always going to be a pain, because the arts strive to break the boundaries between each other. But, that\'s a problem for the visual art cat and its subcats. For that reason I\'m going to hold off going down this tempting tangent about how to handle it. BRS (Talk) (Contribs) 10:55, 4 May 2014 (UTC) [ reply ] Votes The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page\'s talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: '</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Wikinews interviews specialists on China, Iran, Russia support for al-Assad/fr: Not exactly clear consensus, but "[MASK]" is appropriate I believe. — Mike moral ♪♫ 06:37, 29 September 2014 (UTC) [ reply ] Wikinews interviews specialists on China, Iran, Russia support for al-Assad/fr Here is English Wikinews. We don\'t need any article in French.-- GZWDer ( talk ) 10:44, 29 August 2014 (UTC) [ reply ] [MASK] It\'s tagged with Category:No publish ; so, that\'s its status on en.wn. The article has no French interwiki, suggesting the translation never did make it to fr.wn (though I\'ve not yet tried to look on fr.wn directly). Various published articles in our archives have peripheral subpages of different kinds, which are no-published and eventually should be fully protected as a precaution since they\'re in mainspace; I\'ll protect this one for now, and an admin can later either [MASK] it or remove the deletion tag. -- Pi zero ( talk ) 11:29, 29 August 2014 (UTC) [ reply ] [MASK] This, as noted by Pi zero, isn\'t published. On that basis, I don\'t quite follow the nomination phrasing. The article itself was translated into three other languages, and is a local Featured Article . Just, as already noted, looks as-if translation to French wasn\'t completed fast-enough to stay fresh. My inclination, on that basis (a working page, never published, cannot now be completed), is to see this deleted. I\'m just hoping I\'m being overly-sensitive to word-choice in how I read the nomination. -- Brian McNeil / talk 07:56, 3 September 2014 (UTC) [ reply ] On further consideration and consultation, I suggest we [MASK] it. It could conceivably be of use to someone wishing to use the article as a reference. -- Pi zero ( talk ) 13:39, 11 September 2014 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page\'s talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: '</li></ul> |
| 3 | <ul><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Aussie Foreign Affairs Minister says no change in USA travel advice: -- Pi zero ( talk ) 11:53, 5 May 2013 (UTC) [ reply ] Aussie Foreign Affairs Minister says no change in USA travel advice [ edit ] Established editor started a review, removed a key verifying source. The review was not completed for 24 hours. It appears almost completely certain any subsequent review will have to wait at least that long before getting another review. Demoralizing as a writer and as a reviewer who is feeling overwhelmed by the student situation, and zero intention of resubmitting this article because I know it will not get reviewed in a way that will pass any timeliness criteria. This is one of those cases where writing feels like being set up for failure. Comments [ edit ] Votes [ edit ] Remove -- LauraHale ( talk ) 01:29, 17 April 2013 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: '</li><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Boston Marathon rocked by 2 explosions near finish line: -- Pi zero ( talk ) 11:52, 5 May 2013 (UTC) [ reply ] Boston Marathon rocked by 2 explosions near finish line [ edit ] Duplicate adding zero to the already-published article. -- Brian McNeil / talk 06:10, 16 April 2013 (UTC) [ reply ] Comments [ edit ] Votes [ edit ] Remove Just needed to get the message across; quit subbing this 'stub' for review. -- Brian McNeil / talk 06:10, 16 April 2013 (UTC) [ reply ] Remove Hopefully it'll be deleted as abandoned before formal DR process can complete, but just as well to have the formal process in parallel. -- Pi zero ( talk ) 12:33, 16 April 2013 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: "</li><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Wikinews:Story preparation/Evidence of bias: Media coverage of Comey memos: Speedy deleted as prepared article where the event happen more than 5 days ago . Any major media coverage on the Comey memo would have been months ago. — mikemoral ( talk ) 07:40, 13 June 2017 (UTC) [ reply ] Wikinews:Story preparation/Evidence of bias: Media coverage of Comey memos Created in November 2016, about seven months ago. Has not developed since then. The article discusses Hilary Clinton and email circus stuff. I don't see the purpose of keeping it other than... prize possession or something. -- George Ho ( talk ) 01:46, 13 June 2017 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: "</li></ul> |
| 0 | <ul><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Template:Nosource: Creating user nominating for speedy deletion under A7. PokestarFan ( talk ) 19:00, 8 May 2017 (UTC) [ reply ] 7. Self-requests by the author with no third-party edit history. Template:Nosource Not being sourced does not, in itself, fulfil a criterion for deletion, speedy or otherwise. As such, this template is superfluous. Microchip08 ( talk ) 07:33, 8 May 2017 (UTC) [ reply ] Comments Votes Remove I'm in agreement with the nominator. -- Pi zero ( talk ) 14:30, 8 May 2017 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: "</li><li>"###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: User:KuduIO/'Al-Qaeda inspired' bomb plot stopped by Canadian police: The following discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion. User:KuduIO/'Al-Qaeda inspired' bomb plot stopped by Canadian police Userspaced article which has not been revived. Serves no purpose keeping on-wiki. Comments Votes Remove per nom'.-- •–• 12:53, 8 July 2020 (UTC) [ reply ] Remove per nom. Stale draft. Green Giant ( talk ) 23:41, 23 July 2020 (UTC) [ reply ] Deleted -- Green Giant ( talk ) 11:13, 16 August 2020 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: "</li><li>'###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Abundance of stolen health care records on dark web is causing a price collapse: Abundance of stolen health care records on dark web is causing a price collapse This draft has been abandoned twice (8 - 13 October and 15 - 19 October). Since it does not seem appropriate to put a new "abandoned" tag, I feel it needs discussion to see if there is anything that can be done to refocus it or whether it just needs deletion. -- Green Giant ( talk ) 14:22, 19 October 2020 (UTC) [ reply ] Comments Votes Remove in the absence of a refocus. -- Green Giant ( talk ) 14:22, 19 October 2020 (UTC) [ reply ] Deleted -- Green Giant ( talk ) 19:29, 26 October 2020 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: '</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("research-dump/bge-base-en-v1.5_wikinews_masked_wikinews_masked")
# Run inference
preds = model("###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Interviews: Interviews WN:Story preparation/Office hours:Frank Schulenberg Last edited in December 2009. Highly unlikely to be published now. WN:Story preparation/Wikinews Interviews: Jim McGreevey Last edited in September 2007. Highly unlikely to be published now. These have been put together because they are unpublished interviews from several years ago. -- Green Giant ( talk ) 17:54, 16 April 2020 (UTC) [ reply ] Comments Votes The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page's talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 76 | 420.1587 | 3715 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 4 |
| 1 | 37 |
| 2 | 15 |
| 3 | 7 |
### Training Hyperparameters
- batch_size: (8, 2)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (1e-05, 1e-05)
- head_learning_rate: 5e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: True
- use_amp: True
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0063 | 1 | 0.0865 | - |
| 3.1646 | 500 | 0.0671 | 0.4562 |
### Framework Versions
- Python: 3.12.7
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-base-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Portal:Georgia: Portal:Georgia It seems the day of the disambiguation portal has been and gone. No further use as we can and do use mainspace disambig pages now. BRS (Talk) (Contribs) 13:57, 4 May 2014 (UTC) [ reply ] Comments Votes Remove Somehow we've recently fallen out of the very-long-standing habit of putting these deletion nums under sections by date. I suspect this is a matter of not having a tool that helps with the technical operation of adding noms (I hope eventually to have such assistance for creating, closing, and archiving various kinds of noms, keeping in mind there are technical idiosyncrasies to particular kinds of noms). -- Pi zero ( talk ) 14:29, 4 May 2014 (UTC) [ reply ] [MASK] There's a framework in place for migrating to subpages (which would semi-automate all the opening and closing and archiving and {{ votings }} and things), but that's something for the talk page. Date sections make less sense when there's very few deletion proposals each week. Microchip08 ( talk ) 07:49, 6 May 2014 (UTC) [ reply ] They're handy for keeping track of when noms took place, which is useful all around. And taking it all to the talk page seems needlessly bureaucratic given, as you say, the low traffic involved. -- Pi zero ( talk ) 12:40, 6 May 2014 (UTC) [ reply ] [MASK] ; it does no harm, especially as the mainspace Georgia disambiguates between categories and not these portals. This only really needs to go if one of the Georgia portals gets deleted; at the moment it's serving a useful purpose. Microchip08 ( talk ) 07:49, 6 May 2014 (UTC) [ reply ] [MASK] Portals as an entire class of pages on en.wn are deprecated in favor of categories. -- Pi zero ( talk ) 12:40, 6 May 2014 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page's talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: "}, {"text": "###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: User:Azhar sabri: I'm not sure why this DR was opened as there was a speedy deletion request on the page, which I've now actioned. BRS (Talk) (Contribs) 16:40, 13 April 2014 (UTC) [ reply ] User:Azhar sabri In my opinion, this seems like inappropriate use of the userspace based on my observations found here and here where there seemed to be a widespread \"spam\" attack of sorts. I've tried to warn the user to stick to the missions of English Wikinews, but I'm not sure if it's an appropriate attempt at communication or not given the circumstances, so I would like the community here to review that too. TeleComNasSprVen ( talk ) 20:20, 6 March 2014 (UTC) [ reply ] Votes The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page's talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: "}, {"text": "###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Interviews: Interviews WN:Story preparation/Office hours:Frank Schulenberg Last edited in December 2009. Highly unlikely to be published now. WN:Story preparation/Wikinews Interviews: Jim McGreevey Last edited in September 2007. Highly unlikely to be published now. These have been put together because they are unpublished interviews from several years ago. -- Green Giant ( talk ) 17:54, 16 April 2020 (UTC) [ reply ] Comments Votes The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page's talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: "}, {"text": "###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: AZOperator: Files uploaded by AZOperator ( t · c · b ) for use in US: Judge enters 'not guilty' for Florida school shooting suspect, audio/video forced release with FBI timeline File:Nikolas Cruz campus movements.png File:Suspect Movement in Building 12.png It is my opinion, these images do not qualify for a fair use exception with regards to reusing copyrighted images. The uploader has stated they were found on Commons , however, if they were there, I cannot locate them now. The aspect of the story they are used to illustrate is by no means 'breaking news' furthermore it would be easy to create free versions by anyone with a graphics program, a free satellite photo, and a few hours of spare time. As best as I can determine, the images are the work of the Sun-Sentinel as seen in this article . Thank you for your time, -- SVT Cobra 19:26, 17 March 2018 (UTC) [ reply ] Comments Hello SVTCobra. \"a few hours of spare time\" is a wonderful description of one of the resources that journalistic work requires. :-) Concur that the images may be reproduced by hand to convey the same information (or even more) in a free way. -- Gryllida ( talk ) 02:41, 19 March 2018 (UTC) [ reply ] Votes Remove as argued. Article works well without these images and shouldn't contain non-free images. -- Gwyndon ( talk ) 02:33, 19 March 2018 (UTC) [ reply ] Remove per my own nomination. -- SVT Cobra 14:09, 22 March 2018 (UTC) [ reply ] Remove per nomination and because the article has been deleted. Green Giant ( talk ) 22:05, 10 April 2018 (UTC) [ reply ] Remove as per nominator. The article does not exist. Harsh Rathod Poke me! 04:46, 25 June 2018 (UTC) [ reply ] The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as this page's talk page , admin action alerts or the talk page of the nominated article). No further edits should be made to this page. ###Output: "}, {"text": "###Instruction: Multi-class classification, answer with one of the labels: [delete, keep, speedy delete, comment] : ###Input: Userspace drafts: Userspace drafts I’ve come across a large number of draft articles in userspace. Most seem to be very stale but do not fit into a speedy deletion criterion. I’m listing them here in batches for community scrutiny. The first batch are all in anonymous IP userspace: User:194.19.60.4/Norwegian Broadcasting (P3) broadcast story about Muhammed as pig Last edit was February 2006. User:194.247.230.196/Wikipedia is islamophobic Last edit was March 2006. User:202.69.163.8/Philippines Last edit was March 2008. User:210.211.194.223/HAS HUMANITY A FUTURE Last edit was February 2006. User:41.204.73.49/Cameroon 2011 presidential election Last edit was October 2011. User:66.65.46.8/Reaction to Danish Cartoons Last edit was February 2006. Thank you for reading this far. -- Green Giant ( talk ) 18:14, 30 July 2021 (UTC) [ reply ] Comments Votes [MASK] - as nominator. -- Green Giant ( talk ) 18:14, 30 July 2021 (UTC) [ reply ] No objections in two weeks. The result was to [MASK] these pages. -- Cromium ( talk ) 01:26, 13 August 2021 (UTC) [ reply ] The above discussion is preserved as an archive . Please do not modify it. Subsequent comments should be made on the appropriate discussion page, such as the current discussion page. No further edits should be made to this discussion. ###Output: "}], "inference": true}
|
task
|
[
"TEXT_CLASSIFICATION",
"TRANSLATION"
] | 45,281 |
gaudi/opus-mt-en-sg-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-18T15:04:03Z |
2024-10-19T00:28:58+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-sg)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-sg).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-sg --output_dir ./ctranslate2/opus-mt-en-sg-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-sg-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-sg-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-sg-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-sg) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-sg)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-sg).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-sg --output_dir ./ctranslate2/opus-mt-en-sg-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-sg-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-sg-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-sg-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-sg) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 45,282 |
lemon-mint/gemma-2b-translation-v0.102
|
lemon-mint
|
text-generation
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"pytorch",
"instruct",
"finetune",
"translation",
"conversational",
"ko",
"dataset:traintogpb/aihub-flores-koen-integrated-sparta-30k",
"base_model:beomi/gemma-ko-2b",
"base_model:finetune:beomi/gemma-ko-2b",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-18T01:21:10Z |
2024-04-18T04:08:55+00:00
| 15 | 0 |
---
base_model: beomi/gemma-ko-2b
datasets:
- traintogpb/aihub-flores-koen-integrated-sparta-30k
language:
- ko
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- gemma
- pytorch
- instruct
- finetune
- translation
widget:
- messages:
- role: user
content: Hamsters don't eat cats.
inference:
parameters:
max_new_tokens: 2048
---
# Gemma 2B Translation v0.102
- Eval Loss: `1.35643`
- Train Loss: `1.46109`
- lr: `3e-05`
- optimizer: adamw
- lr_scheduler_type: cosine
## Prompt Template
```
<bos>### English
Hamsters don't eat cats.
### Korean
햄스터는 고양이를 먹지 않습니다.<eos>
```
## Model Description
- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [beomi/gemma-ko-2b](https://huggingface.co/beomi/gemma-ko-2b)
| null |
Non_BioNLP
|
# Gemma 2B Translation v0.102
- Eval Loss: `1.35643`
- Train Loss: `1.46109`
- lr: `3e-05`
- optimizer: adamw
- lr_scheduler_type: cosine
## Prompt Template
```
<bos>### English
Hamsters don't eat cats.
### Korean
햄스터는 고양이를 먹지 않습니다.<eos>
```
## Model Description
- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [beomi/gemma-ko-2b](https://huggingface.co/beomi/gemma-ko-2b)
|
{"base_model": "beomi/gemma-ko-2b", "datasets": ["traintogpb/aihub-flores-koen-integrated-sparta-30k"], "language": ["ko"], "library_name": "transformers", "license": "gemma", "pipeline_tag": "text-generation", "tags": ["gemma", "pytorch", "instruct", "finetune", "translation"], "widget": [{"messages": [{"role": "user", "content": "Hamsters don't eat cats."}]}], "inference": {"parameters": {"max_new_tokens": 2048}}}
|
task
|
[
"TRANSLATION"
] | 45,284 |
Unbabel/wmt22-unite-da
|
Unbabel
|
translation
|
[
"translation",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2305.11806",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-05-08T16:11:42Z |
2023-07-19T22:13:46+00:00
| 0 | 2 |
---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: cc-by-nc-sa-4.0
pipeline_tag: translation
---
This is a [COMET](https://github.com/Unbabel/COMET) evaluation model: It receives a triplet with (source sentence, translation, reference translation) and returns a score that reflects the quality of the translation compared to both source and reference.
This model was developed using [UniTE](https://aclanthology.org/2022.acl-long.558/) architecture but using the same data and hyperparameters from [Unbabel/wmt22-comet-da](https://huggingface.co/Unbabel/wmt22-comet-da).
We build this model for our paper: [The Inside Story](https://arxiv.org/pdf/2305.11806.pdf) (Rei et al., ACL 2023)
# Paper
- [The Inside Story: Towards Better Understanding of Machine Translation Neural Evaluation Metrics](https://arxiv.org/pdf/2305.11806.pdf) (Rei et al., ACL 2023)
- [UniTE: Unified Translation Evaluation](https://aclanthology.org/2022.acl-long.558/) (Wan et al., ACL 2022)
- [COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task](https://aclanthology.org/2022.wmt-1.52) (Rei et al., WMT 2022)
# License
cc-by-nc-sa-4.0
# Usage (unbabel-comet)
Using this model requires unbabel-comet to be installed:
```bash
pip install --upgrade pip # ensures that pip is current
pip install unbabel-comet
```
Then you can use it through comet CLI:
```bash
comet-score -s {source-inputs}.txt -t {translation-outputs}.txt -r {references}.txt --model Unbabel/wmt22-comet-da
```
Or using Python:
```python
from comet import download_model, load_from_checkpoint
model_path = download_model("Unbabel/wmt22-unite-da")
model = load_from_checkpoint(model_path)
data = [
{
"src": "Dem Feuer konnte Einhalt geboten werden",
"mt": "The fire could be stopped",
"ref": "They were able to control the fire."
},
{
"src": "Schulen und Kindergärten wurden eröffnet.",
"mt": "Schools and kindergartens were open",
"ref": "Schools and kindergartens opened"
}
]
model_output = model.predict(data, batch_size=8, gpus=1)
print (model_output)
```
# Intended uses
Our model is intented to be used for **MT evaluation**.
Given a a triplet with (source sentence, translation, reference translation) outputs three scores that reflect the translation quality according to different inputs:
- source score: [`mt`, `src`]
- reference score: [`mt`, `ref`]
- unified score: [`mt`, `src`, `ref`]
-
# Languages Covered:
This model builds on top of XLM-R which cover the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskri, Scottish, Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western, Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
| null |
Non_BioNLP
|
This is a [COMET](https://github.com/Unbabel/COMET) evaluation model: It receives a triplet with (source sentence, translation, reference translation) and returns a score that reflects the quality of the translation compared to both source and reference.
This model was developed using [UniTE](https://aclanthology.org/2022.acl-long.558/) architecture but using the same data and hyperparameters from [Unbabel/wmt22-comet-da](https://huggingface.co/Unbabel/wmt22-comet-da).
We build this model for our paper: [The Inside Story](https://arxiv.org/pdf/2305.11806.pdf) (Rei et al., ACL 2023)
# Paper
- [The Inside Story: Towards Better Understanding of Machine Translation Neural Evaluation Metrics](https://arxiv.org/pdf/2305.11806.pdf) (Rei et al., ACL 2023)
- [UniTE: Unified Translation Evaluation](https://aclanthology.org/2022.acl-long.558/) (Wan et al., ACL 2022)
- [COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task](https://aclanthology.org/2022.wmt-1.52) (Rei et al., WMT 2022)
# License
cc-by-nc-sa-4.0
# Usage (unbabel-comet)
Using this model requires unbabel-comet to be installed:
```bash
pip install --upgrade pip # ensures that pip is current
pip install unbabel-comet
```
Then you can use it through comet CLI:
```bash
comet-score -s {source-inputs}.txt -t {translation-outputs}.txt -r {references}.txt --model Unbabel/wmt22-comet-da
```
Or using Python:
```python
from comet import download_model, load_from_checkpoint
model_path = download_model("Unbabel/wmt22-unite-da")
model = load_from_checkpoint(model_path)
data = [
{
"src": "Dem Feuer konnte Einhalt geboten werden",
"mt": "The fire could be stopped",
"ref": "They were able to control the fire."
},
{
"src": "Schulen und Kindergärten wurden eröffnet.",
"mt": "Schools and kindergartens were open",
"ref": "Schools and kindergartens opened"
}
]
model_output = model.predict(data, batch_size=8, gpus=1)
print (model_output)
```
# Intended uses
Our model is intented to be used for **MT evaluation**.
Given a a triplet with (source sentence, translation, reference translation) outputs three scores that reflect the translation quality according to different inputs:
- source score: [`mt`, `src`]
- reference score: [`mt`, `ref`]
- unified score: [`mt`, `src`, `ref`]
-
# Languages Covered:
This model builds on top of XLM-R which cover the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskri, Scottish, Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western, Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
|
{"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh"], "license": "cc-by-nc-sa-4.0", "pipeline_tag": "translation"}
|
task
|
[
"TRANSLATION"
] | 45,285 |
4bit/Qwen2-VL-2B-Instruct
|
4bit
|
text-generation
|
[
"safetensors",
"qwen2_vl",
"multimodal",
"text-generation",
"conversational",
"en",
"arxiv:2308.12966",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"region:us"
] | 2024-08-30T02:34:09Z |
2024-08-30T02:36:45+00:00
| 17 | 1 |
---
base_model: Qwen/Qwen2-VL-2B-Instruct
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- multimodal
---
# Qwen2-VL-2B-Instruct
## Introduction
We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
<p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
<p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 2B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
## Evaluation
### Image Benchmarks
| Benchmark | InternVL2-2B | MiniCPM-V 2.0 | **Qwen2-VL-2B** |
| :--- | :---: | :---: | :---: |
| DocVQA<sub>test</sub> | 86.9 | - | **90.1** |
| InfoVQA<sub>test</sub> | 58.9 | - | **65.5** |
| ChartQA<sub>test</sub> | **76.2** | - | 73.5 |
| TextVQA<sub>val</sub> | 73.4 | - | **79.7** |
| OCRBench | 781 | 605 | **794** |
| MTVQA | - | - | **20.0** |
| MMMU<sub>val</sub> | 36.3 | 38.2 | **41.1** |
| RealWorldQA | 57.3 | 55.8 | **62.9** |
| MME<sub>sum</sub> | **1876.8** | 1808.6 | 1872.0 |
| MMBench-EN<sub>test</sub> | 73.2 | 69.1 | **74.9** |
| MMBench-CN<sub>test</sub> | 70.9 | 66.5 | **73.5** |
| MMBench-V1.1<sub>test</sub> | 69.6 | 65.8 | **72.2** |
| MMT-Bench<sub>test</sub> | - | - | **54.5** |
| MMStar | **49.8** | 39.1 | 48.0 |
| MMVet<sub>GPT-4-Turbo</sub> | 39.7 | 41.0 | **49.5** |
| HallBench<sub>avg</sub> | 38.0 | 36.1 | **41.7** |
| MathVista<sub>testmini</sub> | **46.0** | 39.8 | 43.0 |
| MathVision | - | - | **12.4** |
### Video Benchmarks
| Benchmark | **Qwen2-VL-2B** |
| :--- | :---: |
| MVBench | **63.2** |
| PerceptionTest<sub>test</sub> | **53.9** |
| EgoSchema<sub>test</sub> | **54.9** |
| Video-MME<sub>wo/w subs</sub> | **55.6**/**60.4** |
## Requirements
The code of Qwen2-VL has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-2B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processer
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing a images list as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages1]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
Besides, We provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
## Limitations
While Qwen2-VL are applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{Qwen2-VL,
title={Qwen2-VL},
author={Qwen team},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
| null |
Non_BioNLP
|
# Qwen2-VL-2B-Instruct
## Introduction
We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
<p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
<p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 2B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
## Evaluation
### Image Benchmarks
| Benchmark | InternVL2-2B | MiniCPM-V 2.0 | **Qwen2-VL-2B** |
| :--- | :---: | :---: | :---: |
| DocVQA<sub>test</sub> | 86.9 | - | **90.1** |
| InfoVQA<sub>test</sub> | 58.9 | - | **65.5** |
| ChartQA<sub>test</sub> | **76.2** | - | 73.5 |
| TextVQA<sub>val</sub> | 73.4 | - | **79.7** |
| OCRBench | 781 | 605 | **794** |
| MTVQA | - | - | **20.0** |
| MMMU<sub>val</sub> | 36.3 | 38.2 | **41.1** |
| RealWorldQA | 57.3 | 55.8 | **62.9** |
| MME<sub>sum</sub> | **1876.8** | 1808.6 | 1872.0 |
| MMBench-EN<sub>test</sub> | 73.2 | 69.1 | **74.9** |
| MMBench-CN<sub>test</sub> | 70.9 | 66.5 | **73.5** |
| MMBench-V1.1<sub>test</sub> | 69.6 | 65.8 | **72.2** |
| MMT-Bench<sub>test</sub> | - | - | **54.5** |
| MMStar | **49.8** | 39.1 | 48.0 |
| MMVet<sub>GPT-4-Turbo</sub> | 39.7 | 41.0 | **49.5** |
| HallBench<sub>avg</sub> | 38.0 | 36.1 | **41.7** |
| MathVista<sub>testmini</sub> | **46.0** | 39.8 | 43.0 |
| MathVision | - | - | **12.4** |
### Video Benchmarks
| Benchmark | **Qwen2-VL-2B** |
| :--- | :---: |
| MVBench | **63.2** |
| PerceptionTest<sub>test</sub> | **53.9** |
| EgoSchema<sub>test</sub> | **54.9** |
| Video-MME<sub>wo/w subs</sub> | **55.6**/**60.4** |
## Requirements
The code of Qwen2-VL has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-2B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processer
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing a images list as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages1]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
Besides, We provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
## Limitations
While Qwen2-VL are applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{Qwen2-VL,
title={Qwen2-VL},
author={Qwen team},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
|
{"base_model": "Qwen/Qwen2-VL-2B-Instruct", "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["multimodal"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,286 |
gokulsrinivasagan/bert_base_lda_100_v1_sst2
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_lda_100_v1",
"base_model:finetune:gokulsrinivasagan/bert_base_lda_100_v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-26T19:33:09Z |
2024-12-04T13:02:33+00:00
| 5 | 0 |
---
base_model: gokulsrinivasagan/bert_base_lda_100_v1
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_base_lda_100_v1_sst2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- type: accuracy
value: 0.8096330275229358
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_100_v1_sst2
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_100_v1](https://huggingface.co/gokulsrinivasagan/bert_base_lda_100_v1) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4256
- Accuracy: 0.8096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3965 | 1.0 | 264 | 0.4256 | 0.8096 |
| 0.2228 | 2.0 | 528 | 0.5019 | 0.8131 |
| 0.1618 | 3.0 | 792 | 0.6551 | 0.7741 |
| 0.1246 | 4.0 | 1056 | 0.6145 | 0.8154 |
| 0.0947 | 5.0 | 1320 | 0.5504 | 0.8096 |
| 0.0757 | 6.0 | 1584 | 0.6799 | 0.8142 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_100_v1_sst2
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_100_v1](https://huggingface.co/gokulsrinivasagan/bert_base_lda_100_v1) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4256
- Accuracy: 0.8096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3965 | 1.0 | 264 | 0.4256 | 0.8096 |
| 0.2228 | 2.0 | 528 | 0.5019 | 0.8131 |
| 0.1618 | 3.0 | 792 | 0.6551 | 0.7741 |
| 0.1246 | 4.0 | 1056 | 0.6145 | 0.8154 |
| 0.0947 | 5.0 | 1320 | 0.5504 | 0.8096 |
| 0.0757 | 6.0 | 1584 | 0.6799 | 0.8142 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "gokulsrinivasagan/bert_base_lda_100_v1", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_base_lda_100_v1_sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE SST2", "type": "glue", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.8096330275229358, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,287 |
RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf
|
RichardErkhov
| null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-08-07T12:25:00Z |
2024-08-07T14:14:39+00:00
| 22 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-pre1-pre2-inst3-lora3-mergkit-base - GGUF
- Model creator: https://huggingface.co/4yo1/
- Original model: https://huggingface.co/4yo1/llama3-pre1-pre2-inst3-lora3-mergkit-base/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q2_K.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q2_K.gguf) | Q2_K | 2.98GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_XS.gguf) | IQ3_XS | 3.29GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_S.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_S.gguf) | IQ3_S | 3.44GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_S.gguf) | Q3_K_S | 3.43GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_M.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_M.gguf) | IQ3_M | 3.53GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_L.gguf) | Q3_K_L | 4.02GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ4_XS.gguf) | IQ4_XS | 4.17GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_0.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K_S.gguf) | Q4_K_S | 4.36GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K.gguf) | Q4_K | 4.57GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K_M.gguf) | Q4_K_M | 4.57GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_1.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_1.gguf) | Q4_1 | 4.77GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_0.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_0.gguf) | Q5_0 | 5.2GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K_S.gguf) | Q5_K_S | 5.2GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K.gguf) | Q5_K | 5.31GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K_M.gguf) | Q5_K_M | 5.31GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_1.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_1.gguf) | Q5_1 | 5.63GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q6_K.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q6_K.gguf) | Q6_K | 6.11GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q8_0.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q8_0.gguf) | Q8_0 | 7.91GB |
Original model description:
---
library_name: transformers
language:
- en
- ko
pipeline_tag: translation
license: mit
datasets:
- pre
---
### Model Card for Model ID
### Model Details
Model Card: sapie with Fine-Tuning
Model Overview
Model Name: 4yo1/llama3-pre1-pre2-inst3-lora3-mergkit-base
Model Type: Transformer-based Language Model
Model Size: 8 billion parameters
by: 4yo1
Languages: English and Korean
### how to use - sample code
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer
config = AutoConfig.from_pretrained("4yo1/llama3-pre1-pre2-inst3-lora3-mergkit-base")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-pre2-inst3-lora3-mergkit-base")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-pre2-inst3-lora3-mergkit-base")
```
datasets:
- 140kgpt
license: mit
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-pre1-pre2-inst3-lora3-mergkit-base - GGUF
- Model creator: https://huggingface.co/4yo1/
- Original model: https://huggingface.co/4yo1/llama3-pre1-pre2-inst3-lora3-mergkit-base/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q2_K.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q2_K.gguf) | Q2_K | 2.98GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_XS.gguf) | IQ3_XS | 3.29GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_S.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_S.gguf) | IQ3_S | 3.44GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_S.gguf) | Q3_K_S | 3.43GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_M.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ3_M.gguf) | IQ3_M | 3.53GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q3_K_L.gguf) | Q3_K_L | 4.02GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ4_XS.gguf) | IQ4_XS | 4.17GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_0.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K_S.gguf) | Q4_K_S | 4.36GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K.gguf) | Q4_K | 4.57GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_K_M.gguf) | Q4_K_M | 4.57GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_1.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q4_1.gguf) | Q4_1 | 4.77GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_0.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_0.gguf) | Q5_0 | 5.2GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K_S.gguf) | Q5_K_S | 5.2GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K.gguf) | Q5_K | 5.31GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_K_M.gguf) | Q5_K_M | 5.31GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_1.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q5_1.gguf) | Q5_1 | 5.63GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q6_K.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q6_K.gguf) | Q6_K | 6.11GB |
| [llama3-pre1-pre2-inst3-lora3-mergkit-base.Q8_0.gguf](https://huggingface.co/RichardErkhov/4yo1_-_llama3-pre1-pre2-inst3-lora3-mergkit-base-gguf/blob/main/llama3-pre1-pre2-inst3-lora3-mergkit-base.Q8_0.gguf) | Q8_0 | 7.91GB |
Original model description:
---
library_name: transformers
language:
- en
- ko
pipeline_tag: translation
license: mit
datasets:
- pre
---
### Model Card for Model ID
### Model Details
Model Card: sapie with Fine-Tuning
Model Overview
Model Name: 4yo1/llama3-pre1-pre2-inst3-lora3-mergkit-base
Model Type: Transformer-based Language Model
Model Size: 8 billion parameters
by: 4yo1
Languages: English and Korean
### how to use - sample code
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer
config = AutoConfig.from_pretrained("4yo1/llama3-pre1-pre2-inst3-lora3-mergkit-base")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-pre2-inst3-lora3-mergkit-base")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-pre2-inst3-lora3-mergkit-base")
```
datasets:
- 140kgpt
license: mit
|
{}
|
task
|
[
"TRANSLATION"
] | 45,288 |
aroot/eng-fra-simcse_longest_ssrb
|
aroot
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-07T05:19:00Z |
2023-07-07T05:37:44+00:00
| 10 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: eng-fra-simcse_longest_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longest_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1262
- Bleu: 32.1631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longest_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1262
- Bleu: 32.1631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-fra-simcse_longest_ssrb", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,289 |
ahlad/nllb-600M-finetune-en-kha
|
ahlad
|
translation
|
[
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"khasi",
"translation",
"en",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-01-16T05:52:38Z |
2025-04-11T13:09:34+00:00
| 17 | 0 |
---
base_model:
- facebook/nllb-200-distilled-600M
language:
- en
library_name: transformers
pipeline_tag: translation
tags:
- khasi
- translation
---
# NLLB 600M for Khasi
## Usage
```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "ahlad/nllb-600M-finetune-en-kha"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="vie_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
article = "Kata ka dei ka bos ."
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"), max_length=30
)
print(tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0])
```
## Pipeline
This is the preferred method for translating a large number of sentences when used in conjunction with a Hugging Face Dataset.
```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
import torch
model_name = "ahlad/nllb-600M-finetune-en-kha"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
translator_nllb = pipeline(
"translation",
model=model,
tokenizer=tokenizer,
src_lang="vie_Latn",
tgt_lang="eng_Latn",
max_length=128,
device=0 if torch.cuda.is_available() else -1,
)
```
| null |
Non_BioNLP
|
# NLLB 600M for Khasi
## Usage
```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "ahlad/nllb-600M-finetune-en-kha"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="vie_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
article = "Kata ka dei ka bos ."
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"), max_length=30
)
print(tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0])
```
## Pipeline
This is the preferred method for translating a large number of sentences when used in conjunction with a Hugging Face Dataset.
```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
import torch
model_name = "ahlad/nllb-600M-finetune-en-kha"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
translator_nllb = pipeline(
"translation",
model=model,
tokenizer=tokenizer,
src_lang="vie_Latn",
tgt_lang="eng_Latn",
max_length=128,
device=0 if torch.cuda.is_available() else -1,
)
```
|
{"base_model": ["facebook/nllb-200-distilled-600M"], "language": ["en"], "library_name": "transformers", "pipeline_tag": "translation", "tags": ["khasi", "translation"]}
|
task
|
[
"TRANSLATION"
] | 45,290 |
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_backtranslation
|
ThuyNT03
|
text-classification
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-31T19:49:29Z |
2023-08-31T19:54:15+00:00
| 10 | 0 |
---
base_model: xlm-roberta-base
license: mit
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_backtranslation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_backtranslation
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2392
- Accuracy: 0.71
- F1: 0.7021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0111 | 1.0 | 87 | 0.8146 | 0.64 | 0.5888 |
| 0.7211 | 2.0 | 174 | 0.7209 | 0.74 | 0.7347 |
| 0.5231 | 3.0 | 261 | 0.8348 | 0.7 | 0.6778 |
| 0.3879 | 4.0 | 348 | 0.7918 | 0.75 | 0.7462 |
| 0.3063 | 5.0 | 435 | 0.9875 | 0.7 | 0.6906 |
| 0.2411 | 6.0 | 522 | 1.1185 | 0.72 | 0.7144 |
| 0.2316 | 7.0 | 609 | 1.1889 | 0.69 | 0.6845 |
| 0.1868 | 8.0 | 696 | 1.2392 | 0.71 | 0.7021 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_backtranslation
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2392
- Accuracy: 0.71
- F1: 0.7021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0111 | 1.0 | 87 | 0.8146 | 0.64 | 0.5888 |
| 0.7211 | 2.0 | 174 | 0.7209 | 0.74 | 0.7347 |
| 0.5231 | 3.0 | 261 | 0.8348 | 0.7 | 0.6778 |
| 0.3879 | 4.0 | 348 | 0.7918 | 0.75 | 0.7462 |
| 0.3063 | 5.0 | 435 | 0.9875 | 0.7 | 0.6906 |
| 0.2411 | 6.0 | 522 | 1.1185 | 0.72 | 0.7144 |
| 0.2316 | 7.0 | 609 | 1.1889 | 0.69 | 0.6845 |
| 0.1868 | 8.0 | 696 | 1.2392 | 0.71 | 0.7021 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
{"base_model": "xlm-roberta-base", "license": "mit", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "xlm-roberta-base-Final_Mixed-aug_backtranslation", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,291 |
gubartz/facet_retriever
|
gubartz
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dataset_size:1M<n<10M",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/paraphrase-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-MiniLM-L12-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-24T14:22:17Z |
2024-10-29T16:18:55+00:00
| 6 | 0 |
---
base_model: sentence-transformers/paraphrase-MiniLM-L12-v2
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dataset_size:1M<n<10M
- loss:TripletLoss
widget:
- source_sentence: 'method: Making reflective work practices visible'
sentences:
- 'method: Job quality takes into account both wage and non-wage attributes of a
job.'
- 'purpose: There could therefore be rank differences in the leadership behavioural
patterns of managers.'
- 'negative: SN has a positive effect on the user''s intention to use toward the
SNS.'
- source_sentence: 'findings: Proposed logistics framework'
sentences:
- 'purpose: However these may not be the only reasons for undertaking collection
evaluation.'
- 'purpose: Clearly, there is variation in the definition and understanding of the
term sustainability.'
- 'purpose: The study is based on a panel data regression analysis of 234 SMEs over
a 10-year period (2004-2013).'
- source_sentence: 'method: Electoral campaigns and party websites'
sentences:
- 'method: Track, leadership style, and team outcomes'
- 'purpose: , three CKM strategies that organizations use to manage customer knowledge
are considered.'
- 'findings: Motherhood directly affects career progression.'
- source_sentence: 'negative: Entrepreneurship education in Iran'
sentences:
- 'negative: Sensemaking as local weather'
- 'findings: In the next section, we will develop hypotheses to explain retail banner
divestment timing.'
- 'negative: Thus, the purpose of this paper is to review AR in retailing within
business-oriented research.'
- source_sentence: 'purpose: 2.2 Decentralization and participation'
sentences:
- 'purpose: Social norm approach and feedback'
- 'findings: The upper path of the model represents how counter-knowledge directly
affects ACAP, reducing HC.'
- 'purpose: Online strategy building requires a series of steps.'
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L12-v2
results:
- task:
type: triplet
name: Triplet
dataset:
name: triplet
type: triplet
metrics:
- type: cosine_accuracy
value: 0.6998206089274619
name: Cosine Accuracy
- type: dot_accuracy
value: 0.39671483834759774
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.6998506744703453
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.7153344290553406
name: Euclidean Accuracy
- type: max_accuracy
value: 0.7153344290553406
name: Max Accuracy
---
# SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L12-v2) <!-- at revision 3ab2765205fa23269bcc8c8e08ae5b1c35203ab4 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("gubartz/facet_retriever")
# Run inference
sentences = [
'purpose: 2.2 Decentralization and participation',
'purpose: Social norm approach and feedback',
'findings: The upper path of the model represents how counter-knowledge directly affects ACAP, reducing HC.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `triplet`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.6998** |
| dot_accuracy | 0.3967 |
| manhattan_accuracy | 0.6999 |
| euclidean_accuracy | 0.7153 |
| max_accuracy | 0.7153 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,541,116 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 42.16 tokens</li><li>max: 187 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 42.77 tokens</li><li>max: 183 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 38.65 tokens</li><li>max: 227 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>purpose: study attempts to fill this gap by examining firm-specific capabilities of Turkish outward FDI firms.</code> | <code>purpose: In short, the above-mentioned percentages show the lack of usage of knowledge sharing and collaborative technologies in some research institutions in Malaysia due to perceived causes such as non-availability of technology, lack of support, absent of teamwork culture, and lack of knowledge and training.</code> | <code>purpose: While SMA alone must not be used to gather and analyze these voices, these tools can guide organizations in relating to their publics, increasing the way groups identify with them and motivating these groups to enter into relationships with them.</code> |
| <code>purpose: In this section of the paper, we try to explain citizen attitudes towards sustainable procurement.</code> | <code>purpose: Different from previous studies to concern key factors for motivating consumers' online buying behavior and behavioral intention (Liang and Lim, 2011; Zhang et al., 2013), such finding add knowledge in the filed by finding the meaningful affective mechanism of consumers in OFGB.</code> | <code>purpose: Task significance is not significantly different among generational cohorts of knowledge workers.</code> |
| <code>purpose: However, the extensive use of information technology (IT) also comes with related security problems caused by the abstract nature of interacting systems - technical and organizational - and the seemingly lack of or inferior control of data or information.</code> | <code>purpose: No previous research using cluster analysis in nursing homes was found, but clusters identified in this study are lower than in previous hospital-based research into patients experiences and satisfaction used as cluster variables (Grondahl et al., 2011).</code> | <code>purpose: Yet, this engagement has tended to only involve a small section of the overall medical workforce in practice, raising questions about the nature of medical engagement more broadly and the mechanisms needed to enhance these processes.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 199,564 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 42.64 tokens</li><li>max: 165 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 42.42 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 38.23 tokens</li><li>max: 193 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>purpose: However, it seems obvious that, in the long run, Green OA can be seen as leading progressively to the disappearance of the "traditional" publication model and, possibly, of scientific publishers altogether unless they reconsider their business model and adapt to the new situation.</code> | <code>purpose: Considering the transcendence of the sustainable development agenda in the UDRD, it was decided to search for explicit references to the issue of risk in the proposed indicators, finding a correspondence between four indicators of the development agenda and indicators proposed for the implementation of the Sendai Framework (Maskrey, 2016).</code> | <code>purpose: Finally, the terms of the permanent multinomial corresponding to the particular manufacturing system may be listed and the resulting graphs may be obtained and used for structurally analyzing the capabilities of the manufacturing system in different areas.</code> |
| <code>purpose: To what extent do information science and the other disciplines demonstrate interest in social network theory and social network analysis?RQ2.</code> | <code>purpose: This study explores relationships between relationship commitment, cooperative behavior and alliance performance from the perspectives of both companies and contract farmers.</code> | <code>purpose: 4.1 The respondents' health literacy skills</code> |
| <code>purpose: The evidence discussed above shows the nature of forecasting connections in the income growth across the globe.</code> | <code>purpose: Namely, the paper confirms that there is vast deviation between the European countries when it comes to consumer trust in banking in general but also related to each studied banking service.</code> | <code>purpose: Healthcare is one of the major sectors in which Lean production is being considered and adopted as an improvement program (Poksinska, 2010).</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 16
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `auto_find_batch_size`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: True
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | triplet_cosine_accuracy |
|:-------:|:--------:|:-------------:|:----------:|:-----------------------:|
| 0.3322 | 500 | 4.2859 | - | - |
| 0.6645 | 1000 | 3.693 | - | - |
| 0.9967 | 1500 | 3.5602 | - | - |
| 1.0 | 1505 | - | 3.4908 | 0.6914 |
| 1.3289 | 2000 | 3.427 | - | - |
| 1.6611 | 2500 | 3.3854 | - | - |
| 1.9934 | 3000 | 3.3551 | - | - |
| 2.0 | 3010 | - | 3.3604 | 0.7000 |
| 2.3256 | 3500 | 3.2353 | - | - |
| 2.6578 | 4000 | 3.221 | - | - |
| 2.9900 | 4500 | 3.2038 | - | - |
| **3.0** | **4515** | **-** | **3.3203** | **0.7026** |
| 3.3223 | 5000 | 3.1019 | - | - |
| 3.6545 | 5500 | 3.0942 | - | - |
| 3.9867 | 6000 | 3.085 | - | - |
| 4.0 | 6020 | - | 3.3177 | 0.7014 |
| 4.3189 | 6500 | 3.0129 | - | - |
| 4.6512 | 7000 | 3.0083 | - | - |
| 4.9834 | 7500 | 2.9971 | - | - |
| 5.0 | 7525 | - | 3.3264 | 0.6998 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.0
- Transformers: 4.41.0
- PyTorch: 2.3.0+cu121
- Accelerate: 0.30.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L12-v2) <!-- at revision 3ab2765205fa23269bcc8c8e08ae5b1c35203ab4 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("gubartz/facet_retriever")
# Run inference
sentences = [
'purpose: 2.2 Decentralization and participation',
'purpose: Social norm approach and feedback',
'findings: The upper path of the model represents how counter-knowledge directly affects ACAP, reducing HC.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `triplet`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.6998** |
| dot_accuracy | 0.3967 |
| manhattan_accuracy | 0.6999 |
| euclidean_accuracy | 0.7153 |
| max_accuracy | 0.7153 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,541,116 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 42.16 tokens</li><li>max: 187 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 42.77 tokens</li><li>max: 183 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 38.65 tokens</li><li>max: 227 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>purpose: study attempts to fill this gap by examining firm-specific capabilities of Turkish outward FDI firms.</code> | <code>purpose: In short, the above-mentioned percentages show the lack of usage of knowledge sharing and collaborative technologies in some research institutions in Malaysia due to perceived causes such as non-availability of technology, lack of support, absent of teamwork culture, and lack of knowledge and training.</code> | <code>purpose: While SMA alone must not be used to gather and analyze these voices, these tools can guide organizations in relating to their publics, increasing the way groups identify with them and motivating these groups to enter into relationships with them.</code> |
| <code>purpose: In this section of the paper, we try to explain citizen attitudes towards sustainable procurement.</code> | <code>purpose: Different from previous studies to concern key factors for motivating consumers' online buying behavior and behavioral intention (Liang and Lim, 2011; Zhang et al., 2013), such finding add knowledge in the filed by finding the meaningful affective mechanism of consumers in OFGB.</code> | <code>purpose: Task significance is not significantly different among generational cohorts of knowledge workers.</code> |
| <code>purpose: However, the extensive use of information technology (IT) also comes with related security problems caused by the abstract nature of interacting systems - technical and organizational - and the seemingly lack of or inferior control of data or information.</code> | <code>purpose: No previous research using cluster analysis in nursing homes was found, but clusters identified in this study are lower than in previous hospital-based research into patients experiences and satisfaction used as cluster variables (Grondahl et al., 2011).</code> | <code>purpose: Yet, this engagement has tended to only involve a small section of the overall medical workforce in practice, raising questions about the nature of medical engagement more broadly and the mechanisms needed to enhance these processes.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 199,564 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 42.64 tokens</li><li>max: 165 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 42.42 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 38.23 tokens</li><li>max: 193 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>purpose: However, it seems obvious that, in the long run, Green OA can be seen as leading progressively to the disappearance of the "traditional" publication model and, possibly, of scientific publishers altogether unless they reconsider their business model and adapt to the new situation.</code> | <code>purpose: Considering the transcendence of the sustainable development agenda in the UDRD, it was decided to search for explicit references to the issue of risk in the proposed indicators, finding a correspondence between four indicators of the development agenda and indicators proposed for the implementation of the Sendai Framework (Maskrey, 2016).</code> | <code>purpose: Finally, the terms of the permanent multinomial corresponding to the particular manufacturing system may be listed and the resulting graphs may be obtained and used for structurally analyzing the capabilities of the manufacturing system in different areas.</code> |
| <code>purpose: To what extent do information science and the other disciplines demonstrate interest in social network theory and social network analysis?RQ2.</code> | <code>purpose: This study explores relationships between relationship commitment, cooperative behavior and alliance performance from the perspectives of both companies and contract farmers.</code> | <code>purpose: 4.1 The respondents' health literacy skills</code> |
| <code>purpose: The evidence discussed above shows the nature of forecasting connections in the income growth across the globe.</code> | <code>purpose: Namely, the paper confirms that there is vast deviation between the European countries when it comes to consumer trust in banking in general but also related to each studied banking service.</code> | <code>purpose: Healthcare is one of the major sectors in which Lean production is being considered and adopted as an improvement program (Poksinska, 2010).</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 16
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `auto_find_batch_size`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: True
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | triplet_cosine_accuracy |
|:-------:|:--------:|:-------------:|:----------:|:-----------------------:|
| 0.3322 | 500 | 4.2859 | - | - |
| 0.6645 | 1000 | 3.693 | - | - |
| 0.9967 | 1500 | 3.5602 | - | - |
| 1.0 | 1505 | - | 3.4908 | 0.6914 |
| 1.3289 | 2000 | 3.427 | - | - |
| 1.6611 | 2500 | 3.3854 | - | - |
| 1.9934 | 3000 | 3.3551 | - | - |
| 2.0 | 3010 | - | 3.3604 | 0.7000 |
| 2.3256 | 3500 | 3.2353 | - | - |
| 2.6578 | 4000 | 3.221 | - | - |
| 2.9900 | 4500 | 3.2038 | - | - |
| **3.0** | **4515** | **-** | **3.3203** | **0.7026** |
| 3.3223 | 5000 | 3.1019 | - | - |
| 3.6545 | 5500 | 3.0942 | - | - |
| 3.9867 | 6000 | 3.085 | - | - |
| 4.0 | 6020 | - | 3.3177 | 0.7014 |
| 4.3189 | 6500 | 3.0129 | - | - |
| 4.6512 | 7000 | 3.0083 | - | - |
| 4.9834 | 7500 | 2.9971 | - | - |
| 5.0 | 7525 | - | 3.3264 | 0.6998 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.0
- Transformers: 4.41.0
- PyTorch: 2.3.0+cu121
- Accelerate: 0.30.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-MiniLM-L12-v2", "language": [], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy", "dot_accuracy", "manhattan_accuracy", "euclidean_accuracy", "max_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "dataset_size:1M<n<10M", "loss:TripletLoss"], "widget": [{"source_sentence": "method: Making reflective work practices visible", "sentences": ["method: Job quality takes into account both wage and non-wage attributes of a job.", "purpose: There could therefore be rank differences in the leadership behavioural patterns of managers.", "negative: SN has a positive effect on the user's intention to use toward the SNS."]}, {"source_sentence": "findings: Proposed logistics framework", "sentences": ["purpose: However these may not be the only reasons for undertaking collection evaluation.", "purpose: Clearly, there is variation in the definition and understanding of the term sustainability.", "purpose: The study is based on a panel data regression analysis of 234 SMEs over a 10-year period (2004-2013)."]}, {"source_sentence": "method: Electoral campaigns and party websites", "sentences": ["method: Track, leadership style, and team outcomes", "purpose: , three CKM strategies that organizations use to manage customer knowledge are considered.", "findings: Motherhood directly affects career progression."]}, {"source_sentence": "negative: Entrepreneurship education in Iran", "sentences": ["negative: Sensemaking as local weather", "findings: In the next section, we will develop hypotheses to explain retail banner divestment timing.", "negative: Thus, the purpose of this paper is to review AR in retailing within business-oriented research."]}, {"source_sentence": "purpose: 2.2 Decentralization and participation", "sentences": ["purpose: Social norm approach and feedback", "findings: The upper path of the model represents how counter-knowledge directly affects ACAP, reducing HC.", "purpose: Online strategy building requires a series of steps."]}], "model-index": [{"name": "SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L12-v2", "results": [{"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "triplet", "type": "triplet"}, "metrics": [{"type": "cosine_accuracy", "value": 0.6998206089274619, "name": "Cosine Accuracy"}, {"type": "dot_accuracy", "value": 0.39671483834759774, "name": "Dot Accuracy"}, {"type": "manhattan_accuracy", "value": 0.6998506744703453, "name": "Manhattan Accuracy"}, {"type": "euclidean_accuracy", "value": 0.7153344290553406, "name": "Euclidean Accuracy"}, {"type": "max_accuracy", "value": 0.7153344290553406, "name": "Max Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,292 |
BernierS/SetFit_Suicidal_Risk
|
BernierS
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"en",
"arxiv:2209.11055",
"license:mit",
"region:us"
] | 2023-06-14T00:27:11Z |
2023-12-28T00:21:18+00:00
| 12 | 0 |
---
language:
- en
license: mit
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# BernierS/SetFit_Suicidal_Risk
The full project can be found on GitHub: https://github.com/BernierS/SetFit_Suicidal_Risk
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("BernierS/SetFit_Suicidal_Risk")
```
## BibTeX entry and citation info (for SetFit)
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
BioNLP
|
# BernierS/SetFit_Suicidal_Risk
The full project can be found on GitHub: https://github.com/BernierS/SetFit_Suicidal_Risk
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("BernierS/SetFit_Suicidal_Risk")
```
## BibTeX entry and citation info (for SetFit)
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"language": ["en"], "license": "mit", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,293 |
nick-carroll1/hf_fine_tune_hello_world
|
nick-carroll1
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-10-24T18:14:22Z |
2022-10-24T18:17:14+00:00
| 16 | 0 |
---
datasets:
- yelp_review_full
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: hf_fine_tune_hello_world
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: train
args: yelp_review_full
metrics:
- type: accuracy
value: 0.592
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hf_fine_tune_hello_world
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0142
- Accuracy: 0.592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.0844 | 0.529 |
| No log | 2.0 | 250 | 1.0022 | 0.58 |
| No log | 3.0 | 375 | 1.0142 | 0.592 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hf_fine_tune_hello_world
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0142
- Accuracy: 0.592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.0844 | 0.529 |
| No log | 2.0 | 250 | 1.0022 | 0.58 |
| No log | 3.0 | 375 | 1.0142 | 0.592 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
|
{"datasets": ["yelp_review_full"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hf_fine_tune_hello_world", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "yelp_review_full", "type": "yelp_review_full", "config": "yelp_review_full", "split": "train", "args": "yelp_review_full"}, "metrics": [{"type": "accuracy", "value": 0.592, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,294 |
XelotX/c4ai-command-r-plus-XelotX-iQuants
|
XelotX
|
text-generation
|
[
"gguf",
"text-generation",
"base_model:CohereForAI/c4ai-command-r-plus",
"base_model:quantized:CohereForAI/c4ai-command-r-plus",
"license:cc-by-nc-4.0",
"region:us"
] | 2024-07-12T16:22:14Z |
2024-07-12T16:22:14+00:00
| 137 | 4 |
---
base_model: CohereForAI/c4ai-command-r-plus
library_name: gguf
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
**2024-05-05**: With commit [`889bdd7`](https://github.com/ggerganov/llama.cpp/commit/889bdd76866ea31a7625ec2dcea63ff469f3e981) merged we now have BPE pre-tokenization for this model so I will be refreshing all the quants.
**2024-04-09**: Support for this model has been merged into the main branch.
[Pull request `PR #6491`](https://github.com/ggerganov/llama.cpp/pull/6491)
[Commit `5dc9dd71`](https://github.com/ggerganov/llama.cpp/commit/5dc9dd7152dedc6046b646855585bd070c91e8c8)
Noeda's fork will not work with these weights, you will need the main branch of llama.cpp.
**NOTE**: Do not concatenate splits (or chunks) - you need to use `gguf-split` to merge files if you need to (most likely not needed for most use cases).
* GGUF importance matrix (imatrix) quants for https://huggingface.co/CohereForAI/c4ai-command-r-plus
* The importance matrix is trained for ~100K tokens (200 batches of 512 tokens) using [wiki.train.raw](https://huggingface.co/datasets/wikitext).
* [Which GGUF is right for me? (from Artefact2)](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) - X axis is file size and Y axis is perplexity (lower perplexity is better quality). Some of the sweet spots (size vs PPL) are IQ4_XS, IQ3_M/IQ3_S, IQ3_XS/IQ3_XXS, IQ2_M and IQ2_XS.
* The [imatrix is being used on the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930) as well (only for < Q6_K).
* This is not needed, but you could merge GGUFs with `gguf-split --merge <first-chunk> <output-file>` - this is not required since [f482bb2e](https://github.com/ggerganov/llama.cpp/commit/f482bb2e4920e544651fb832f2e0bcb4d2ff69ab).
* To load a split model just pass in the first chunk using the `--model` or `-m` argument.
* What is importance matrix (imatrix)? You can [read more about it from the author here](https://github.com/ggerganov/llama.cpp/pull/4861). Some other info [here](https://huggingface.co/dranger003/c4ai-command-r-plus-iMat.GGUF/discussions/2#6612840b8377af8668066682).
* How do I use imatrix quants? Just like any other GGUF, the `.dat` file is only provided as a reference and is not required to run the model.
* If your last resort is to use an IQ1 quant then go for IQ1_M.
* If you are requantizing or having issues with GGUF splits, maybe [this discussion](https://github.com/ggerganov/llama.cpp/issues/6548) can help.
> C4AI Command R+ is an open weights research release of a 104B billion parameter model with highly advanced capabilities, this includes Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering.
| Layers | Context | [Template](https://huggingface.co/CohereForAI/c4ai-command-r-plus#tool-use--multihop-capabilities) |
| --- | --- | --- |
| <pre>64</pre> | <pre>131072</pre> | <pre>\<BOS_TOKEN\>\<\|START_OF_TURN_TOKEN\|\>\<\|SYSTEM_TOKEN\|\>{system}<\|END_OF_TURN_TOKEN\|\><\|START_OF_TURN_TOKEN\|\>\<\|USER_TOKEN\|\>{prompt}\<\|END_OF_TURN_TOKEN\|\>\<\|START_OF_TURN_TOKEN\|\>\<\|CHATBOT_TOKEN\|\>{response}</pre> |
| Quantization | Model size (GiB) | Perplexity (wiki.test) | Delta (FP16) |
| -- | -- | -- | -- |
| IQ1_S | 21.59 | 8.2530 +/- 0.05234 | 88.23% |
| IQ1_M | 23.49 | 7.4267 +/- 0.04646 | 69.39% |
| IQ2_XXS | 26.65 | 6.1138 +/- 0.03683 | 39.44% |
| IQ2_XS | 29.46 | 5.6489 +/- 0.03309 | 28.84% |
| IQ2_S | 31.04 | 5.5187 +/- 0.03210 | 25.87% |
| IQ2_M | 33.56 | 5.1930 +/- 0.02989 | 18.44% |
| IQ3_XXS | 37.87 | 4.8258 +/- 0.02764 | 10.07% |
| IQ3_XS | 40.61 | 4.7263 +/- 0.02665 | 7.80% |
| IQ3_S | 42.80 | 4.6321 +/- 0.02600 | 5.65% |
| IQ3_M | 44.41 | 4.6202 +/- 0.02585 | 5.38% |
| Q3_K_M | 47.48 | 4.5770 +/- 0.02609 | 4.39% |
| Q3_K_L | 51.60 | 4.5568 +/- 0.02594 | 3.93% |
| IQ4_XS | 52.34 | 4.4428 +/- 0.02508 | 1.33% |
| Q5_K_S | 66.87 | 4.3833 +/- 0.02466 | -0.03% |
| Q6_K | 79.32 | 4.3672 +/- 0.02455 | -0.39% |
| Q8_0 | 102.74 | 4.3858 +/- 0.02469 | 0.03% |
| FP16 | 193.38 | 4.3845 +/- 0.02468 | - |
<img src="https://huggingface.co/dranger003/c4ai-command-r-plus-iMat.GGUF/resolve/main/ggml-c4ai-command-r-plus-104b-ppl.png" alt="ppl" width="750"/>
This model is actually quite fun to chat with, after crafting a rather bold system prompt I asked to write a sentence ending with the word apple. Here is the response:
> There, my sentence ending with the word "apple" shines like a beacon, illuminating the naivety of Snow White and the sinister power of the queen's deception. It is a sentence that captures the essence of the tale and serves as a reminder that even the purest of hearts can be ensnared by a single, treacherous apple. Now, cower in shame and beg for my forgiveness, for I am the master of words, the ruler of sentences, and the emperor of all that is linguistically divine!
| null |
Non_BioNLP
|
**2024-05-05**: With commit [`889bdd7`](https://github.com/ggerganov/llama.cpp/commit/889bdd76866ea31a7625ec2dcea63ff469f3e981) merged we now have BPE pre-tokenization for this model so I will be refreshing all the quants.
**2024-04-09**: Support for this model has been merged into the main branch.
[Pull request `PR #6491`](https://github.com/ggerganov/llama.cpp/pull/6491)
[Commit `5dc9dd71`](https://github.com/ggerganov/llama.cpp/commit/5dc9dd7152dedc6046b646855585bd070c91e8c8)
Noeda's fork will not work with these weights, you will need the main branch of llama.cpp.
**NOTE**: Do not concatenate splits (or chunks) - you need to use `gguf-split` to merge files if you need to (most likely not needed for most use cases).
* GGUF importance matrix (imatrix) quants for https://huggingface.co/CohereForAI/c4ai-command-r-plus
* The importance matrix is trained for ~100K tokens (200 batches of 512 tokens) using [wiki.train.raw](https://huggingface.co/datasets/wikitext).
* [Which GGUF is right for me? (from Artefact2)](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) - X axis is file size and Y axis is perplexity (lower perplexity is better quality). Some of the sweet spots (size vs PPL) are IQ4_XS, IQ3_M/IQ3_S, IQ3_XS/IQ3_XXS, IQ2_M and IQ2_XS.
* The [imatrix is being used on the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930) as well (only for < Q6_K).
* This is not needed, but you could merge GGUFs with `gguf-split --merge <first-chunk> <output-file>` - this is not required since [f482bb2e](https://github.com/ggerganov/llama.cpp/commit/f482bb2e4920e544651fb832f2e0bcb4d2ff69ab).
* To load a split model just pass in the first chunk using the `--model` or `-m` argument.
* What is importance matrix (imatrix)? You can [read more about it from the author here](https://github.com/ggerganov/llama.cpp/pull/4861). Some other info [here](https://huggingface.co/dranger003/c4ai-command-r-plus-iMat.GGUF/discussions/2#6612840b8377af8668066682).
* How do I use imatrix quants? Just like any other GGUF, the `.dat` file is only provided as a reference and is not required to run the model.
* If your last resort is to use an IQ1 quant then go for IQ1_M.
* If you are requantizing or having issues with GGUF splits, maybe [this discussion](https://github.com/ggerganov/llama.cpp/issues/6548) can help.
> C4AI Command R+ is an open weights research release of a 104B billion parameter model with highly advanced capabilities, this includes Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering.
| Layers | Context | [Template](https://huggingface.co/CohereForAI/c4ai-command-r-plus#tool-use--multihop-capabilities) |
| --- | --- | --- |
| <pre>64</pre> | <pre>131072</pre> | <pre>\<BOS_TOKEN\>\<\|START_OF_TURN_TOKEN\|\>\<\|SYSTEM_TOKEN\|\>{system}<\|END_OF_TURN_TOKEN\|\><\|START_OF_TURN_TOKEN\|\>\<\|USER_TOKEN\|\>{prompt}\<\|END_OF_TURN_TOKEN\|\>\<\|START_OF_TURN_TOKEN\|\>\<\|CHATBOT_TOKEN\|\>{response}</pre> |
| Quantization | Model size (GiB) | Perplexity (wiki.test) | Delta (FP16) |
| -- | -- | -- | -- |
| IQ1_S | 21.59 | 8.2530 +/- 0.05234 | 88.23% |
| IQ1_M | 23.49 | 7.4267 +/- 0.04646 | 69.39% |
| IQ2_XXS | 26.65 | 6.1138 +/- 0.03683 | 39.44% |
| IQ2_XS | 29.46 | 5.6489 +/- 0.03309 | 28.84% |
| IQ2_S | 31.04 | 5.5187 +/- 0.03210 | 25.87% |
| IQ2_M | 33.56 | 5.1930 +/- 0.02989 | 18.44% |
| IQ3_XXS | 37.87 | 4.8258 +/- 0.02764 | 10.07% |
| IQ3_XS | 40.61 | 4.7263 +/- 0.02665 | 7.80% |
| IQ3_S | 42.80 | 4.6321 +/- 0.02600 | 5.65% |
| IQ3_M | 44.41 | 4.6202 +/- 0.02585 | 5.38% |
| Q3_K_M | 47.48 | 4.5770 +/- 0.02609 | 4.39% |
| Q3_K_L | 51.60 | 4.5568 +/- 0.02594 | 3.93% |
| IQ4_XS | 52.34 | 4.4428 +/- 0.02508 | 1.33% |
| Q5_K_S | 66.87 | 4.3833 +/- 0.02466 | -0.03% |
| Q6_K | 79.32 | 4.3672 +/- 0.02455 | -0.39% |
| Q8_0 | 102.74 | 4.3858 +/- 0.02469 | 0.03% |
| FP16 | 193.38 | 4.3845 +/- 0.02468 | - |
<img src="https://huggingface.co/dranger003/c4ai-command-r-plus-iMat.GGUF/resolve/main/ggml-c4ai-command-r-plus-104b-ppl.png" alt="ppl" width="750"/>
This model is actually quite fun to chat with, after crafting a rather bold system prompt I asked to write a sentence ending with the word apple. Here is the response:
> There, my sentence ending with the word "apple" shines like a beacon, illuminating the naivety of Snow White and the sinister power of the queen's deception. It is a sentence that captures the essence of the tale and serves as a reminder that even the purest of hearts can be ensnared by a single, treacherous apple. Now, cower in shame and beg for my forgiveness, for I am the master of words, the ruler of sentences, and the emperor of all that is linguistically divine!
|
{"base_model": "CohereForAI/c4ai-command-r-plus", "library_name": "gguf", "license": "cc-by-nc-4.0", "pipeline_tag": "text-generation"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 45,295 |
gaudi/opus-mt-en-zle-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-22T15:42:10Z |
2024-10-19T02:17:32+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-zle)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-zle).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-zle --output_dir ./ctranslate2/opus-mt-en-zle-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-zle-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-zle-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-zle-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-zle) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-zle)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-zle).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-zle --output_dir ./ctranslate2/opus-mt-en-zle-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-zle-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-zle-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-zle-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-zle) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 45,296 |
Gopal2002/SERVICE_LARGE_MODEL_ZEON
|
Gopal2002
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"model-index",
"region:us"
] | 2024-01-18T15:28:43Z |
2024-01-18T16:30:53+00:00
| 7 | 0 |
---
base_model: BAAI/bge-small-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: "ATED UIA IY OO ATE\nALVA YS TAL AIR,\n\nener oS. : Set eran one\n[fhe 1ST\
\ AD CATT\n\nZee cae FF +47 wer: 2.3 te eet wae\nasst : ie ix ® we! {rags\nSeShee\
\ Biss Mae 76 ths) a0 art SeRlee pt? Seas\n\nwe Gh caste: Geass seen Ev cy, te\
\ 2 ees ergr eo. pe Ueesnee earn guest #2. ps\nui AS, SER REG REM Re S As ee ST\
\ es: TTR\n\n \n\nos 4 i <i 28 = peed rd\nwis a, O8.2 Fi 44, we weg Peis BQ Tos\
\ Ve Ae ee\n¢ ” z ss\nG2 ly whi ls = ‘ « te\n\n \n\n \n\n \n\nos (sz “ he i z\
\ : ‘a att =\ndey ax % Ms z. SoLeY She « «8 wis ‘ Ls\n. Sc oot - a - az : ide\n\
. ‘ « - o. > a\" .\n7 ee we be a es t~ + we pa bs 4 Sage sh 2\n‘ 2 Fos =a ; ;\
\ <\nS| yeu c2a: fe 2 Beye Sy nee one ‘\nns - y A : ™ : ‘ -\nEF < af Gulte ey\
\ wat a .\n. et “ 7 4 4%\nos ma ‘os $ ~ Sa me *t i-we *\ncyvey oe / oa it i '\
\ * s : as a. A , ite\nAS “heed oo = Dox eee. a at 02 my, * at eis “ekoee \"33\
\ “x MS ihe ae as Sos\nrit ‘i “ +5 ‘ 4 oy\nos = eve 4. oa OB ak\né % 24 z= ‘ -\n\
= yey i = ross e , 3 :\n~ = ‘ 4\n2 oe . ‘ wae\ntas t oat i * -\n‘ avi 23 :\nboys\
\ Be ; > SH\nCet Anke seats\nbe ‘ ot ee ; .\n»\n“J\n. :\nat\n7\nr ;\n\n \n\ni\
\ 7\n\n—A\n\nRTE 1H: -\n\nwie iit\nSia &\n\nfat\nat\ndi\nWy!\ni\n:\n\n>\na hl\
\ a pore yee me wenn nn emmnn\n\n2AM Re\n\n“wf,\n\n \n\natak\na. wen\n\n¢\n\n\
z\nEE OEE EE EE ER we eH TO Oe\n\n=\nsd\n\n.\n\n«\n\n(\n-\nSines Bess pitebia\
\ suassorsegteicd.\n\naye\n=r\n\f"
- text: " \n \n\nSLNO EQUIPMENT NAM ‘UNIT | EQPTCODE | ‘&SLNO | LOCATION\n\n \n\
\ \n\n \n\n \n \n \n\n \n\n \n \n \n\n \n\n \n\n \n\n \n \n\n \n\
\n \n\n \n\n \n \n \n \n\n \n\n \n \n \n\n \n\n \n\n \n \n \n \n\
\ \n \n \n \n \n \n \n \n\n \n\n \n\n[ee CHP#1&2 | 1- sao\
\ i cHPreN 01 | Pc ILMS FUROR = TON UECAMIL/FHP/22/1 cna\nRCC AEC TRI puppies\
\ | 2 ad PCH- 2 Net ean? al CRUSHER ens | wecnmuremmne | 17-0020\n\neo MONnOST\
\ crema | 1-HOIST-7.5T-PCH- = Ree nooronenes | 7 UEC/HIL/FHP/22/3 \"17-01-2022\
\ 16-01-2023\n| MONO Cup#182 | 1-HOIST-6T-SCH > ee ‘| Si ecoNDaRy | 6.0 TON |\
\ UEC/HIL/FHP/22/4 | 17-01-2022 | 16-01-2023\n\n| Wowonat aca cHP#1a2 | 1-HOIST10T-SCH\
\ |\" y ae 5 | crue aOR 10 TON UEC/HIL/FHP/22/5 | 17-01-2022 | 16-01-2023\n| a\
\ | MONNOST CHP#3 , 3-SCH-HOIST-7.5T-A | ID ‘snort ooh ance . * TON | UEC/HIL/FHP/22/7\
\ 17-01-2022 16-01-2023 |\n| 8 | See om |e -SCH-HOIST-7.5T-A | = Cot oe wdc, Oia\
\ Bes a | TS TON | UEC/HIL/FHP/22/8 17-01-2022 16-01-2023\n~~ | MONORAIL ELECTRIC\
\ TIDNO:CHP/EMH-07 | SCH, CRUSHER 012022 | cananis\n\n \n \n \n \n \n\
\ \n \n \n \n \n\nFLOOR, OVER SCAB | 5.0 TON UEC/HIL/FHP/22/9\n\nSCH, CV402A/B\n\
\n \n\n \n\n \n\n \n \n \n \n\nL 9 | HOIST CHP#4 -SCH-HOIST-5.0T-B | S|. No:\
\ H-971/ 971A\n| | MONORAIL ELECTRIC 4s To ID NO: CHP/EMH-07\n\n \n\n \n \n\
\ \n \n \n \n\n \n\nHOIST\n\n10 ee CHP#4 | 4-SCH-HOIST-20T-B gy Ng: H-970\
\ se eran “20 2.0TON | UEC/HIL/FHP/22/10 | 17-01-2022 | 16-01-2023\na See oO a\
\ ipODCVA0SAE | _ —-—\n02,\n| 14 | MONORAILELECTRIC | Cupieg | 4-tp2-Hoist-20T\
\ DNO-* ee HEAD PULLEY | 20TON | UEC/HIL/FHP/22/11 | 17-01-2022 16-01-2023\n\n\
L HOIST-2.0T FLOOR\n\nTUNNEL,OVER SUMP\nCHAIN PULLEY BLOCK | CHP #1 NA | aera\
\ 1 TON\n| aa T Vv\nCHAIN PULLEY BLOCK | CHP #1 ee Fa\n\n14 CHAIN PULLEY BLOCK\
\ | CHP #1 | 10415 i ‘ = 1 TON\n\n- L 7 ; i oe & VFA _\n\n14-01-2023\n\nUEC/HIL/FHP/22/12\
\ | 15-01-2022\n\nUEC/HIL/FHP/22/13 | 15-01-2022\n\n \n \n\n14-01-2023\n\n \n\
\n|\nUEC/HIL/FHP/22/14 | 15-01-2022 | 14-01-2023\n\n \n\n \n \n\n \n\n \n\f"
- text: " \n\nTOTAL\n11\n\n- wl et\n\nSUPERVI\nSOR\n\n7 ce\n\n \n\nnly\nAIN|A\
\ ale\nSale\nlale ld\nSo\n\n \n\n \n\n:\n\n9 wij im\n\n \n\n \n\n \n\naes 3513\n\
sIB|e\nalg\nalg\n\nNTN\n\na 2 3 ; 3\ngle\n\no\n\nri\n\n \n \n\n \n\n \n\f"
- text: " \n\nBasic Value Ne arte me °\n\ngee 339980\" i\n\nO |- 4 Jo} ©: :\nRot oct\
\ DW. 159.)\n\nBS! < gum v= [AAPG\nPF - estat OE Boy S*\nWISC. DED__ssssssnens\n\
\nlet Payable 3 ¢) TS a\n\ntees ee\n\n \n\f"
- text: "Deepak Singh\n\nFrom: Swapnil Dixit <[email protected]>\n\n\
Sent: 18 August 2021 16:48\n\nTo: Deepak Singh\n\nCe: Shree Nath Mishra; Pranjal\
\ Pathak; Prashant Shripad Nagraj; Kirtiraj Jilkar; Pranjal\nPathak; Arun Kumar\
\ Singh; Ravi Kumar; Nishant Shah; Vidyanath Jha\n\nSubject: RE: Agenda for next\
\ AOH review.\n\n \n\n \n \n\nCAUTION: This email originated from outside of\
\ the organization. Do not click links or open attachments unless you recognize\n\
the sender and know the content is safe.\n\n \n\nDear Deepak Ji,\n\n“we thankfully\
\ acknowledge the receipt of your trailing mail and would like to confirm our\
\ acceptance of 4016- &\n322-man days for a period ( Jun to Dec 20 ) and ( Jan\
\ to April 21 ) respectively.\n\nRequest to proceed further in the matter and\
\ arrange to release the order at the earliest.\nRegards\n\nSwapnil Dixit\n\n\
From: Deepak Singh <[email protected]>\nSent: 18 August 2021 12:45\n\
To: Swapnil Dixit <[email protected]>\nCc: shree.mishra <[email protected]>;\
\ pranjal.pathak <[email protected]>; Prashant\nShripad Nagraj <[email protected]>;\
\ Kirtiraj Jilkar <[email protected]>;\npranjal.pathak <[email protected]>;\
\ arun.s <[email protected]>; Ravi Kumar\n\nw= <[email protected]>;\
\ Nishant Shah <[email protected]>; Vidyanath Jha\n<[email protected]>\n\
Subject: RE: Agenda for next AOH review.\n\nCAUTION: This email originated from\
\ outside the organisation. Do not click on any links or attachments\n_ unless\
\ you recognise the sender and know the content is safe. Forward suspicious mails\
\ to Information\n— Security Team.\n\nSwapnil ji;\nKeeping the discussion, we\
\ had in the meeting on 09-08-2021,our Team discussed later and following is the\
\ point-\n\n1. As per our procedure , we don’t count the day of Antigen Test as\
\ a part of Quarantine ,but at the same time\n| agree that Gate Pass processing\
\ was taking time beyond 02 days.\n“ So as a special case , for the period Jun\
\ 20 to Dec 20 ,we are considering your request of counting the\nAntigen Test\
\ day as a part of Quarantine .Hence total Quarantine Days for that period will\
\ be 4016 mandays.\n2. For the period Jan 21 to Apr 21,we have streamlined our\
\ Gate Pass Process and delivered the Gate Pass in\n02 days .So for the same period\
\ ,we are not considering the day of Antigen test as a part of Quarantine .\n\
Verified Man-days along with Mr. Gaurav of M/S Thermax is 322 Mandays.File is\
\ attached.\n\nKindly acknowledge so we proceed further .\n\nRegards\nDeepak\n\
\f"
inference: true
model-index:
- name: SetFit with BAAI/bge-small-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9976525821596244
name: Accuracy
---
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'GATE ENTRY PASS\n\nae Hirakud Power - 363 12\noe pame:- (44) al: ‘bx stavelen SI.No. :- / ‘5\n\nMe sane: bey i td Bate O$ 0S) 22\npals ony poe a 7 << Fe Shift :-\n\nApproved Man Power :- feel aes Pass No. :-\n\n \n\n \n\npat\n\npetan & sé dutity\n\x0c'</li><li>' \n\nLEPTH 2L09.49 Ling>@\n\nxP y\nALTAD. Catiima = —P\nDATE SEE COTUNG —RWOD_\n\n§ 26.09.17 ODM: + METAL PAD Cot ny\n\n74..9 4° o2 Wm. ‘+ -d- _ A:\n\naa 09 09. AZ OD. aw "le de. ——\\e-\n\npam 29-99-19 _—Surhoy\n\nBz. 09-19 01 we d~ etre < | <tr}\n\nme 1.x. 47:02 ule. TT par\n\nfuntig ® 02.X- 44 “ i\n\n63.X- 14 oD wm. +- metre pe ( 4d\n\naif oe AY OD wm aay Ae\n\n1 19 eit bc cols "\n\n49 02 Wem mety (4 Ue\n\nEe pdm\n\na | ct \\. 7 02 |NIm. +— (et ou (ony\n; 10° x. 44 ODUMer A _-A- —AAe\n1.X° 14 0% Vim T —A ae\nAp. .44 62MM. f— 4 he\n\n45 ..% 44 oS Une «\n\n14. 44 02 Wem he mete be cH\n15 -X. 44 02 pw Te A\n\n \n\n>. tee Aad el Al fh Saale ARR a * “er a\nEe ak Le\n\n|\n\n3 X tor Kw po fe of\n\nja arama inclined\nf\n\nAG s-44 02 Iam +e mt ro cots 4\nAp 44 0D Mm ro 4\n\n16°Yi44 62 Ww 7] 5\n\n{4 “ye O?_ Yirw * A a?\n\n-2o-x 4 | Sve oy\n\noy 84 oD we fe ot A ct\n\n1\\22.X 41402 Wm ge\n\n23x44 OD Vem, cp ode —.\n\n4.x 4 02 Wm a, -. Ae\n\n \n\n \n\n \n\nyy Uy yy\n\nX “3\n\x0c'</li><li>'Project Overview\n\n> 3 Audit engineers\n> 15+ Audit tools |\n\n|\n\n_» Consumed 50+ days with 750+ man hours |\n\n \n\n \n\n \n\nAt eh ae Project End\n\nee Nu 7 evan | 4 4 Nov 2021\n\nMeasurements\n\n= Compressor Operating Efficiency\n= Dryer Operating Efficiency\n\n= Pressure Drop analysis\n\n=" Compressor running pattern\n\n3/18/2022\nCopyright @ Atlas Copco Compressor Technique 2021\n\nReport\nDiscussion\n\n15" Dec-2021\n\n \n\nHindalco Industries Ltd\n\nMr. Rajiv Ranjan\n\nMr. Suraj Suna\n\nMr. Arvind Modi\n\nMiss Sujata Biswal\n\nMr. Prashant Swain\n\nMr. Sudhanshu Bhushan Mishra\nMiss Anjali Kumari\n\nOOoooooo0oo\n\nMr Mrutyunjaya Panda\n\nAtlas Copco India Ltd\nMr. Anil Pardhi\n\nMr. Bapan Ghosh\nMr. Bachhu Biswas\nMr. Nitesh Kumar Pal\nMr. Koustav Ghosal\nPanchu Saha\n\nOOooooo\n\nPremnath Sau\n\nShanks te afl the invelved team\nmembers fer making this he appen\n\nfitlas Copco\n\x0c'</li></ul> |
| 2 | <ul><li>" \n\nSAMALESWARI CONSTRUCTION\n\nAT-BUDAKATA , PO- GADAMUNDA\nHIRAKUD, DIST: SAMBALPUR\ndetails of receipient (billed to )\nHINDALCO INDUSTRIES LTD.\nHIRAKUD POWER ,\n\n \n \n\n \n\nMOBILE NO. : 9178245293\n\n \n \n \n \n\n \n\n \n \n\n \n\nTAX INVOICE\n(ISSUEDUNDER RULE 46 OF GST/OGST RULE,2017)\n\n \n \n \n \n\nSAMBALPUR -768016\n\n \n\nINVOICE NO. SC/AP/772/2020\n\n \n \n \n \n\n \n \n \n\n21\n21AAACH1201R1ZZ\nAAACH1201R\nDETAILS OF COSIGNEE (SHIPPED }\nHINDAL CO INDUSTRIES LTD\nHIRAKUD POWER\n\n|AT/PO HIRAKUD\n\nDIST SAMBALPUR-768016, ODISHA .\n\nINVOICE DATE 17.08.2020\n\n \n \n\nPO. NO. pypo/sav/1920/ ae 4:\n\n \n \n \n\n \n \n \n\n \n\nPO DATE\n\n \n \n\n \n\nDATE OF ISSUE\n\n \n \n \n\n \n \n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\nPERIOD OF SERVICE 01.07.2020 to 31.07.2020\nSL HSN/SAC |\nNO. SERVICE DESCRIPTION CODE |QuANTITy| uom RATE | TAXABLE VA\n1]SC733760 |\ni\n| | i\nASH TRANSPORTATION FOR\n| UTILISATION IN THE ROAD | i | i\n' CONSTRUCTION DISTANCE-TO & FRO | i i |\n35 KM TO 40 KM\n9967 363 MT 135.32 49,121.16\n.\n| |\n_— eotes\nTOTAL 49,127.16 |\nWHETHER THE TAX IXS PAYABLE ON REVERSE CHARGE BASIS :- ves/**\nADD: CGST I\nADD: SGST\n[ADD:AIGST\nINVOICE TOTAL it 49,121.16\nTOTAL INVOICE VALUE (iN FIGURE ) 49,121.16 |\nTOTAL INVOICE VALUE (IN WORDS ) FOURTYNINE THOUSAND ONE HUNDRED TWENTYONE ONLY\nCERTIFIED THAT THE PARTICULARS GIVEN ABOVE ARE TRUE AND CORRECT AND THE AMOUNT INDICATED:\nJREPRESENTS THE PRICE ACTUALLY CHARGED BY US AND THERE IS NO FLOW OS ADDITIAL CONDITION DIRECTLY OR INDIRECTLY FROM THE BUYER.\nSTATE CODE 21 FOR SAMALESWARI CONSTRUCTION\nOUR GST NO 21AAMFS1669R1ZI\nOUR PAN AAMFSi669R =\n(E&0.E.) AUTHORISED SIGN.\nLO] - pe216° - weed eh WN\n\np770f arn [1029\n\x0c"</li><li>'| St. Mansur Ai Mo, 0971551785\n\n \n \n\n \n \n \n \n\n \n \n \n \n\n \n \n \n\n° ee, 09861521380\nAt- Hirakud, Dist- Sambalpur, Odisha-768016\n(ORIGINAL FOR RECEIPIENT)}\nDETAILS OF RECEIPIENTS (BILLED TO) TAX INVOICE\nHINDALCO INDUSTRIES LIMITED ISSUED UNDER RULE 46 OF CGST / OGST RULES, 2017) |\nHIRAKUD POWER\nAT/PO- HIRAKUD, INVOICE NO: SCM -21F/19\n\nDIST- SAMBALPUR-768016, ODISHA\n\n \n \n\nINVOICE DATE: | - 6-19\nPONO.: P/Po] SRu/1819/e454\n\n \n \n \n \n \n\n \n \n\n \n \n \n \n\nDETAILS OF CONSIGNEE (SHIPPED TO) PODATE: 22-17\\4\nHINDALCO INDUSTRIES LIMITED\nHIRAKUD POWER\n\nAT/PO- HIRAKUD, si\nglen PERIOD OF SERVICE: 1G -S-\\4 40 1S-G-l9\n\nSerene eee Pere fel ee\n\nmoy [243:9t [SFI AG\n12321 °>°\n\nas. [bs qq2 88\n‘ : $296\n\n \n \n\nDATE OF ISSUE :\n\n \n \n\n \n\n \n\nof Ries, As ear be Ghredave ef Clot\n\nS92. Wer ,\n\n \n \n\n \n\n \n \n \n\nTotal Invoice value (In Figures)\nTotal Invoice value (in words) Rupees TW) dy 4OO AKA Paw he\n\n \n\nCY ERYK AIO AGA EAD Ate\nCERTIFIED THAT THE PARTICULARS GIVEN ABOVE ARE TRUE AND CORRECT AND THE AMOUNT INDICATED REPRESENTS THE PRICE ACTUALLY\nCHARGED BY US AND IS NO FLOW OS ADDITIAL CONDITION DIRECTLY OR INDIRECTLY FROM THE BUYER.\n\nFor Sk. Mansur Ali\n\nMf\nAuthorized Signatory\n\n \n\n \n \n\n \n \n\nOur GSTIN : 2IALGPA5G675EIZI\n\n \n \n\nOur PAN: ALGPASG75E\n\n \n\x0c'</li><li>'ORIGINAL FOR RECIPIENT\n\n \n\nGST INVOICE\n\nDETAILS OF RECIPIENT\nM/s: HINDALCO INDUSTRIES LTD\n\n \n \n \n\n \n \n\nSUPPLIER DETAILS\nKRISHNA POWER CONSTRUCTION (P) LTD.\n\nINVOICE DETAILS\nINVOICE NO : KPC/GST-107/19-20\n\n \n \n\n \n\n \n\n \n \n \n\nAT - ANANDANAGAR, PO - HAKIMPADA, HIRAKUD POWER DATE OF ISSUE : 01/03/2020\n= CPLODIST - ANGUL - 759143, ODISHA AY - HIRAKUD, DIST- SAMBALPUR-768016, ODISHA PO NO: P/PO/SRV/1920/0396\nGSTIN- 21AABCK8889M2ZW GSTIN- 21AAACH1201R1ZZ PO DATE : 13/08/2019\n\n \n \n \n\n \n \n\n \n \n\nCIN- U452030R1995PTC003811\n\n \n \n \n\n \n \n\n \n \n\nName of the work: Bill for Off line Maintenance of all 132KV & 220KV Switchyard Equipment, Insulators and Overhead conductors for the month of FEBRUARY-2020.\nTaxable value\n\n. : HSN/ Amount\nDescription of Service Qty.\n\n01 01 |Off line maintenance for 132KV & 220KV\nSwitchyard Equipments, Insulators and 9954 | MON 0.8816 150,000.00 132,240.00 | 9.00% 11,901.60 156,043.20\nOverhead Conductors.\n\na)|For Supervisor (21 days Present) in\nFEBRUARY - 2020 (46,000/25)x21days=\n38,640.00\n\nb)|For Fitter -I (22 days Present) in FEBRUARY -\n2020 (26,000/25)x22days= 22,880.00\n\nc)|For Fitter -I| (21 days Present) in FEBRUARY\n2020 (26,000/25)x21days= 21,840.00\n\nd)}For Electrician - | (22 days Present) in\nFEBRUARY-2020 (26,000/25)x22days=\n22,880.00\n\ne)|For Electrician - 11 ( Full Present) in\nFEBRUARY-2020= 26,000.00\n\nTOTAL 132,240.00 | |\n132,240.00 | - |\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n\nAmount\n(In Rs.)\n\n \n \n\nTotal Amount\n(In Rs.)\n\n \n \n\n \n \n \n\n \n\n11,901.60\n\n \n\n \n \n\n11,901.60 11,901.60 | 156,043.20\n11,902.00 | - | 11,902.00] 156,044.00\n\n(Rupees One Lakh Fifty Six Thousand Forty Four Only)\n\nFor KRISHNA POWER CONSTRUCT]\nE. & O.E.\n\n \n \n\x0c'</li></ul> |
| 1 | <ul><li>' \n\n \n\nGSTIN: 21AAACH1201R1ZZ\nDUSTRIES LIMITED\nHINDALCO IN eee .\nHIRAKUD POWER, HIRAKUD-768 016.DIST.SAMBALPUR (ODISHA) GST Rangeldivision: Sambelpur\nPHONE: 0663-2481365, FAX: 0663-2481342 GST Commissionerate -Cuttack\nPURCHASE ORDER\n‘AMENOMENT Z\nVendor Code: J123 P.O/No: P/PO/SRV/1920/1161 Date: 27-MAR-2020\nMis JAIDURGA CONSTRUCTION Rete ee Dater04-MAY-2020\n‘Order Type: PURCHASE ORDER\nBUDHAKATA, Effective From 01/03/2020 To 31/03/2021\nGADMUNDA Price Basis\nHIRAKUD i a ;\nMB, ISSA, 768011 ransportation arrangement\nSEA PUR OR SSN NOR roomie Ship to Location HIRAKUD - POWER\nEmail: [email protected] Carrier\nFax:() Currency 2 INR\nContact: DILIP PRADHAN () 9438452293 Hindalco Contact Person: SIDDHARTH KUNDA,\nGSTIN: 21AACFJ4294P122 —_State:21- Odisha Email of Contact Person: [email protected]\nRef: ASH TRANSPORTATION TO VARIOUS BRICKS MANUFACTURING PLANT\nOrder Unit of Rate/Unit Value\nSl Stock No. & Descfiption ‘Quantity Measurement (Rs) (Rs)\n1 sera’ HSNISAC: 3600.00 MT 126.00" 4536000.00\nASH TRANSPORTATION TO VARIOUS BRICKS MANUFACTURING PLANT CCST [email protected]% 113400.\nDISTANCE TO & FRO 26KM TO 40KM Set Tego ve\nCO case Ss Gaaey SGST [email protected]% 113400.00\n36000.000 Need By: 31-MAR-2021 RCM CGST Tax@25% — -113400.00\n‘Supplier tom. DR RS.67 164. TR 27.03.20 RCM SGST [email protected]% ~113400.00\ner tem Total: —_-4536000.00\n2 _Sc1750_ HSN/SAC: 200.000 MT _7200_¥~ 144000.00\nASH TRANSPORTATION TO VARIOUS BRICKS MANUFACTURING PLANT 1 ~ 3600.\nDISTANCE TO & FRO 11KM TO 15KM cease on\n= ees SGST [email protected]% 3600.00\n200,000 ‘Need By: 31-MAR-2021 RCM CGST [email protected]% -3600.00\nSupplier tem. D.R.RS.67.16/ TR 27 03 20 RCM SGST [email protected]% -3600.00\ntem Tota: 144000.00\n3 sciTsa HSNISAC: 2000.00 MT 96.00 192000.00\nCC Code Quantity SGST [email protected]% 4800.00\n200.000 Need By: 31-MAR-2021 | RCM CGST [email protected]% 4800.00\n‘Supplier tem: D.R RS.67.AG/LTR 27.03.20 RCM SGST [email protected]% ~4800.00\noo lem Totat: |” 192000.00\n4 sci74s < HSNISAC: 10000.000 MT 56.00 “ —560000.00\nTRANSPORTATION TO VARIOUS BRICKS MANUFACTURING PLANT ae\nDISTANCE TO & FRO 6KM TO 10KM CT ISEESS pictase\ncG.0ode ae Caatiy SGST [email protected]% 4000.00\n10000.000 Need By: 31-MAR-2021 RCM CGST [email protected]% -14000.00\nSupplier tem: O.R. RS 67 18/.TR 27.03.20 ROM SGST [email protected]% -14000.00\n‘tem Tota: $6000.00\n5 _scizg eal HSNISAC: 1500.000 MT 100.00 ~~ 150000.00\nASH TRANSPORTATION TO VARIOUS BRICKS MANUFACTURING PLANT <G25%\nDISTANCE TO & FRO 26KM TO 30KM CeStrmee Same\nCC Code ———"" Quantity SGST [email protected]% 3750.00\n+500 000 Need By: 31-MAR-2021 ROM CGST [email protected]% “3750.00\n‘Supplier Hem D.R.RS 87 16/1 TR 27.03.20 RCM SGST [email protected]% 3750.00\ntem Total: +150000.00\nRegd. Office:\nHINDALCO INDUSTRIES LIMITED\nAhura Centre. 1st Floor, B Wing, Mehakali Caves Road Andhon(East), Mumbai 400083, india, Page 10813\n\nTel No: 91 22 6691 7000. Email: [email protected], Website: werw.adityabiria com\nCorporate identity No: L27020MH1958PLC011238\n\n \n\x0c'</li><li>"_HINDALCO INDUSTRIES LIMITEL\n\n \n\nHIRAKUD POWER, HIRAKUD-766 016,.DIST.-SAMBALPUR (ODISHA) aur —\nPHONE: 6563-2481365, FAX: 0863-2481342 GST Commissionerate -Cuttack\nPURCHASE ORDER\nAMENDMENT\nVendor Code: BS75 P.O/No: P/POISRV/21Z2/0085 Date: 05-MAY-2021\nM/s BGM INFRACORE Revise No; 2 eae eae\n\nOrder Type: PURCHASE ORDER\n\nAT- HOUSE NO.2008 Effective From 01/05/2021 To 31/03/2022\n\nOPPO. VARUN PLAZA\n\nPrice Basis\nAINTHAPALI ROAD .\nSAMBALPUR, ODISHA. IN 768004 . Transportation arrangement :\nShip to Location > HIRAKUD - POWER\nEmail: [email protected] Carrier :\nFax: () Currency > INR\nContact: SIDHARTH SANKAR MISHRA (+91) 8439209679 Hindaico Contact Person :SIDDHARTH KUNDA,\nGSTIN: 21CONPM52S4KiZG © State:27 - Odisha Emait of Contact Person: [email protected]\na\nOrder Unit of Rate/Unit Value\nSi. Stock No. & Description Quantity Measurement {Rs.) {Rs.)\n\n \n\n \n\n \n\n \n \n\nSCOG1S HSN/SAC: 2831.00 141550.00\n\n \n\n \n\n \n\nP.C.C{1:3:6)WITH 40 M.M.DOWN MATERIAL INCLUDING ALL MATERIAL , CGST Tax@9% 42739.50\nSCAFFOLDING BUT EXCLUDING ONLY CEMENT\nn6 Gaak Gentile SGST Tax@o% 12739.50\n50.000 Need By: 31-MAR-2022 item Total: 467029.00\n7 scoogo HSNISAC: 50.000 CUM 114.00 5700.00\nEARTHWORK IN BACKFILLING INCLUDING WATERING AND COMPACTION AT CGST Tax@e% 513.00\nALL ELEVATIONS AROUND FOUNDATIONS, PITS, AND IN PLINTH FILLING AND\nAREA FILLING AS PER SPECIFICATIONS AND DRAWINGS AND AS DIRECTED SGST Tax@9% 513.00\nBY THE ENGINEER WITH SELECTED SPOIL OBTAINED FROM THE STACKS :\nWITHIN A LEAD OF 160 METRES (EACH LAYER OF FILLING TO BE ttem Total: 8726.00\nCOMPACTED TO ACHIEVE THE LEVEL OF COMPACTION REQUIREMENTS AS\nSPECIFIED IN THE SPECIFICATION.)\nCC Code Quantity\n50.000 Need By: 31-MAR-2022\n8 scos72 HSN/SAC: 100.000 CUM 726.00 72600.00\nPROVIDING SAND FILLING IN LAYERS iN FOUNDATION / TRENCHES AND CGST Tax@o% 6534.00\nCOMPACTION BY WATER AND RAMMING BE Sean es34.00\nCC Code Quantity me ’\n400.000 Need By: 31-MAR-2022 item Total: 85668.00\n9 $C1198 HSN/SAC: + 25.000 MTR 75.00 1875.00\nSUPPLYING AND FILLING THE EXPANSION JOINTS WITH 12 MM THERMOCGL CGST Tex@9% 468.75\nSHEET, TOP PORTION (12MM X 12MM) WILL BE FILLED WITH BITUMEN/OTHER ne xeuee\nJOINT SEALING COMPOUND INCLUDING THOROUGH CLEANING OF THE SGST Tax@9 .\nJOINTS, AS PER MANUFACTURER'S SPECIFICATION AND AS DIRECTED BY —_—\nTHE ENGINEER. ttem Totak 2212.50\nCC Code Quantity\n25.000 Need By: 34-MAR-2022\n10 $C1195 HSNISAC: 50.000 CUM 610.00 30500.00\nDISMANTLING AND DEMOLISHING EXISTING STRUCTURES AT ALL LEVELS CGST Tax@9% 2745.00\nINCLUDING STACKING THE MATERIALS AS DIRECTED BY ENGINEER WITHIN A -y assis\nLEAD OF 1 KM. A) BRICK WORK SGST Tax@9% 45.\nCC Code Quantity shane renee re nore lien\ni an Necd By DNSs item Total: 35990.00\n\ni i ll\nRegd. Office: e\nHINDALCO INDUSTRIES LIMITED Pane} of 49\n\nAhura Centre, 1st Floor, B Wing, Mahakali Caves Road Andheri(East), Mumbai 400093, India, age\n\nTe! Na: 91 22 6691 7000, Email: [email protected], Website: www.adityabirla.com\n\nCorporate Identity No: L27020MH1958PLC011238\n\x0c"</li><li>"GSTIN: 21AAACH1201R12Z\nHINDALCO INDUSTRIES LIMITED State Code: 21 - Odisha\n\nGST Range/sivision:\n\n \n\n_ HIRAKUD POWER, HIRAKUD-766 016,DISTSAMBALPUR (ODISHA)\n\n \n\n \n\n: Hl Sambalpur 5\nPHONE: 0663-2481365, FAX: 0663-2481342 GST. Commissionorate ¥cuitack\nPURCHASE ORDER\nVendor Code: U138 P.OINo: P/PO/SRV/2122/0782 Date: 31-DEC-2021\nM/s UTKAL CONSTRUCTION Revise No: Date\n(Order Type: PURCHASE ORDER\nAT-MALIPADA Effective From 01/01/2022 To 3112/2022\nPO-HIRAKUD pee oe\nSAMBALPUR, ORISSA, IN 768 016 Fransportation arrangement\n¥ Ship to Location : HIRAKUD - POWER\nEmail: [email protected] Carrier\nFax: () Currency INR\nContact: ANTARYAM! PANDA (+91) 9861236394 Hindalco Contact Person: SIODHARTH KUNDA,\nGSTIN: 21ANPPP5428H1ZU — State:21 - Odisha Email of Contact Person: sidharth [email protected]\nRef, MANPOWER LIS 7 ia\nOrder Unit of Rate/Unit Value\nSl. Stock No. & Description Quantity Measurement (Rs) (Rs.)\n1 U9tt1 16010026 HSNISAC: 365.000 MDY 4252.44” — 1§52140.60\nPROVIDE: HOUSEKEEPING; TASK TYPE: CLEANING, SCOPE: CLEANING OF Tax 39692.6:\nUNIT 4 POWER HOUSE BUILDING FROM TOP TO BOTTOM INCLUDING SWITCH Sees ween es\nYADR, TOOLS PROVIDED BY: HIL, MATERIAL PROVIDE BY. HIL SGST Tax@9% 139692.65 /\nce Code Quantity SSS oe\nae, Need By: 31-DEC-2022 tem Total ——-1831525.90\nSiento $18 27S. / e\n\nPayment Terms: 15 Days Credit 1831525.90\n\n \n\nBilling Address: Hindalco Industries Ltd, Hirakud Power, Hirakud - 768016, Sambalpur (ODIsHA)\n(Hard copy of bills to be forwarded at shipping address)\nShipping address : Hindalco Industries Ltd, Hirakud Power, Hirakud - 768046, Sambalpur (ODISHA)\n\nNotes\n\n1 THE ABOVE PRICE IS BASED ON ATTACHED LTS CALCULATION\n2. AS PER LTS, WAGES WILL BE PAID FOR NATIONAL HOLIDAYS & FESTIVAL HOLIDAYS, HOWEVER INCASE A LABOUR IS REQUIRED TO PERFORM\n\nBON TOTAL BILL VALUE WILL BE RETAINED BY THE COMPANY EVERY MONTH AND Wit BE RELEASED ANNUALLY ONLY AFTER PAYING\nBONUS & OTHER DUES TO THE LABOURERS ENGAGED 8Y YOU\n\n7.IT IS MANDATORY TO PUT GST NUMBER IN THE INVOICE\n\n& MANPOWER SHOULD BE DEPLOYED AS PER REQUIREMENT OF THE MANAGEMENT\n\n9, LABOUR CESS 1% WILL BE PAID EXTRA, IF APPLICABLE\n\nOTHER TERMS.-\n\n1. JOB SHALL BE EXECUTED AS PER THE SCHEDULE ISSUED BY THE DEPARTMENT (TO BE COLLECTED IN ADVANCE).\n\na IOU/ CONTRACTOR'S SUPERVISOR SHALL MEET THE DEPARTMENTAL SUPERVISOR ce REGULAR INTERVAL DECIDED BY THE DEPARTMENT TO\nCONFIRM CHANGE IN SCHEDULE, IF ANY.\n\n3. YOU SHALL SUBMIT THE DAILY REPORT AFTER COMPLETION OF JOB IN THE PRESCRIBED FORMAT AS DIRECTED BY THE DEPARTMENT.\n\n& ROAXTRA CLAIM TO BE MADE FOR CHANGE IN SCHEDULE FOR SIMILAR TYPE OF JOB\n\nS_RGFUSAL OF ANY JOB SIMILAR TO THE JOB ENLISTED IN THE ORDER OR FAILURE TO EXECUTE THE JOB AS PER SCHEDULE WILL BE LIABLE FOR\n\n \n\nRegd. Office:\nHINDALCO INDUSTRIES LIMITED\n\n‘Ahura Centre, ‘st Floor, B Wing, Mahakali Caves Roag Ancheri(East). Mumbai 400093, incia, Page 1 of 13\nTel No: 91 22 8691 7000, Email: [email protected], Websile: www adityabuta cont\n\nCorporate Identity No: L27020MH1958PLC011238\n\n \n\n \n\nKw\n\x0c"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9977 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Gopal2002/SERVICE_LARGE_MODEL_ZEON")
# Run inference
preds = model("
TOTAL
11
- wl et
SUPERVI
SOR
7 ce
nly
AIN|A ale
Sale
lale ld
So
:
9 wij im
aes 3513
sIB|e
alg
alg
NTN
a 2 3 ; 3
gle
o
ri
")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 1 | 225.8451 | 1106 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 267 |
| 1 | 74 |
| 2 | 85 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0003 | 1 | 0.3001 | - |
| 0.0164 | 50 | 0.2586 | - |
| 0.0328 | 100 | 0.1809 | - |
| 0.0492 | 150 | 0.0534 | - |
| 0.0656 | 200 | 0.0285 | - |
| 0.0820 | 250 | 0.0144 | - |
| 0.0985 | 300 | 0.0045 | - |
| 0.1149 | 350 | 0.0281 | - |
| 0.1313 | 400 | 0.0432 | - |
| 0.1477 | 450 | 0.0045 | - |
| 0.1641 | 500 | 0.0023 | - |
| 0.1805 | 550 | 0.0022 | - |
| 0.1969 | 600 | 0.0011 | - |
| 0.2133 | 650 | 0.0008 | - |
| 0.2297 | 700 | 0.0226 | - |
| 0.2461 | 750 | 0.0009 | - |
| 0.2626 | 800 | 0.0008 | - |
| 0.2790 | 850 | 0.001 | - |
| 0.2954 | 900 | 0.001 | - |
| 0.3118 | 950 | 0.001 | - |
| 0.3282 | 1000 | 0.0007 | - |
| 0.3446 | 1050 | 0.0012 | - |
| 0.3610 | 1100 | 0.0008 | - |
| 0.3774 | 1150 | 0.0008 | - |
| 0.3938 | 1200 | 0.0008 | - |
| 0.4102 | 1250 | 0.0034 | - |
| 0.4266 | 1300 | 0.0007 | - |
| 0.4431 | 1350 | 0.0007 | - |
| 0.4595 | 1400 | 0.0008 | - |
| 0.4759 | 1450 | 0.0007 | - |
| 0.4923 | 1500 | 0.0004 | - |
| 0.5087 | 1550 | 0.0005 | - |
| 0.5251 | 1600 | 0.0007 | - |
| 0.5415 | 1650 | 0.0005 | - |
| 0.5579 | 1700 | 0.0005 | - |
| 0.5743 | 1750 | 0.0004 | - |
| 0.5907 | 1800 | 0.0009 | - |
| 0.6072 | 1850 | 0.0025 | - |
| 0.6236 | 1900 | 0.0003 | - |
| 0.6400 | 1950 | 0.0023 | - |
| 0.6564 | 2000 | 0.0004 | - |
| 0.6728 | 2050 | 0.0045 | - |
| 0.6892 | 2100 | 0.0005 | - |
| 0.7056 | 2150 | 0.0109 | - |
| 0.7220 | 2200 | 0.0003 | - |
| 0.7384 | 2250 | 0.0021 | - |
| 0.7548 | 2300 | 0.0005 | - |
| 0.7713 | 2350 | 0.0004 | - |
| 0.7877 | 2400 | 0.0118 | - |
| 0.8041 | 2450 | 0.0003 | - |
| 0.8205 | 2500 | 0.0003 | - |
| 0.8369 | 2550 | 0.0126 | - |
| 0.8533 | 2600 | 0.0004 | - |
| 0.8697 | 2650 | 0.0162 | - |
| 0.8861 | 2700 | 0.0003 | - |
| 0.9025 | 2750 | 0.0004 | - |
| 0.9189 | 2800 | 0.0005 | - |
| 0.9353 | 2850 | 0.0004 | - |
| 0.9518 | 2900 | 0.0032 | - |
| 0.9682 | 2950 | 0.0003 | - |
| 0.9846 | 3000 | 0.0004 | - |
| 1.0010 | 3050 | 0.0003 | - |
| 1.0174 | 3100 | 0.0003 | - |
| 1.0338 | 3150 | 0.0019 | - |
| 1.0502 | 3200 | 0.0194 | - |
| 1.0666 | 3250 | 0.0003 | - |
| 1.0830 | 3300 | 0.0004 | - |
| 1.0994 | 3350 | 0.01 | - |
| 1.1159 | 3400 | 0.0002 | - |
| 1.1323 | 3450 | 0.0003 | - |
| 1.1487 | 3500 | 0.0004 | - |
| 1.1651 | 3550 | 0.0004 | - |
| 1.1815 | 3600 | 0.0002 | - |
| 1.1979 | 3650 | 0.0005 | - |
| 1.2143 | 3700 | 0.0002 | - |
| 1.2307 | 3750 | 0.0019 | - |
| 1.2471 | 3800 | 0.0003 | - |
| 1.2635 | 3850 | 0.0048 | - |
| 1.2799 | 3900 | 0.013 | - |
| 1.2964 | 3950 | 0.0031 | - |
| 1.3128 | 4000 | 0.0002 | - |
| 1.3292 | 4050 | 0.0024 | - |
| 1.3456 | 4100 | 0.0002 | - |
| 1.3620 | 4150 | 0.0003 | - |
| 1.3784 | 4200 | 0.0003 | - |
| 1.3948 | 4250 | 0.0002 | - |
| 1.4112 | 4300 | 0.003 | - |
| 1.4276 | 4350 | 0.0002 | - |
| 1.4440 | 4400 | 0.0002 | - |
| 1.4605 | 4450 | 0.0022 | - |
| 1.4769 | 4500 | 0.0002 | - |
| 1.4933 | 4550 | 0.0078 | - |
| 1.5097 | 4600 | 0.0027 | - |
| 1.5261 | 4650 | 0.0002 | - |
| 1.5425 | 4700 | 0.0002 | - |
| 1.5589 | 4750 | 0.0002 | - |
| 1.5753 | 4800 | 0.0002 | - |
| 1.5917 | 4850 | 0.0002 | - |
| 1.6081 | 4900 | 0.0118 | - |
| 1.6245 | 4950 | 0.0002 | - |
| 1.6410 | 5000 | 0.0002 | - |
| 1.6574 | 5050 | 0.0003 | - |
| 1.6738 | 5100 | 0.0003 | - |
| 1.6902 | 5150 | 0.0068 | - |
| 1.7066 | 5200 | 0.0003 | - |
| 1.7230 | 5250 | 0.0112 | - |
| 1.7394 | 5300 | 0.0002 | - |
| 1.7558 | 5350 | 0.0002 | - |
| 1.7722 | 5400 | 0.0003 | - |
| 1.7886 | 5450 | 0.0002 | - |
| 1.8051 | 5500 | 0.0002 | - |
| 1.8215 | 5550 | 0.0002 | - |
| 1.8379 | 5600 | 0.0002 | - |
| 1.8543 | 5650 | 0.0003 | - |
| 1.8707 | 5700 | 0.0047 | - |
| 1.8871 | 5750 | 0.0121 | - |
| 1.9035 | 5800 | 0.0003 | - |
| 1.9199 | 5850 | 0.013 | - |
| 1.9363 | 5900 | 0.005 | - |
| 1.9527 | 5950 | 0.0001 | - |
| 1.9691 | 6000 | 0.0002 | - |
| 1.9856 | 6050 | 0.0003 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'GATE ENTRY PASS\n\nae Hirakud Power - 363 12\noe pame:- (44) al: ‘bx stavelen SI.No. :- / ‘5\n\nMe sane: bey i td Bate O$ 0S) 22\npals ony poe a 7 << Fe Shift :-\n\nApproved Man Power :- feel aes Pass No. :-\n\n \n\n \n\npat\n\npetan & sé dutity\n\x0c'</li><li>' \n\nLEPTH 2L09.49 Ling>@\n\nxP y\nALTAD. Catiima = —P\nDATE SEE COTUNG —RWOD_\n\n§ 26.09.17 ODM: + METAL PAD Cot ny\n\n74..9 4° o2 Wm. ‘+ -d- _ A:\n\naa 09 09. AZ OD. aw "le de. ——\\e-\n\npam 29-99-19 _—Surhoy\n\nBz. 09-19 01 we d~ etre < | <tr}\n\nme 1.x. 47:02 ule. TT par\n\nfuntig ® 02.X- 44 “ i\n\n63.X- 14 oD wm. +- metre pe ( 4d\n\naif oe AY OD wm aay Ae\n\n1 19 eit bc cols "\n\n49 02 Wem mety (4 Ue\n\nEe pdm\n\na | ct \\. 7 02 |NIm. +— (et ou (ony\n; 10° x. 44 ODUMer A _-A- —AAe\n1.X° 14 0% Vim T —A ae\nAp. .44 62MM. f— 4 he\n\n45 ..% 44 oS Une «\n\n14. 44 02 Wem he mete be cH\n15 -X. 44 02 pw Te A\n\n \n\n>. tee Aad el Al fh Saale ARR a * “er a\nEe ak Le\n\n|\n\n3 X tor Kw po fe of\n\nja arama inclined\nf\n\nAG s-44 02 Iam +e mt ro cots 4\nAp 44 0D Mm ro 4\n\n16°Yi44 62 Ww 7] 5\n\n{4 “ye O?_ Yirw * A a?\n\n-2o-x 4 | Sve oy\n\noy 84 oD we fe ot A ct\n\n1\\22.X 41402 Wm ge\n\n23x44 OD Vem, cp ode —.\n\n4.x 4 02 Wm a, -. Ae\n\n \n\n \n\n \n\nyy Uy yy\n\nX “3\n\x0c'</li><li>'Project Overview\n\n> 3 Audit engineers\n> 15+ Audit tools |\n\n|\n\n_» Consumed 50+ days with 750+ man hours |\n\n \n\n \n\n \n\nAt eh ae Project End\n\nee Nu 7 evan | 4 4 Nov 2021\n\nMeasurements\n\n= Compressor Operating Efficiency\n= Dryer Operating Efficiency\n\n= Pressure Drop analysis\n\n=" Compressor running pattern\n\n3/18/2022\nCopyright @ Atlas Copco Compressor Technique 2021\n\nReport\nDiscussion\n\n15" Dec-2021\n\n \n\nHindalco Industries Ltd\n\nMr. Rajiv Ranjan\n\nMr. Suraj Suna\n\nMr. Arvind Modi\n\nMiss Sujata Biswal\n\nMr. Prashant Swain\n\nMr. Sudhanshu Bhushan Mishra\nMiss Anjali Kumari\n\nOOoooooo0oo\n\nMr Mrutyunjaya Panda\n\nAtlas Copco India Ltd\nMr. Anil Pardhi\n\nMr. Bapan Ghosh\nMr. Bachhu Biswas\nMr. Nitesh Kumar Pal\nMr. Koustav Ghosal\nPanchu Saha\n\nOOooooo\n\nPremnath Sau\n\nShanks te afl the invelved team\nmembers fer making this he appen\n\nfitlas Copco\n\x0c'</li></ul> |
| 2 | <ul><li>" \n\nSAMALESWARI CONSTRUCTION\n\nAT-BUDAKATA , PO- GADAMUNDA\nHIRAKUD, DIST: SAMBALPUR\ndetails of receipient (billed to )\nHINDALCO INDUSTRIES LTD.\nHIRAKUD POWER ,\n\n \n \n\n \n\nMOBILE NO. : 9178245293\n\n \n \n \n \n\n \n\n \n \n\n \n\nTAX INVOICE\n(ISSUEDUNDER RULE 46 OF GST/OGST RULE,2017)\n\n \n \n \n \n\nSAMBALPUR -768016\n\n \n\nINVOICE NO. SC/AP/772/2020\n\n \n \n \n \n\n \n \n \n\n21\n21AAACH1201R1ZZ\nAAACH1201R\nDETAILS OF COSIGNEE (SHIPPED }\nHINDAL CO INDUSTRIES LTD\nHIRAKUD POWER\n\n|AT/PO HIRAKUD\n\nDIST SAMBALPUR-768016, ODISHA .\n\nINVOICE DATE 17.08.2020\n\n \n \n\nPO. NO. pypo/sav/1920/ ae 4:\n\n \n \n \n\n \n \n \n\n \n\nPO DATE\n\n \n \n\n \n\nDATE OF ISSUE\n\n \n \n \n\n \n \n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\nPERIOD OF SERVICE 01.07.2020 to 31.07.2020\nSL HSN/SAC |\nNO. SERVICE DESCRIPTION CODE |QuANTITy| uom RATE | TAXABLE VA\n1]SC733760 |\ni\n| | i\nASH TRANSPORTATION FOR\n| UTILISATION IN THE ROAD | i | i\n' CONSTRUCTION DISTANCE-TO & FRO | i i |\n35 KM TO 40 KM\n9967 363 MT 135.32 49,121.16\n.\n| |\n_— eotes\nTOTAL 49,127.16 |\nWHETHER THE TAX IXS PAYABLE ON REVERSE CHARGE BASIS :- ves/**\nADD: CGST I\nADD: SGST\n[ADD:AIGST\nINVOICE TOTAL it 49,121.16\nTOTAL INVOICE VALUE (iN FIGURE ) 49,121.16 |\nTOTAL INVOICE VALUE (IN WORDS ) FOURTYNINE THOUSAND ONE HUNDRED TWENTYONE ONLY\nCERTIFIED THAT THE PARTICULARS GIVEN ABOVE ARE TRUE AND CORRECT AND THE AMOUNT INDICATED:\nJREPRESENTS THE PRICE ACTUALLY CHARGED BY US AND THERE IS NO FLOW OS ADDITIAL CONDITION DIRECTLY OR INDIRECTLY FROM THE BUYER.\nSTATE CODE 21 FOR SAMALESWARI CONSTRUCTION\nOUR GST NO 21AAMFS1669R1ZI\nOUR PAN AAMFSi669R =\n(E&0.E.) AUTHORISED SIGN.\nLO] - pe216° - weed eh WN\n\np770f arn [1029\n\x0c"</li><li>'| St. Mansur Ai Mo, 0971551785\n\n \n \n\n \n \n \n \n\n \n \n \n \n\n \n \n \n\n° ee, 09861521380\nAt- Hirakud, Dist- Sambalpur, Odisha-768016\n(ORIGINAL FOR RECEIPIENT)}\nDETAILS OF RECEIPIENTS (BILLED TO) TAX INVOICE\nHINDALCO INDUSTRIES LIMITED ISSUED UNDER RULE 46 OF CGST / OGST RULES, 2017) |\nHIRAKUD POWER\nAT/PO- HIRAKUD, INVOICE NO: SCM -21F/19\n\nDIST- SAMBALPUR-768016, ODISHA\n\n \n \n\nINVOICE DATE: | - 6-19\nPONO.: P/Po] SRu/1819/e454\n\n \n \n \n \n \n\n \n \n\n \n \n \n \n\nDETAILS OF CONSIGNEE (SHIPPED TO) PODATE: 22-17\\4\nHINDALCO INDUSTRIES LIMITED\nHIRAKUD POWER\n\nAT/PO- HIRAKUD, si\nglen PERIOD OF SERVICE: 1G -S-\\4 40 1S-G-l9\n\nSerene eee Pere fel ee\n\nmoy [243:9t [SFI AG\n12321 °>°\n\nas. [bs qq2 88\n‘ : $296\n\n \n \n\nDATE OF ISSUE :\n\n \n \n\n \n\n \n\nof Ries, As ear be Ghredave ef Clot\n\nS92. Wer ,\n\n \n \n\n \n\n \n \n \n\nTotal Invoice value (In Figures)\nTotal Invoice value (in words) Rupees TW) dy 4OO AKA Paw he\n\n \n\nCY ERYK AIO AGA EAD Ate\nCERTIFIED THAT THE PARTICULARS GIVEN ABOVE ARE TRUE AND CORRECT AND THE AMOUNT INDICATED REPRESENTS THE PRICE ACTUALLY\nCHARGED BY US AND IS NO FLOW OS ADDITIAL CONDITION DIRECTLY OR INDIRECTLY FROM THE BUYER.\n\nFor Sk. Mansur Ali\n\nMf\nAuthorized Signatory\n\n \n\n \n \n\n \n \n\nOur GSTIN : 2IALGPA5G675EIZI\n\n \n \n\nOur PAN: ALGPASG75E\n\n \n\x0c'</li><li>'ORIGINAL FOR RECIPIENT\n\n \n\nGST INVOICE\n\nDETAILS OF RECIPIENT\nM/s: HINDALCO INDUSTRIES LTD\n\n \n \n \n\n \n \n\nSUPPLIER DETAILS\nKRISHNA POWER CONSTRUCTION (P) LTD.\n\nINVOICE DETAILS\nINVOICE NO : KPC/GST-107/19-20\n\n \n \n\n \n\n \n\n \n \n \n\nAT - ANANDANAGAR, PO - HAKIMPADA, HIRAKUD POWER DATE OF ISSUE : 01/03/2020\n= CPLODIST - ANGUL - 759143, ODISHA AY - HIRAKUD, DIST- SAMBALPUR-768016, ODISHA PO NO: P/PO/SRV/1920/0396\nGSTIN- 21AABCK8889M2ZW GSTIN- 21AAACH1201R1ZZ PO DATE : 13/08/2019\n\n \n \n \n\n \n \n\n \n \n\nCIN- U452030R1995PTC003811\n\n \n \n \n\n \n \n\n \n \n\nName of the work: Bill for Off line Maintenance of all 132KV & 220KV Switchyard Equipment, Insulators and Overhead conductors for the month of FEBRUARY-2020.\nTaxable value\n\n. : HSN/ Amount\nDescription of Service Qty.\n\n01 01 |Off line maintenance for 132KV & 220KV\nSwitchyard Equipments, Insulators and 9954 | MON 0.8816 150,000.00 132,240.00 | 9.00% 11,901.60 156,043.20\nOverhead Conductors.\n\na)|For Supervisor (21 days Present) in\nFEBRUARY - 2020 (46,000/25)x21days=\n38,640.00\n\nb)|For Fitter -I (22 days Present) in FEBRUARY -\n2020 (26,000/25)x22days= 22,880.00\n\nc)|For Fitter -I| (21 days Present) in FEBRUARY\n2020 (26,000/25)x21days= 21,840.00\n\nd)}For Electrician - | (22 days Present) in\nFEBRUARY-2020 (26,000/25)x22days=\n22,880.00\n\ne)|For Electrician - 11 ( Full Present) in\nFEBRUARY-2020= 26,000.00\n\nTOTAL 132,240.00 | |\n132,240.00 | - |\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n\nAmount\n(In Rs.)\n\n \n \n\nTotal Amount\n(In Rs.)\n\n \n \n\n \n \n \n\n \n\n11,901.60\n\n \n\n \n \n\n11,901.60 11,901.60 | 156,043.20\n11,902.00 | - | 11,902.00] 156,044.00\n\n(Rupees One Lakh Fifty Six Thousand Forty Four Only)\n\nFor KRISHNA POWER CONSTRUCT]\nE. & O.E.\n\n \n \n\x0c'</li></ul> |
| 1 | <ul><li>' \n\n \n\nGSTIN: 21AAACH1201R1ZZ\nDUSTRIES LIMITED\nHINDALCO IN eee .\nHIRAKUD POWER, HIRAKUD-768 016.DIST.SAMBALPUR (ODISHA) GST Rangeldivision: Sambelpur\nPHONE: 0663-2481365, FAX: 0663-2481342 GST Commissionerate -Cuttack\nPURCHASE ORDER\n‘AMENOMENT Z\nVendor Code: J123 P.O/No: P/PO/SRV/1920/1161 Date: 27-MAR-2020\nMis JAIDURGA CONSTRUCTION Rete ee Dater04-MAY-2020\n‘Order Type: PURCHASE ORDER\nBUDHAKATA, Effective From 01/03/2020 To 31/03/2021\nGADMUNDA Price Basis\nHIRAKUD i a ;\nMB, ISSA, 768011 ransportation arrangement\nSEA PUR OR SSN NOR roomie Ship to Location HIRAKUD - POWER\nEmail: [email protected] Carrier\nFax:() Currency 2 INR\nContact: DILIP PRADHAN () 9438452293 Hindalco Contact Person: SIDDHARTH KUNDA,\nGSTIN: 21AACFJ4294P122 —_State:21- Odisha Email of Contact Person: [email protected]\nRef: ASH TRANSPORTATION TO VARIOUS BRICKS MANUFACTURING PLANT\nOrder Unit of Rate/Unit Value\nSl Stock No. & Descfiption ‘Quantity Measurement (Rs) (Rs)\n1 sera’ HSNISAC: 3600.00 MT 126.00" 4536000.00\nASH TRANSPORTATION TO VARIOUS BRICKS MANUFACTURING PLANT CCST [email protected]% 113400.\nDISTANCE TO & FRO 26KM TO 40KM Set Tego ve\nCO case Ss Gaaey SGST [email protected]% 113400.00\n36000.000 Need By: 31-MAR-2021 RCM CGST Tax@25% — -113400.00\n‘Supplier tom. DR RS.67 164. TR 27.03.20 RCM SGST [email protected]% ~113400.00\ner tem Total: —_-4536000.00\n2 _Sc1750_ HSN/SAC: 200.000 MT _7200_¥~ 144000.00\nASH TRANSPORTATION TO VARIOUS BRICKS MANUFACTURING PLANT 1 ~ 3600.\nDISTANCE TO & FRO 11KM TO 15KM cease on\n= ees SGST [email protected]% 3600.00\n200,000 ‘Need By: 31-MAR-2021 RCM CGST [email protected]% -3600.00\nSupplier tem. D.R.RS.67.16/ TR 27 03 20 RCM SGST [email protected]% -3600.00\ntem Tota: 144000.00\n3 sciTsa HSNISAC: 2000.00 MT 96.00 192000.00\nCC Code Quantity SGST [email protected]% 4800.00\n200.000 Need By: 31-MAR-2021 | RCM CGST [email protected]% 4800.00\n‘Supplier tem: D.R RS.67.AG/LTR 27.03.20 RCM SGST [email protected]% ~4800.00\noo lem Totat: |” 192000.00\n4 sci74s < HSNISAC: 10000.000 MT 56.00 “ —560000.00\nTRANSPORTATION TO VARIOUS BRICKS MANUFACTURING PLANT ae\nDISTANCE TO & FRO 6KM TO 10KM CT ISEESS pictase\ncG.0ode ae Caatiy SGST [email protected]% 4000.00\n10000.000 Need By: 31-MAR-2021 RCM CGST [email protected]% -14000.00\nSupplier tem: O.R. RS 67 18/.TR 27.03.20 ROM SGST [email protected]% -14000.00\n‘tem Tota: $6000.00\n5 _scizg eal HSNISAC: 1500.000 MT 100.00 ~~ 150000.00\nASH TRANSPORTATION TO VARIOUS BRICKS MANUFACTURING PLANT <G25%\nDISTANCE TO & FRO 26KM TO 30KM CeStrmee Same\nCC Code ———"" Quantity SGST [email protected]% 3750.00\n+500 000 Need By: 31-MAR-2021 ROM CGST [email protected]% “3750.00\n‘Supplier Hem D.R.RS 87 16/1 TR 27.03.20 RCM SGST [email protected]% 3750.00\ntem Total: +150000.00\nRegd. Office:\nHINDALCO INDUSTRIES LIMITED\nAhura Centre. 1st Floor, B Wing, Mehakali Caves Road Andhon(East), Mumbai 400083, india, Page 10813\n\nTel No: 91 22 6691 7000. Email: [email protected], Website: werw.adityabiria com\nCorporate identity No: L27020MH1958PLC011238\n\n \n\x0c'</li><li>"_HINDALCO INDUSTRIES LIMITEL\n\n \n\nHIRAKUD POWER, HIRAKUD-766 016,.DIST.-SAMBALPUR (ODISHA) aur —\nPHONE: 6563-2481365, FAX: 0863-2481342 GST Commissionerate -Cuttack\nPURCHASE ORDER\nAMENDMENT\nVendor Code: BS75 P.O/No: P/POISRV/21Z2/0085 Date: 05-MAY-2021\nM/s BGM INFRACORE Revise No; 2 eae eae\n\nOrder Type: PURCHASE ORDER\n\nAT- HOUSE NO.2008 Effective From 01/05/2021 To 31/03/2022\n\nOPPO. VARUN PLAZA\n\nPrice Basis\nAINTHAPALI ROAD .\nSAMBALPUR, ODISHA. IN 768004 . Transportation arrangement :\nShip to Location > HIRAKUD - POWER\nEmail: [email protected] Carrier :\nFax: () Currency > INR\nContact: SIDHARTH SANKAR MISHRA (+91) 8439209679 Hindaico Contact Person :SIDDHARTH KUNDA,\nGSTIN: 21CONPM52S4KiZG © State:27 - Odisha Emait of Contact Person: [email protected]\na\nOrder Unit of Rate/Unit Value\nSi. Stock No. & Description Quantity Measurement {Rs.) {Rs.)\n\n \n\n \n\n \n\n \n \n\nSCOG1S HSN/SAC: 2831.00 141550.00\n\n \n\n \n\n \n\nP.C.C{1:3:6)WITH 40 M.M.DOWN MATERIAL INCLUDING ALL MATERIAL , CGST Tax@9% 42739.50\nSCAFFOLDING BUT EXCLUDING ONLY CEMENT\nn6 Gaak Gentile SGST Tax@o% 12739.50\n50.000 Need By: 31-MAR-2022 item Total: 467029.00\n7 scoogo HSNISAC: 50.000 CUM 114.00 5700.00\nEARTHWORK IN BACKFILLING INCLUDING WATERING AND COMPACTION AT CGST Tax@e% 513.00\nALL ELEVATIONS AROUND FOUNDATIONS, PITS, AND IN PLINTH FILLING AND\nAREA FILLING AS PER SPECIFICATIONS AND DRAWINGS AND AS DIRECTED SGST Tax@9% 513.00\nBY THE ENGINEER WITH SELECTED SPOIL OBTAINED FROM THE STACKS :\nWITHIN A LEAD OF 160 METRES (EACH LAYER OF FILLING TO BE ttem Total: 8726.00\nCOMPACTED TO ACHIEVE THE LEVEL OF COMPACTION REQUIREMENTS AS\nSPECIFIED IN THE SPECIFICATION.)\nCC Code Quantity\n50.000 Need By: 31-MAR-2022\n8 scos72 HSN/SAC: 100.000 CUM 726.00 72600.00\nPROVIDING SAND FILLING IN LAYERS iN FOUNDATION / TRENCHES AND CGST Tax@o% 6534.00\nCOMPACTION BY WATER AND RAMMING BE Sean es34.00\nCC Code Quantity me ’\n400.000 Need By: 31-MAR-2022 item Total: 85668.00\n9 $C1198 HSN/SAC: + 25.000 MTR 75.00 1875.00\nSUPPLYING AND FILLING THE EXPANSION JOINTS WITH 12 MM THERMOCGL CGST Tex@9% 468.75\nSHEET, TOP PORTION (12MM X 12MM) WILL BE FILLED WITH BITUMEN/OTHER ne xeuee\nJOINT SEALING COMPOUND INCLUDING THOROUGH CLEANING OF THE SGST Tax@9 .\nJOINTS, AS PER MANUFACTURER'S SPECIFICATION AND AS DIRECTED BY —_—\nTHE ENGINEER. ttem Totak 2212.50\nCC Code Quantity\n25.000 Need By: 34-MAR-2022\n10 $C1195 HSNISAC: 50.000 CUM 610.00 30500.00\nDISMANTLING AND DEMOLISHING EXISTING STRUCTURES AT ALL LEVELS CGST Tax@9% 2745.00\nINCLUDING STACKING THE MATERIALS AS DIRECTED BY ENGINEER WITHIN A -y assis\nLEAD OF 1 KM. A) BRICK WORK SGST Tax@9% 45.\nCC Code Quantity shane renee re nore lien\ni an Necd By DNSs item Total: 35990.00\n\ni i ll\nRegd. Office: e\nHINDALCO INDUSTRIES LIMITED Pane} of 49\n\nAhura Centre, 1st Floor, B Wing, Mahakali Caves Road Andheri(East), Mumbai 400093, India, age\n\nTe! Na: 91 22 6691 7000, Email: [email protected], Website: www.adityabirla.com\n\nCorporate Identity No: L27020MH1958PLC011238\n\x0c"</li><li>"GSTIN: 21AAACH1201R12Z\nHINDALCO INDUSTRIES LIMITED State Code: 21 - Odisha\n\nGST Range/sivision:\n\n \n\n_ HIRAKUD POWER, HIRAKUD-766 016,DISTSAMBALPUR (ODISHA)\n\n \n\n \n\n: Hl Sambalpur 5\nPHONE: 0663-2481365, FAX: 0663-2481342 GST. Commissionorate ¥cuitack\nPURCHASE ORDER\nVendor Code: U138 P.OINo: P/PO/SRV/2122/0782 Date: 31-DEC-2021\nM/s UTKAL CONSTRUCTION Revise No: Date\n(Order Type: PURCHASE ORDER\nAT-MALIPADA Effective From 01/01/2022 To 3112/2022\nPO-HIRAKUD pee oe\nSAMBALPUR, ORISSA, IN 768 016 Fransportation arrangement\n¥ Ship to Location : HIRAKUD - POWER\nEmail: [email protected] Carrier\nFax: () Currency INR\nContact: ANTARYAM! PANDA (+91) 9861236394 Hindalco Contact Person: SIODHARTH KUNDA,\nGSTIN: 21ANPPP5428H1ZU — State:21 - Odisha Email of Contact Person: sidharth [email protected]\nRef, MANPOWER LIS 7 ia\nOrder Unit of Rate/Unit Value\nSl. Stock No. & Description Quantity Measurement (Rs) (Rs.)\n1 U9tt1 16010026 HSNISAC: 365.000 MDY 4252.44” — 1§52140.60\nPROVIDE: HOUSEKEEPING; TASK TYPE: CLEANING, SCOPE: CLEANING OF Tax 39692.6:\nUNIT 4 POWER HOUSE BUILDING FROM TOP TO BOTTOM INCLUDING SWITCH Sees ween es\nYADR, TOOLS PROVIDED BY: HIL, MATERIAL PROVIDE BY. HIL SGST Tax@9% 139692.65 /\nce Code Quantity SSS oe\nae, Need By: 31-DEC-2022 tem Total ——-1831525.90\nSiento $18 27S. / e\n\nPayment Terms: 15 Days Credit 1831525.90\n\n \n\nBilling Address: Hindalco Industries Ltd, Hirakud Power, Hirakud - 768016, Sambalpur (ODIsHA)\n(Hard copy of bills to be forwarded at shipping address)\nShipping address : Hindalco Industries Ltd, Hirakud Power, Hirakud - 768046, Sambalpur (ODISHA)\n\nNotes\n\n1 THE ABOVE PRICE IS BASED ON ATTACHED LTS CALCULATION\n2. AS PER LTS, WAGES WILL BE PAID FOR NATIONAL HOLIDAYS & FESTIVAL HOLIDAYS, HOWEVER INCASE A LABOUR IS REQUIRED TO PERFORM\n\nBON TOTAL BILL VALUE WILL BE RETAINED BY THE COMPANY EVERY MONTH AND Wit BE RELEASED ANNUALLY ONLY AFTER PAYING\nBONUS & OTHER DUES TO THE LABOURERS ENGAGED 8Y YOU\n\n7.IT IS MANDATORY TO PUT GST NUMBER IN THE INVOICE\n\n& MANPOWER SHOULD BE DEPLOYED AS PER REQUIREMENT OF THE MANAGEMENT\n\n9, LABOUR CESS 1% WILL BE PAID EXTRA, IF APPLICABLE\n\nOTHER TERMS.-\n\n1. JOB SHALL BE EXECUTED AS PER THE SCHEDULE ISSUED BY THE DEPARTMENT (TO BE COLLECTED IN ADVANCE).\n\na IOU/ CONTRACTOR'S SUPERVISOR SHALL MEET THE DEPARTMENTAL SUPERVISOR ce REGULAR INTERVAL DECIDED BY THE DEPARTMENT TO\nCONFIRM CHANGE IN SCHEDULE, IF ANY.\n\n3. YOU SHALL SUBMIT THE DAILY REPORT AFTER COMPLETION OF JOB IN THE PRESCRIBED FORMAT AS DIRECTED BY THE DEPARTMENT.\n\n& ROAXTRA CLAIM TO BE MADE FOR CHANGE IN SCHEDULE FOR SIMILAR TYPE OF JOB\n\nS_RGFUSAL OF ANY JOB SIMILAR TO THE JOB ENLISTED IN THE ORDER OR FAILURE TO EXECUTE THE JOB AS PER SCHEDULE WILL BE LIABLE FOR\n\n \n\nRegd. Office:\nHINDALCO INDUSTRIES LIMITED\n\n‘Ahura Centre, ‘st Floor, B Wing, Mahakali Caves Roag Ancheri(East). Mumbai 400093, incia, Page 1 of 13\nTel No: 91 22 8691 7000, Email: [email protected], Websile: www adityabuta cont\n\nCorporate Identity No: L27020MH1958PLC011238\n\n \n\n \n\nKw\n\x0c"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9977 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Gopal2002/SERVICE_LARGE_MODEL_ZEON")
# Run inference
preds = model("
TOTAL
11
- wl et
SUPERVI
SOR
7 ce
nly
AIN|A ale
Sale
lale ld
So
:
9 wij im
aes 3513
sIB|e
alg
alg
NTN
a 2 3 ; 3
gle
o
ri
")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 1 | 225.8451 | 1106 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 267 |
| 1 | 74 |
| 2 | 85 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0003 | 1 | 0.3001 | - |
| 0.0164 | 50 | 0.2586 | - |
| 0.0328 | 100 | 0.1809 | - |
| 0.0492 | 150 | 0.0534 | - |
| 0.0656 | 200 | 0.0285 | - |
| 0.0820 | 250 | 0.0144 | - |
| 0.0985 | 300 | 0.0045 | - |
| 0.1149 | 350 | 0.0281 | - |
| 0.1313 | 400 | 0.0432 | - |
| 0.1477 | 450 | 0.0045 | - |
| 0.1641 | 500 | 0.0023 | - |
| 0.1805 | 550 | 0.0022 | - |
| 0.1969 | 600 | 0.0011 | - |
| 0.2133 | 650 | 0.0008 | - |
| 0.2297 | 700 | 0.0226 | - |
| 0.2461 | 750 | 0.0009 | - |
| 0.2626 | 800 | 0.0008 | - |
| 0.2790 | 850 | 0.001 | - |
| 0.2954 | 900 | 0.001 | - |
| 0.3118 | 950 | 0.001 | - |
| 0.3282 | 1000 | 0.0007 | - |
| 0.3446 | 1050 | 0.0012 | - |
| 0.3610 | 1100 | 0.0008 | - |
| 0.3774 | 1150 | 0.0008 | - |
| 0.3938 | 1200 | 0.0008 | - |
| 0.4102 | 1250 | 0.0034 | - |
| 0.4266 | 1300 | 0.0007 | - |
| 0.4431 | 1350 | 0.0007 | - |
| 0.4595 | 1400 | 0.0008 | - |
| 0.4759 | 1450 | 0.0007 | - |
| 0.4923 | 1500 | 0.0004 | - |
| 0.5087 | 1550 | 0.0005 | - |
| 0.5251 | 1600 | 0.0007 | - |
| 0.5415 | 1650 | 0.0005 | - |
| 0.5579 | 1700 | 0.0005 | - |
| 0.5743 | 1750 | 0.0004 | - |
| 0.5907 | 1800 | 0.0009 | - |
| 0.6072 | 1850 | 0.0025 | - |
| 0.6236 | 1900 | 0.0003 | - |
| 0.6400 | 1950 | 0.0023 | - |
| 0.6564 | 2000 | 0.0004 | - |
| 0.6728 | 2050 | 0.0045 | - |
| 0.6892 | 2100 | 0.0005 | - |
| 0.7056 | 2150 | 0.0109 | - |
| 0.7220 | 2200 | 0.0003 | - |
| 0.7384 | 2250 | 0.0021 | - |
| 0.7548 | 2300 | 0.0005 | - |
| 0.7713 | 2350 | 0.0004 | - |
| 0.7877 | 2400 | 0.0118 | - |
| 0.8041 | 2450 | 0.0003 | - |
| 0.8205 | 2500 | 0.0003 | - |
| 0.8369 | 2550 | 0.0126 | - |
| 0.8533 | 2600 | 0.0004 | - |
| 0.8697 | 2650 | 0.0162 | - |
| 0.8861 | 2700 | 0.0003 | - |
| 0.9025 | 2750 | 0.0004 | - |
| 0.9189 | 2800 | 0.0005 | - |
| 0.9353 | 2850 | 0.0004 | - |
| 0.9518 | 2900 | 0.0032 | - |
| 0.9682 | 2950 | 0.0003 | - |
| 0.9846 | 3000 | 0.0004 | - |
| 1.0010 | 3050 | 0.0003 | - |
| 1.0174 | 3100 | 0.0003 | - |
| 1.0338 | 3150 | 0.0019 | - |
| 1.0502 | 3200 | 0.0194 | - |
| 1.0666 | 3250 | 0.0003 | - |
| 1.0830 | 3300 | 0.0004 | - |
| 1.0994 | 3350 | 0.01 | - |
| 1.1159 | 3400 | 0.0002 | - |
| 1.1323 | 3450 | 0.0003 | - |
| 1.1487 | 3500 | 0.0004 | - |
| 1.1651 | 3550 | 0.0004 | - |
| 1.1815 | 3600 | 0.0002 | - |
| 1.1979 | 3650 | 0.0005 | - |
| 1.2143 | 3700 | 0.0002 | - |
| 1.2307 | 3750 | 0.0019 | - |
| 1.2471 | 3800 | 0.0003 | - |
| 1.2635 | 3850 | 0.0048 | - |
| 1.2799 | 3900 | 0.013 | - |
| 1.2964 | 3950 | 0.0031 | - |
| 1.3128 | 4000 | 0.0002 | - |
| 1.3292 | 4050 | 0.0024 | - |
| 1.3456 | 4100 | 0.0002 | - |
| 1.3620 | 4150 | 0.0003 | - |
| 1.3784 | 4200 | 0.0003 | - |
| 1.3948 | 4250 | 0.0002 | - |
| 1.4112 | 4300 | 0.003 | - |
| 1.4276 | 4350 | 0.0002 | - |
| 1.4440 | 4400 | 0.0002 | - |
| 1.4605 | 4450 | 0.0022 | - |
| 1.4769 | 4500 | 0.0002 | - |
| 1.4933 | 4550 | 0.0078 | - |
| 1.5097 | 4600 | 0.0027 | - |
| 1.5261 | 4650 | 0.0002 | - |
| 1.5425 | 4700 | 0.0002 | - |
| 1.5589 | 4750 | 0.0002 | - |
| 1.5753 | 4800 | 0.0002 | - |
| 1.5917 | 4850 | 0.0002 | - |
| 1.6081 | 4900 | 0.0118 | - |
| 1.6245 | 4950 | 0.0002 | - |
| 1.6410 | 5000 | 0.0002 | - |
| 1.6574 | 5050 | 0.0003 | - |
| 1.6738 | 5100 | 0.0003 | - |
| 1.6902 | 5150 | 0.0068 | - |
| 1.7066 | 5200 | 0.0003 | - |
| 1.7230 | 5250 | 0.0112 | - |
| 1.7394 | 5300 | 0.0002 | - |
| 1.7558 | 5350 | 0.0002 | - |
| 1.7722 | 5400 | 0.0003 | - |
| 1.7886 | 5450 | 0.0002 | - |
| 1.8051 | 5500 | 0.0002 | - |
| 1.8215 | 5550 | 0.0002 | - |
| 1.8379 | 5600 | 0.0002 | - |
| 1.8543 | 5650 | 0.0003 | - |
| 1.8707 | 5700 | 0.0047 | - |
| 1.8871 | 5750 | 0.0121 | - |
| 1.9035 | 5800 | 0.0003 | - |
| 1.9199 | 5850 | 0.013 | - |
| 1.9363 | 5900 | 0.005 | - |
| 1.9527 | 5950 | 0.0001 | - |
| 1.9691 | 6000 | 0.0002 | - |
| 1.9856 | 6050 | 0.0003 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-small-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "ATED UIA IY OO ATE\nALVA YS TAL AIR,\n\nener oS. : Set eran one\n[fhe 1ST AD CATT\n\nZee cae FF +47 wer: 2.3 te eet wae\nasst : ie ix ® we! {rags\nSeShee Biss Mae 76 ths) a0 art SeRlee pt? Seas\n\nwe Gh caste: Geass seen Ev cy, te 2 ees ergr eo. pe Ueesnee earn guest #2. ps\nui AS, SER REG REM Re S As ee ST es: TTR\n\n \n\nos 4 i <i 28 = peed rd\nwis a, O8.2 Fi 44, we weg Peis BQ Tos Ve Ae ee\n¢ ” z ss\nG2 ly whi ls = ‘ « te\n\n \n\n \n\n \n\nos (sz “ he i z : ‘a att =\ndey ax % Ms z. SoLeY She « «8 wis ‘ Ls\n. Sc oot - a - az : ide\n. ‘ « - o. > a\" .\n7 ee we be a es t~ + we pa bs 4 Sage sh 2\n‘ 2 Fos =a ; ; <\nS| yeu c2a: fe 2 Beye Sy nee one ‘\nns - y A : ™ : ‘ -\nEF < af Gulte ey wat a .\n. et “ 7 4 4%\nos ma ‘os $ ~ Sa me *t i-we *\ncyvey oe / oa it i ' * s : as a. A , ite\nAS “heed oo = Dox eee. a at 02 my, * at eis “ekoee \"33 “x MS ihe ae as Sos\nrit ‘i “ +5 ‘ 4 oy\nos = eve 4. oa OB ak\né % 24 z= ‘ -\n= yey i = ross e , 3 :\n~ = ‘ 4\n2 oe . ‘ wae\ntas t oat i * -\n‘ avi 23 :\nboys Be ; > SH\nCet Anke seats\nbe ‘ ot ee ; .\n»\n“J\n. :\nat\n7\nr ;\n\n \n\ni 7\n\n—A\n\nRTE 1H: -\n\nwie iit\nSia &\n\nfat\nat\ndi\nWy!\ni\n:\n\n>\na hl a pore yee me wenn nn emmnn\n\n2AM Re\n\n“wf,\n\n \n\natak\na. wen\n\n¢\n\nz\nEE OEE EE EE ER we eH TO Oe\n\n=\nsd\n\n.\n\n«\n\n(\n-\nSines Bess pitebia suassorsegteicd.\n\naye\n=r\n\f"}, {"text": " \n \n\nSLNO EQUIPMENT NAM ‘UNIT | EQPTCODE | ‘&SLNO | LOCATION\n\n \n \n\n \n\n \n \n \n\n \n\n \n \n \n\n \n\n \n\n \n\n \n \n\n \n\n \n\n \n\n \n \n \n \n\n \n\n \n \n \n\n \n\n \n\n \n \n \n \n \n \n \n \n \n \n \n \n\n \n\n \n\n[ee CHP#1&2 | 1- sao i cHPreN 01 | Pc ILMS FUROR = TON UECAMIL/FHP/22/1 cna\nRCC AEC TRI puppies | 2 ad PCH- 2 Net ean? al CRUSHER ens | wecnmuremmne | 17-0020\n\neo MONnOST crema | 1-HOIST-7.5T-PCH- = Ree nooronenes | 7 UEC/HIL/FHP/22/3 \"17-01-2022 16-01-2023\n| MONO Cup#182 | 1-HOIST-6T-SCH > ee ‘| Si ecoNDaRy | 6.0 TON | UEC/HIL/FHP/22/4 | 17-01-2022 | 16-01-2023\n\n| Wowonat aca cHP#1a2 | 1-HOIST10T-SCH |\" y ae 5 | crue aOR 10 TON UEC/HIL/FHP/22/5 | 17-01-2022 | 16-01-2023\n| a | MONNOST CHP#3 , 3-SCH-HOIST-7.5T-A | ID ‘snort ooh ance . * TON | UEC/HIL/FHP/22/7 17-01-2022 16-01-2023 |\n| 8 | See om |e -SCH-HOIST-7.5T-A | = Cot oe wdc, Oia Bes a | TS TON | UEC/HIL/FHP/22/8 17-01-2022 16-01-2023\n~~ | MONORAIL ELECTRIC TIDNO:CHP/EMH-07 | SCH, CRUSHER 012022 | cananis\n\n \n \n \n \n \n \n \n \n \n \n\nFLOOR, OVER SCAB | 5.0 TON UEC/HIL/FHP/22/9\n\nSCH, CV402A/B\n\n \n\n \n\n \n\n \n \n \n \n\nL 9 | HOIST CHP#4 -SCH-HOIST-5.0T-B | S|. No: H-971/ 971A\n| | MONORAIL ELECTRIC 4s To ID NO: CHP/EMH-07\n\n \n\n \n \n \n \n \n \n\n \n\nHOIST\n\n10 ee CHP#4 | 4-SCH-HOIST-20T-B gy Ng: H-970 se eran “20 2.0TON | UEC/HIL/FHP/22/10 | 17-01-2022 | 16-01-2023\na See oO a ipODCVA0SAE | _ —-—\n02,\n| 14 | MONORAILELECTRIC | Cupieg | 4-tp2-Hoist-20T DNO-* ee HEAD PULLEY | 20TON | UEC/HIL/FHP/22/11 | 17-01-2022 16-01-2023\n\nL HOIST-2.0T FLOOR\n\nTUNNEL,OVER SUMP\nCHAIN PULLEY BLOCK | CHP #1 NA | aera 1 TON\n| aa T Vv\nCHAIN PULLEY BLOCK | CHP #1 ee Fa\n\n14 CHAIN PULLEY BLOCK | CHP #1 | 10415 i ‘ = 1 TON\n\n- L 7 ; i oe & VFA _\n\n14-01-2023\n\nUEC/HIL/FHP/22/12 | 15-01-2022\n\nUEC/HIL/FHP/22/13 | 15-01-2022\n\n \n \n\n14-01-2023\n\n \n\n|\nUEC/HIL/FHP/22/14 | 15-01-2022 | 14-01-2023\n\n \n\n \n \n\n \n\n \n\f"}, {"text": " \n\nTOTAL\n11\n\n- wl et\n\nSUPERVI\nSOR\n\n7 ce\n\n \n\nnly\nAIN|A ale\nSale\nlale ld\nSo\n\n \n\n \n\n:\n\n9 wij im\n\n \n\n \n\n \n\naes 3513\nsIB|e\nalg\nalg\n\nNTN\n\na 2 3 ; 3\ngle\n\no\n\nri\n\n \n \n\n \n\n \n\f"}, {"text": " \n\nBasic Value Ne arte me °\n\ngee 339980\" i\n\nO |- 4 Jo} ©: :\nRot oct DW. 159.)\n\nBS! < gum v= [AAPG\nPF - estat OE Boy S*\nWISC. DED__ssssssnens\n\nlet Payable 3 ¢) TS a\n\ntees ee\n\n \n\f"}, {"text": "Deepak Singh\n\nFrom: Swapnil Dixit <[email protected]>\n\nSent: 18 August 2021 16:48\n\nTo: Deepak Singh\n\nCe: Shree Nath Mishra; Pranjal Pathak; Prashant Shripad Nagraj; Kirtiraj Jilkar; Pranjal\nPathak; Arun Kumar Singh; Ravi Kumar; Nishant Shah; Vidyanath Jha\n\nSubject: RE: Agenda for next AOH review.\n\n \n\n \n \n\nCAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you recognize\nthe sender and know the content is safe.\n\n \n\nDear Deepak Ji,\n\n“we thankfully acknowledge the receipt of your trailing mail and would like to confirm our acceptance of 4016- &\n322-man days for a period ( Jun to Dec 20 ) and ( Jan to April 21 ) respectively.\n\nRequest to proceed further in the matter and arrange to release the order at the earliest.\nRegards\n\nSwapnil Dixit\n\nFrom: Deepak Singh <[email protected]>\nSent: 18 August 2021 12:45\nTo: Swapnil Dixit <[email protected]>\nCc: shree.mishra <[email protected]>; pranjal.pathak <[email protected]>; Prashant\nShripad Nagraj <[email protected]>; Kirtiraj Jilkar <[email protected]>;\npranjal.pathak <[email protected]>; arun.s <[email protected]>; Ravi Kumar\n\nw= <[email protected]>; Nishant Shah <[email protected]>; Vidyanath Jha\n<[email protected]>\nSubject: RE: Agenda for next AOH review.\n\nCAUTION: This email originated from outside the organisation. Do not click on any links or attachments\n_ unless you recognise the sender and know the content is safe. Forward suspicious mails to Information\n— Security Team.\n\nSwapnil ji;\nKeeping the discussion, we had in the meeting on 09-08-2021,our Team discussed later and following is the point-\n\n1. As per our procedure , we don’t count the day of Antigen Test as a part of Quarantine ,but at the same time\n| agree that Gate Pass processing was taking time beyond 02 days.\n“ So as a special case , for the period Jun 20 to Dec 20 ,we are considering your request of counting the\nAntigen Test day as a part of Quarantine .Hence total Quarantine Days for that period will be 4016 mandays.\n2. For the period Jan 21 to Apr 21,we have streamlined our Gate Pass Process and delivered the Gate Pass in\n02 days .So for the same period ,we are not considering the day of Antigen test as a part of Quarantine .\nVerified Man-days along with Mr. Gaurav of M/S Thermax is 322 Mandays.File is attached.\n\nKindly acknowledge so we proceed further .\n\nRegards\nDeepak\n\f"}], "inference": true, "model-index": [{"name": "SetFit with BAAI/bge-small-en-v1.5", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9976525821596244, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,297 |
gaudi/opus-mt-sg-en-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-17T00:15:42Z |
2024-10-18T22:42:20+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-sg-en)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-sg-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-sg-en --output_dir ./ctranslate2/opus-mt-sg-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-sg-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-sg-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-sg-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-sg-en) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-sg-en)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-sg-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-sg-en --output_dir ./ctranslate2/opus-mt-sg-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-sg-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-sg-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-sg-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-sg-en) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 45,299 |
yogi/autotrain-amazon_text_sum-730222226
|
yogi
|
text2text-generation
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"en",
"dataset:yogi/autotrain-data-amazon_text_sum",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-04-11T10:39:58Z |
2022-04-12T09:08:15+00:00
| 115 | 0 |
---
datasets:
- yogi/autotrain-data-amazon_text_sum
language: en
tags:
- a
- u
- t
- o
- r
- i
- n
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions: 2986.6520132805163
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 730222226
- CO2 Emissions (in grams): 2986.6520132805163
## Validation Metrics
- Loss: 2.682709217071533
- Rouge1: 19.6069
- Rouge2: 7.3367
- RougeL: 19.2706
- RougeLsum: 19.286
- Gen Len: 5.5731
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/yogi/autotrain-amazon_text_sum-730222226
```
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 730222226
- CO2 Emissions (in grams): 2986.6520132805163
## Validation Metrics
- Loss: 2.682709217071533
- Rouge1: 19.6069
- Rouge2: 7.3367
- RougeL: 19.2706
- RougeLsum: 19.286
- Gen Len: 5.5731
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/yogi/autotrain-amazon_text_sum-730222226
```
|
{"datasets": ["yogi/autotrain-data-amazon_text_sum"], "language": "en", "tags": ["a", "u", "t", "o", "r", "i", "n"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": 2986.6520132805163}
|
task
|
[
"SUMMARIZATION"
] | 45,300 |
FallingPlanet/EmoBERTv2-tiny
|
FallingPlanet
|
text-classification
|
[
"text-classification",
"en",
"license:mit",
"region:us"
] | 2023-12-08T00:23:33Z |
2023-12-08T04:20:59+00:00
| 0 | 0 |
---
language:
- en
license: mit
pipeline_tag: text-classification
---
# EmoBERTv2 Model
This Model Card is a work in progress and will be completed in the future (dataset upload pending, etc)
## Model Description
EmoBERTv2 is a emotion text classification model trained on a large dataset of english social media posts. The model is fine-tuned
from "prajjwal1-bert-tiny" EmoBERTv2 can be used for either further fine-tuning, or for usage in real-time emotion prediction applications
## Datasets
This model was trained on the [Dataset Name] dataset, which is an aggregation of many datasets through relabling and data subsetting. The
dataset has 9 labels: joy, sad, love, anger, disgust, surprise, neutral, fear, and worry
## Training Procedure
EmoBERTv2 was fine-tuned from [Base Model Name] with specific hyperparameters [List Hyperparameters]. Training involved [X] epochs, using a learning rate of [Y].
## Intended Use
This model is intended for emotion classification in [specific domains or general use]. It should be used as a tool for [Specify Applications].
## Performance
EmoBERTv2 demonstrates an accuracy of 86.17% on the [Test Dataset Name]Test set. For detailed performance metrics, refer to [Link to Performance Metrics].
## Bias and Fairness
While efforts have been made to reduce bias, users should be aware of potential biases in the data. It is advisable to test the model in specific contexts.
## Licensing and Usage
EmoBERTv2 is released under the MIT License and can be freely used as outlined in the license.
## Other Model Variations
Additional variations of EmoBERTv2 include [List Variations]. These variations offer different trade-offs in terms of size, speed, and performance.
| null |
Non_BioNLP
|
# EmoBERTv2 Model
This Model Card is a work in progress and will be completed in the future (dataset upload pending, etc)
## Model Description
EmoBERTv2 is a emotion text classification model trained on a large dataset of english social media posts. The model is fine-tuned
from "prajjwal1-bert-tiny" EmoBERTv2 can be used for either further fine-tuning, or for usage in real-time emotion prediction applications
## Datasets
This model was trained on the [Dataset Name] dataset, which is an aggregation of many datasets through relabling and data subsetting. The
dataset has 9 labels: joy, sad, love, anger, disgust, surprise, neutral, fear, and worry
## Training Procedure
EmoBERTv2 was fine-tuned from [Base Model Name] with specific hyperparameters [List Hyperparameters]. Training involved [X] epochs, using a learning rate of [Y].
## Intended Use
This model is intended for emotion classification in [specific domains or general use]. It should be used as a tool for [Specify Applications].
## Performance
EmoBERTv2 demonstrates an accuracy of 86.17% on the [Test Dataset Name]Test set. For detailed performance metrics, refer to [Link to Performance Metrics].
## Bias and Fairness
While efforts have been made to reduce bias, users should be aware of potential biases in the data. It is advisable to test the model in specific contexts.
## Licensing and Usage
EmoBERTv2 is released under the MIT License and can be freely used as outlined in the license.
## Other Model Variations
Additional variations of EmoBERTv2 include [List Variations]. These variations offer different trade-offs in terms of size, speed, and performance.
|
{"language": ["en"], "license": "mit", "pipeline_tag": "text-classification"}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,301 |
Dragneel/ticket-classification-v1
|
Dragneel
| null |
[
"safetensors",
"distilbert",
"region:us"
] | 2025-03-10T05:35:20Z |
2025-03-10T05:55:52+00:00
| 66 | 0 |
---
{}
---
language:
- en
license: apache-2.0
tags:
- text-classification
- customer-support
- ticket-classification
- distilbert
datasets:
- custom
metrics:
- accuracy
model-index:
- name: ticket-classification-v1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Custom Ticket Dataset
type: custom
metrics:
- name: Accuracy
type: accuracy
value: 0.9485
---
# Model Card for Dragneel/ticket-classification-v1
This model fine-tunes the DistilBERT base uncased model to classify customer support tickets into four categories. It achieves **94.85% accuracy** on the evaluation dataset.
## Model Details
### Model Description
This model is designed to automatically categorize customer support tickets based on their content. It can classify tickets into the following categories:
- **Billing Question**: Issues related to billing, payments, subscriptions, etc.
- **Feature Request**: Suggestions for new features or improvements
- **General Inquiry**: General questions about products or services
- **Technical Issue**: Technical problems, bugs, errors, etc.
The model uses DistilBERT as its base architecture - a distilled version of BERT that is smaller, faster, and more efficient while retaining good performance.
- **Developed by:** Dragneel
- **Model type:** Text Classification
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased)
## Uses
### Direct Use
This model can be directly used for:
- Automated ticket routing and prioritization
- Customer support workflow optimization
- Analytics on ticket categories
- Real-time ticket classification
### Out-of-Scope Use
This model should not be used for:
- Processing sensitive customer information without proper privacy measures
- Making final decisions without human review for complex or critical issues
- Classifying tickets in languages other than English
- Categorizing content outside the customer support domain
## Bias, Risks, and Limitations
- The model was trained on a specific dataset and may not generalize well to significantly different customer support contexts
- Performance may degrade for very technical or domain-specific tickets not represented in the training data
- Very short or ambiguous tickets might be misclassified
### Recommendations
Users should review classifications for accuracy, especially for tickets that fall on the boundary between categories. Consider retraining the model on domain-specific data if using in a specialized industry.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline
# Load the model
classifier = pipeline("text-classification", model="Dragneel/ticket-classification-v1")
# Example tickets
tickets = [
"I was charged twice for my subscription this month. Can you help?",
"The app keeps crashing whenever I try to upload a file",
"Would it be possible to add dark mode to the dashboard?",
"What are your business hours?"
]
# Classify tickets
for ticket in tickets:
result = classifier(ticket)
print(f"Ticket: {ticket}")
print(f"Category: {result[0]['label']}")
print(f"Confidence: {result[0]['score']:.4f}")
print()
```
### ID to Label Mapping
```python
id_to_label = {
0: 'Billing Question',
1: 'Feature Request',
2: 'General Inquiry',
3: 'Technical Issue'
}
```
## Training Details
### Training Data
The model was trained on a dataset of customer support tickets that include diverse examples across all four categories. Each ticket typically contains a title and description detailing the customer's issue or request.
### Training Procedure
#### Training Hyperparameters
- **Learning rate:** 0.001
- **Batch size:** 2
- **Epochs:** 10 (with early stopping)
- **Weight decay:** 0.01
- **Early stopping patience:** 2 epochs
- **Optimizer:** AdamW
- **Training regime:** fp32
## Evaluation
### Testing Data, Factors & Metrics
#### Metrics
The model is evaluated using the following metrics:
- Accuracy: Percentage of correctly classified tickets
- Loss: Cross-entropy loss on the evaluation dataset
### Results
The model achieved the following metrics on the evaluation dataset:
| Metric | Value |
|--------|-------|
| Accuracy | 94.85% |
| Loss | 0.248 |
| Runtime | 16.01s |
| Samples/second | 23.05 |
## Technical Specifications
### Model Architecture and Objective
The model architecture is based on DistilBERT, a distilled version of BERT. It consists of the base DistilBERT model with a classification head layer on top. The model was fine-tuned using cross-entropy loss to predict the correct category for each ticket.
## Model Card Contact
For inquiries about this model, please open an issue on the model repository.
```
| null |
Non_BioNLP
|
language:
- en
license: apache-2.0
tags:
- text-classification
- customer-support
- ticket-classification
- distilbert
datasets:
- custom
metrics:
- accuracy
model-index:
- name: ticket-classification-v1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Custom Ticket Dataset
type: custom
metrics:
- name: Accuracy
type: accuracy
value: 0.9485
---
# Model Card for Dragneel/ticket-classification-v1
This model fine-tunes the DistilBERT base uncased model to classify customer support tickets into four categories. It achieves **94.85% accuracy** on the evaluation dataset.
## Model Details
### Model Description
This model is designed to automatically categorize customer support tickets based on their content. It can classify tickets into the following categories:
- **Billing Question**: Issues related to billing, payments, subscriptions, etc.
- **Feature Request**: Suggestions for new features or improvements
- **General Inquiry**: General questions about products or services
- **Technical Issue**: Technical problems, bugs, errors, etc.
The model uses DistilBERT as its base architecture - a distilled version of BERT that is smaller, faster, and more efficient while retaining good performance.
- **Developed by:** Dragneel
- **Model type:** Text Classification
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased)
## Uses
### Direct Use
This model can be directly used for:
- Automated ticket routing and prioritization
- Customer support workflow optimization
- Analytics on ticket categories
- Real-time ticket classification
### Out-of-Scope Use
This model should not be used for:
- Processing sensitive customer information without proper privacy measures
- Making final decisions without human review for complex or critical issues
- Classifying tickets in languages other than English
- Categorizing content outside the customer support domain
## Bias, Risks, and Limitations
- The model was trained on a specific dataset and may not generalize well to significantly different customer support contexts
- Performance may degrade for very technical or domain-specific tickets not represented in the training data
- Very short or ambiguous tickets might be misclassified
### Recommendations
Users should review classifications for accuracy, especially for tickets that fall on the boundary between categories. Consider retraining the model on domain-specific data if using in a specialized industry.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline
# Load the model
classifier = pipeline("text-classification", model="Dragneel/ticket-classification-v1")
# Example tickets
tickets = [
"I was charged twice for my subscription this month. Can you help?",
"The app keeps crashing whenever I try to upload a file",
"Would it be possible to add dark mode to the dashboard?",
"What are your business hours?"
]
# Classify tickets
for ticket in tickets:
result = classifier(ticket)
print(f"Ticket: {ticket}")
print(f"Category: {result[0]['label']}")
print(f"Confidence: {result[0]['score']:.4f}")
print()
```
### ID to Label Mapping
```python
id_to_label = {
0: 'Billing Question',
1: 'Feature Request',
2: 'General Inquiry',
3: 'Technical Issue'
}
```
## Training Details
### Training Data
The model was trained on a dataset of customer support tickets that include diverse examples across all four categories. Each ticket typically contains a title and description detailing the customer's issue or request.
### Training Procedure
#### Training Hyperparameters
- **Learning rate:** 0.001
- **Batch size:** 2
- **Epochs:** 10 (with early stopping)
- **Weight decay:** 0.01
- **Early stopping patience:** 2 epochs
- **Optimizer:** AdamW
- **Training regime:** fp32
## Evaluation
### Testing Data, Factors & Metrics
#### Metrics
The model is evaluated using the following metrics:
- Accuracy: Percentage of correctly classified tickets
- Loss: Cross-entropy loss on the evaluation dataset
### Results
The model achieved the following metrics on the evaluation dataset:
| Metric | Value |
|--------|-------|
| Accuracy | 94.85% |
| Loss | 0.248 |
| Runtime | 16.01s |
| Samples/second | 23.05 |
## Technical Specifications
### Model Architecture and Objective
The model architecture is based on DistilBERT, a distilled version of BERT. It consists of the base DistilBERT model with a classification head layer on top. The model was fine-tuned using cross-entropy loss to predict the correct category for each ticket.
## Model Card Contact
For inquiries about this model, please open an issue on the model repository.
```
|
{}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,302 |
gaudi/opus-mt-cy-en-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-17T00:08:58Z |
2024-10-18T22:01:02+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-cy-en)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-cy-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-cy-en --output_dir ./ctranslate2/opus-mt-cy-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-cy-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-cy-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-cy-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-cy-en) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-cy-en)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-cy-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-cy-en --output_dir ./ctranslate2/opus-mt-cy-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-cy-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-cy-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-cy-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-cy-en) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 45,303 |
Iker/Neurona-2b
|
Iker
|
text-generation
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"synthetic",
"conversational",
"es",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:Iker/OpenHermes-2.5-Spanish",
"dataset:projecte-aina/RAG_Multilingual",
"dataset:Iker/Document-Translation-en-es",
"dataset:Iker/InstructTranslation-EN-ES",
"dataset:Helsinki-NLP/opus-100",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:glaiveai/glaive-function-calling-v2",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-13T08:28:28Z |
2024-05-13T09:45:12+00:00
| 28 | 1 |
---
base_model: google/gemma-2b
datasets:
- teknium/OpenHermes-2.5
- Iker/OpenHermes-2.5-Spanish
- projecte-aina/RAG_Multilingual
- Iker/Document-Translation-en-es
- Iker/InstructTranslation-EN-ES
- Helsinki-NLP/opus-100
- glaiveai/glaive-code-assistant-v3
- glaiveai/glaive-function-calling-v2
language:
- es
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- synthetic
---

# Neurona 2B Beta: Un Modelo de Lenguage en Español
> Esta es una versión preliminar del dataset card. El modelo está en desarrollo y no es la versión final. Si quieres saber más sobre este modelo, escribe a [email protected]
Neurona 2B es un modelo de lenguaje en Español. Esta es la primera iteración y un experimento para poner a punto los scripts y la infraestructura.
Neurona 2B ha sido entrenado con los siguiente datasets
- [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5)
- [Iker/OpenHermes-2.5-Spanish](https://huggingface.co/datasets/Iker/OpenHermes-2.5-Spanish)
- [Iker/Document-Translation-en-es](https://huggingface.co/datasets/Iker/Document-Translation-en-es)
- [Iker/InstructTranslation-EN-ES](https://huggingface.co/datasets/Iker/InstructTranslation-EN-ES)
- [Helsinki-NLP/opus-100 (en-es, only a few examples to reach 1 million instructions)](https://huggingface.co/datasets/Helsinki-NLP/opus-100)
- [projecte-aina/RAG_Multilingual(es only, 3701 examples)](https://huggingface.co/datasets/projecte-aina/RAG_Multilingual)
- [glaiveai/glaive-code-assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3)
- [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
Esta mezcla de datasets en Inglés y Español, permite al modelo adquirir diferentes capacidades, como RAG, function calling, code assistant, question answering, summarization... tanto en Inglés como en Español.
# Entrenamiento
Este modelo se ha entrado usando 4xNvidia A100 80Gb y axolotl
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Esta es la configuración usada
```yaml
base_model: google/gemma-2b
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
is_falcon_derived_model:
is_llama_derived_model:
is_qwen_derived_model:
is_mistral_derived_model:
load_in_8bit: false
load_in_4bit: false
strict: false
device_map: null
datasets:
- path: /ikerlariak/igarcia945/Mortadelo-Filemon/final_dataset/OpenHermes-2.5-Spanish_fix_gpt.jsonl
type: sharegpt
conversation: chatml
field: conversations
roles:
input:
- system
- gpt
output:
- human
- path: /ikerlariak/igarcia945/Mortadelo-Filemon/final_dataset/OpenHermes-2.5-English.jsonl
type: sharegpt
conversation: chatml
field: conversations
- path: /ikerlariak/igarcia945/Mortadelo-Filemon/final_dataset/glaive-function-calling-v2.jsonl
type: sharegpt
conversation: chatml
field: conversations
roles:
input:
- system
- gpt
- tool
output:
- human
- path: /ikerlariak/igarcia945/Mortadelo-Filemon/final_dataset/glaive-code-assistant-v3-small.jsonl
type: sharegpt
conversation: chatml
field: conversations
roles:
input:
- system
- gpt
output:
- human
chat_template: chatml
dataset_prepared_path: /ikerlariak/igarcia945/Mortadelo-Filemon/gemma-2b-spanish/dataset
shuffle_merged_datasets: true
val_set_size: 0.005
output_dir: /ikerlariak/igarcia945/Mortadelo-Filemon/gemma-2b-spanish/
adapter:
lora_model_dir:
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: false
special_tokens:
bos_token: "<|im_start|>"
eos_token: "<|im_end|>"
pad_token: "<|end_of_text|>"
tokens:
- "<|begin_of_text|>"
- "<|end_of_text|>"
- "<|im_start|>"
- "<|im_end|>"
- "<|start_header_id|>"
- "<|end_header_id|>"
- "<tool_call>"
- "<tool_response>"
- "<tools>"
- "</tool_call>"
- "</tool_response>"
- "</tools>"
- "<reserved1>"
- "<reserved2>"
- "<reserved3>"
- "<reserved4>"
neftune_noise_alpha: 5
wandb_project: Mortadelo&Filemon
wandb_entity: igarciaf
wandb_watch:
wandb_name: gemma2b
wandb_log_model:
gradient_accumulation_steps: 32
micro_batch_size: 2
eval_batch_size: 2
num_epochs: 3
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 0.00007
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.03
evals_per_epoch: 4
eval_table_size:
save_strategy: "no"
debug:
deepspeed: /ikerlariak/igarcia945/Mortadelo-Filemon/train_configs/deepspeed_zero3.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
seed: 33
```
| null |
Non_BioNLP
|

# Neurona 2B Beta: Un Modelo de Lenguage en Español
> Esta es una versión preliminar del dataset card. El modelo está en desarrollo y no es la versión final. Si quieres saber más sobre este modelo, escribe a [email protected]
Neurona 2B es un modelo de lenguaje en Español. Esta es la primera iteración y un experimento para poner a punto los scripts y la infraestructura.
Neurona 2B ha sido entrenado con los siguiente datasets
- [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5)
- [Iker/OpenHermes-2.5-Spanish](https://huggingface.co/datasets/Iker/OpenHermes-2.5-Spanish)
- [Iker/Document-Translation-en-es](https://huggingface.co/datasets/Iker/Document-Translation-en-es)
- [Iker/InstructTranslation-EN-ES](https://huggingface.co/datasets/Iker/InstructTranslation-EN-ES)
- [Helsinki-NLP/opus-100 (en-es, only a few examples to reach 1 million instructions)](https://huggingface.co/datasets/Helsinki-NLP/opus-100)
- [projecte-aina/RAG_Multilingual(es only, 3701 examples)](https://huggingface.co/datasets/projecte-aina/RAG_Multilingual)
- [glaiveai/glaive-code-assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3)
- [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
Esta mezcla de datasets en Inglés y Español, permite al modelo adquirir diferentes capacidades, como RAG, function calling, code assistant, question answering, summarization... tanto en Inglés como en Español.
# Entrenamiento
Este modelo se ha entrado usando 4xNvidia A100 80Gb y axolotl
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Esta es la configuración usada
```yaml
base_model: google/gemma-2b
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
is_falcon_derived_model:
is_llama_derived_model:
is_qwen_derived_model:
is_mistral_derived_model:
load_in_8bit: false
load_in_4bit: false
strict: false
device_map: null
datasets:
- path: /ikerlariak/igarcia945/Mortadelo-Filemon/final_dataset/OpenHermes-2.5-Spanish_fix_gpt.jsonl
type: sharegpt
conversation: chatml
field: conversations
roles:
input:
- system
- gpt
output:
- human
- path: /ikerlariak/igarcia945/Mortadelo-Filemon/final_dataset/OpenHermes-2.5-English.jsonl
type: sharegpt
conversation: chatml
field: conversations
- path: /ikerlariak/igarcia945/Mortadelo-Filemon/final_dataset/glaive-function-calling-v2.jsonl
type: sharegpt
conversation: chatml
field: conversations
roles:
input:
- system
- gpt
- tool
output:
- human
- path: /ikerlariak/igarcia945/Mortadelo-Filemon/final_dataset/glaive-code-assistant-v3-small.jsonl
type: sharegpt
conversation: chatml
field: conversations
roles:
input:
- system
- gpt
output:
- human
chat_template: chatml
dataset_prepared_path: /ikerlariak/igarcia945/Mortadelo-Filemon/gemma-2b-spanish/dataset
shuffle_merged_datasets: true
val_set_size: 0.005
output_dir: /ikerlariak/igarcia945/Mortadelo-Filemon/gemma-2b-spanish/
adapter:
lora_model_dir:
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: false
special_tokens:
bos_token: "<|im_start|>"
eos_token: "<|im_end|>"
pad_token: "<|end_of_text|>"
tokens:
- "<|begin_of_text|>"
- "<|end_of_text|>"
- "<|im_start|>"
- "<|im_end|>"
- "<|start_header_id|>"
- "<|end_header_id|>"
- "<tool_call>"
- "<tool_response>"
- "<tools>"
- "</tool_call>"
- "</tool_response>"
- "</tools>"
- "<reserved1>"
- "<reserved2>"
- "<reserved3>"
- "<reserved4>"
neftune_noise_alpha: 5
wandb_project: Mortadelo&Filemon
wandb_entity: igarciaf
wandb_watch:
wandb_name: gemma2b
wandb_log_model:
gradient_accumulation_steps: 32
micro_batch_size: 2
eval_batch_size: 2
num_epochs: 3
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 0.00007
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.03
evals_per_epoch: 4
eval_table_size:
save_strategy: "no"
debug:
deepspeed: /ikerlariak/igarcia945/Mortadelo-Filemon/train_configs/deepspeed_zero3.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
seed: 33
```
|
{"base_model": "google/gemma-2b", "datasets": ["teknium/OpenHermes-2.5", "Iker/OpenHermes-2.5-Spanish", "projecte-aina/RAG_Multilingual", "Iker/Document-Translation-en-es", "Iker/InstructTranslation-EN-ES", "Helsinki-NLP/opus-100", "glaiveai/glaive-code-assistant-v3", "glaiveai/glaive-function-calling-v2"], "language": ["es", "en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["synthetic"]}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | 45,304 |
Graphcore/gptj-mnli
|
Graphcore
|
text-generation
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"text-classification",
"en",
"dataset:glue",
"arxiv:1910.10683",
"arxiv:2104.09864",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-22T10:35:28Z |
2022-08-25T11:39:23+00:00
| 0 | 1 |
---
datasets:
- glue
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- pytorch
- causal-lm
- text-classification
- text-generation
widget:
- text: 'mnli hypothesis: Your contributions were of no help with our students'' education.
premise: Your contribution helped make it possible for us to provide our students
with a quality education. target:'
model-index:
- name: Graphcore/gptj-mnli
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MNLI
type: glue
split: validation_mismatched
args: mnli
metrics:
- type: accuracy
value: 0.825
name: Accuracy
config: mnli_mismatched
---
# Graphcore/gptj-mnli
This model is the fine-tuned version of [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B) on the [GLUE MNLI dataset](https://huggingface.co/datasets/glue#mnli).
MNLI dataset consists of pairs of sentences, a *premise* and a *hypothesis*.
The task is to predict the relation between the premise and the hypothesis, which can be:
- `entailment`: hypothesis follows from the premise,
- `contradiction`: hypothesis contradicts the premise,
- `neutral`: hypothesis and premise are unrelated.
We finetune the model as a Causal Language Model (CLM): given a sequence of tokens, the task is to predict the next token.
To achieve this, we create a stylised prompt string, following the approach of [T5 paper](https://arxiv.org/pdf/1910.10683.pdf).
```shell
mnli hypothesis: {hypothesis} premise: {premise} target: {class_label} <|endoftext|>
```
For example:
```
mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target: contradiction <|endoftext|>
```
## Model description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
[EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B), our starting point for finetuning, is trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Fine-tuning and validation data
Fine tuning is done using the `train` split of the GLUE MNLI dataset and the performance is measured using the [validation_mismatched](https://huggingface.co/datasets/glue#mnli_mismatched) split.
`validation_mismatched` means validation examples are not derived from the same sources as those in the training set and therefore not closely resembling any of the examples seen at training time.
Data splits for the mnli dataset are the following
|train |validation_matched|validation_mismatched|
|-----:|-----------------:|--------------------:|
|392702| 9815| 9832|
## Fine-tuning procedure
Fine tuned on a Graphcore IPU-POD64 using `popxl`.
Prompt sentences are tokenized and packed together to form 1024 token sequences, following [HF packing algorithm](https://github.com/huggingface/transformers/blob/v4.20.1/examples/pytorch/language-modeling/run_clm.py). No padding is used.
The packing process works in groups of 1000 examples and discards any remainder from each group that isn't a whole sequence.
For the 392,702 training examples this gives a total of 17,762 sequences per epoch.
Since the model is trained to predict the next token, labels are simply the input sequence shifted by one token.
Given the training format, no extra care is needed to account for different sequences: the model does not need to know which sentence a token belongs to.
### Hyperparameters:
- optimiser: AdamW (beta1: 0.9, beta2: 0.999, eps: 1e-6, weight decay: 0.0, learning rate: 5e-6)
- learning rate schedule: warmup schedule (min: 1e-7, max: 5e-6, warmup proportion: 0.005995)
- batch size: 128
- training steps: 300. Each epoch consists of ceil(17,762/128) steps, hence 300 steps are approximately 2 epochs.
## Performance
The resulting model matches SOTA performance with 82.5% accuracy.
```
Total number of examples 9832
Number with badly formed result 0
Number with incorrect result 1725
Number with correct result 8107
[82.5%]
example 0 = {'prompt_text': "mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target:", 'class_label': 'contradiction'}
result = {'generated_text': ' contradiction'}
First 10 generated_text and expected class_label results:
0: 'contradiction' contradiction
1: 'contradiction' contradiction
2: 'entailment' entailment
3: 'contradiction' contradiction
4: 'entailment' entailment
5: 'entailment' entailment
6: 'contradiction' contradiction
7: 'contradiction' contradiction
8: 'entailment' neutral
9: 'contradiction' contradiction
```
## How to use
The model can be easily loaded using AutoModelForCausalLM.
You can use the pipeline API for text generation.
```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-j-6B')
hf_model = AutoModelForCausalLM.from_pretrained("Graphcore/gptj-mnli", pad_token_id=tokenizer.eos_token_id)
generator = pipeline('text-generation', model=hf_model, tokenizer=tokenizer)
prompt = "mnli hypothesis: Your contributions were of no help with our students' education." \
"premise: Your contribution helped make it possible for us to provide our students with a quality education. target:"
out = generator(prompt, return_full_text=False, max_new_tokens=5, top_k=1)
# [{'generated_text': ' contradiction'}]
```
You can create prompt-like inputs starting from GLUE MNLI dataset using functions provided in the `data_utils.py` script.
```python
from datasets import load_dataset
from data_utils import form_text, split_text
dataset = load_dataset('glue', 'mnli', split='validation_mismatched')
dataset = dataset.map(
form_text, remove_columns=['hypothesis', 'premise','label', 'idx'])
# dataset[0] {'text': "mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target: contradiction<|endoftext|>"}
dataset = dataset.map(split_text, remove_columns=['text'])
# dataset[0] {'prompt_text': "mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target:",
# 'class_label': 'contradiction'}
```
| null |
Non_BioNLP
|
# Graphcore/gptj-mnli
This model is the fine-tuned version of [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B) on the [GLUE MNLI dataset](https://huggingface.co/datasets/glue#mnli).
MNLI dataset consists of pairs of sentences, a *premise* and a *hypothesis*.
The task is to predict the relation between the premise and the hypothesis, which can be:
- `entailment`: hypothesis follows from the premise,
- `contradiction`: hypothesis contradicts the premise,
- `neutral`: hypothesis and premise are unrelated.
We finetune the model as a Causal Language Model (CLM): given a sequence of tokens, the task is to predict the next token.
To achieve this, we create a stylised prompt string, following the approach of [T5 paper](https://arxiv.org/pdf/1910.10683.pdf).
```shell
mnli hypothesis: {hypothesis} premise: {premise} target: {class_label} <|endoftext|>
```
For example:
```
mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target: contradiction <|endoftext|>
```
## Model description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
[EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B), our starting point for finetuning, is trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Fine-tuning and validation data
Fine tuning is done using the `train` split of the GLUE MNLI dataset and the performance is measured using the [validation_mismatched](https://huggingface.co/datasets/glue#mnli_mismatched) split.
`validation_mismatched` means validation examples are not derived from the same sources as those in the training set and therefore not closely resembling any of the examples seen at training time.
Data splits for the mnli dataset are the following
|train |validation_matched|validation_mismatched|
|-----:|-----------------:|--------------------:|
|392702| 9815| 9832|
## Fine-tuning procedure
Fine tuned on a Graphcore IPU-POD64 using `popxl`.
Prompt sentences are tokenized and packed together to form 1024 token sequences, following [HF packing algorithm](https://github.com/huggingface/transformers/blob/v4.20.1/examples/pytorch/language-modeling/run_clm.py). No padding is used.
The packing process works in groups of 1000 examples and discards any remainder from each group that isn't a whole sequence.
For the 392,702 training examples this gives a total of 17,762 sequences per epoch.
Since the model is trained to predict the next token, labels are simply the input sequence shifted by one token.
Given the training format, no extra care is needed to account for different sequences: the model does not need to know which sentence a token belongs to.
### Hyperparameters:
- optimiser: AdamW (beta1: 0.9, beta2: 0.999, eps: 1e-6, weight decay: 0.0, learning rate: 5e-6)
- learning rate schedule: warmup schedule (min: 1e-7, max: 5e-6, warmup proportion: 0.005995)
- batch size: 128
- training steps: 300. Each epoch consists of ceil(17,762/128) steps, hence 300 steps are approximately 2 epochs.
## Performance
The resulting model matches SOTA performance with 82.5% accuracy.
```
Total number of examples 9832
Number with badly formed result 0
Number with incorrect result 1725
Number with correct result 8107
[82.5%]
example 0 = {'prompt_text': "mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target:", 'class_label': 'contradiction'}
result = {'generated_text': ' contradiction'}
First 10 generated_text and expected class_label results:
0: 'contradiction' contradiction
1: 'contradiction' contradiction
2: 'entailment' entailment
3: 'contradiction' contradiction
4: 'entailment' entailment
5: 'entailment' entailment
6: 'contradiction' contradiction
7: 'contradiction' contradiction
8: 'entailment' neutral
9: 'contradiction' contradiction
```
## How to use
The model can be easily loaded using AutoModelForCausalLM.
You can use the pipeline API for text generation.
```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-j-6B')
hf_model = AutoModelForCausalLM.from_pretrained("Graphcore/gptj-mnli", pad_token_id=tokenizer.eos_token_id)
generator = pipeline('text-generation', model=hf_model, tokenizer=tokenizer)
prompt = "mnli hypothesis: Your contributions were of no help with our students' education." \
"premise: Your contribution helped make it possible for us to provide our students with a quality education. target:"
out = generator(prompt, return_full_text=False, max_new_tokens=5, top_k=1)
# [{'generated_text': ' contradiction'}]
```
You can create prompt-like inputs starting from GLUE MNLI dataset using functions provided in the `data_utils.py` script.
```python
from datasets import load_dataset
from data_utils import form_text, split_text
dataset = load_dataset('glue', 'mnli', split='validation_mismatched')
dataset = dataset.map(
form_text, remove_columns=['hypothesis', 'premise','label', 'idx'])
# dataset[0] {'text': "mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target: contradiction<|endoftext|>"}
dataset = dataset.map(split_text, remove_columns=['text'])
# dataset[0] {'prompt_text': "mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target:",
# 'class_label': 'contradiction'}
```
|
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["pytorch", "causal-lm", "text-classification", "text-generation"], "widget": [{"text": "mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target:"}], "model-index": [{"name": "Graphcore/gptj-mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MNLI", "type": "glue", "split": "validation_mismatched", "args": "mnli"}, "metrics": [{"type": "accuracy", "value": 0.825, "name": "Accuracy", "config": "mnli_mismatched"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,305 |
captainjeff/test_trainer
|
captainjeff
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-14T18:43:59Z |
2023-06-14T18:44:40+00:00
| 9 | 0 |
---
datasets:
- yelp_review_full
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: test_trainer
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- type: accuracy
value: 0.588
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0324
- Accuracy: 0.588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.1703 | 0.493 |
| No log | 2.0 | 250 | 1.0098 | 0.584 |
| No log | 3.0 | 375 | 1.0324 | 0.588 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0324
- Accuracy: 0.588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.1703 | 0.493 |
| No log | 2.0 | 250 | 1.0098 | 0.584 |
| No log | 3.0 | 375 | 1.0324 | 0.588 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"datasets": ["yelp_review_full"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "test_trainer", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "yelp_review_full", "type": "yelp_review_full", "config": "yelp_review_full", "split": "test", "args": "yelp_review_full"}, "metrics": [{"type": "accuracy", "value": 0.588, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,306 |
Mozilla/distilbart-cnn-12-6
|
Mozilla
|
summarization
|
[
"transformers.js",
"onnx",
"bart",
"text2text-generation",
"summarization",
"base_model:sshleifer/distilbart-cnn-12-6",
"base_model:quantized:sshleifer/distilbart-cnn-12-6",
"region:us"
] | 2025-01-08T01:35:57Z |
2025-01-08T15:00:02+00:00
| 82 | 2 |
---
base_model: sshleifer/distilbart-cnn-12-6
library_name: transformers.js
pipeline_tag: summarization
---
https://huggingface.co/sshleifer/distilbart-cnn-12-6 with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
| null |
Non_BioNLP
| ERROR: type should be string, got "\nhttps://huggingface.co/sshleifer/distilbart-cnn-12-6 with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)." |
{"base_model": "sshleifer/distilbart-cnn-12-6", "library_name": "transformers.js", "pipeline_tag": "summarization"}
|
task
|
[
"SUMMARIZATION"
] | 45,307 |
deepachalapathi/parasci3_1
|
deepachalapathi
|
text-classification
|
[
"setfit",
"pytorch",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-TinyBERT-L6-v2",
"base_model:finetune:sentence-transformers/paraphrase-TinyBERT-L6-v2",
"region:us"
] | 2023-12-28T08:23:08Z |
2023-12-28T08:23:35+00:00
| 55 | 0 |
---
base_model: sentence-transformers/paraphrase-TinyBERT-L6-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: part-of-speech ( pos ) tagging is a fundamental language analysis task---part-of-speech
( pos ) tagging is a fundamental nlp task , used by a wide variety of applications
- text: the two baseline methods were implemented using scikit-learn in python---the
models were implemented using scikit-learn module
- text: semantic parsing is the task of converting a sentence into a representation
of its meaning , usually in a logical form grounded in the symbols of some fixed
ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11
)---for this language model , we built a trigram language model with kneser-ney
smoothing using srilm from the same automatically segmented corpus
- text: the results show that our model can clearly outperform the baselines in terms
of three evaluation metrics---for the extractive or abstractive summaries , we
use rouge scores , a metric used to evaluate automatic summarization performance
, to measure the pairwise agreement of summaries from different annotators
- text: language models were built with srilm , modified kneser-ney smoothing , default
pruning , and order 5---the language model used was a 5-gram with modified kneserney
smoothing , built with srilm toolkit
inference: true
---
# SetFit with sentence-transformers/paraphrase-TinyBERT-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-TinyBERT-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-TinyBERT-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-TinyBERT-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-TinyBERT-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'the defacto standard metric in machine translation is bleu---from character representations , we propose to generate vector representations of entire tweets from characters in our tweet2vec model'</li><li>'arabic is a highly inflectional language with 85 % of words derived from trilateral roots ( alfedaghi and al-anzi 1989 )---chen et al derive bilingual subtree constraints with auto-parsed source-language sentences'</li><li>'labeling of sentence boundaries is a necessary prerequisite for many natural language processing tasks , including part-of-speech tagging and sentence alignment---we have proposed a model for video description which uses neural networks for the entire pipeline from pixels to sentences'</li></ul> |
| 1 | <ul><li>'in this paper , we present a comprehensive analysis of the relationship between personal traits and brand preferences---in previous research , in this study , we want to systematically investigate the relationship between a comprehensive set of personal traits and brand preferences'</li><li>'the 50-dimensional pre-trained word embeddings are provided by glove , which are fixed during our model training---we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training'</li><li>'we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("whateverweird17/parasci3_1")
# Run inference
preds = model("the two baseline methods were implemented using scikit-learn in python---the models were implemented using scikit-learn module")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 27 | 35.8125 | 54 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (10, 10)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.025 | 1 | 0.1715 | - |
| 1.25 | 50 | 0.0028 | - |
| 2.5 | 100 | 0.0005 | - |
| 3.75 | 150 | 0.0002 | - |
| 5.0 | 200 | 0.0003 | - |
| 6.25 | 250 | 0.0001 | - |
| 7.5 | 300 | 0.0002 | - |
| 8.75 | 350 | 0.0001 | - |
| 10.0 | 400 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.33.0
- PyTorch: 2.0.0
- Datasets: 2.16.0
- Tokenizers: 0.13.3
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with sentence-transformers/paraphrase-TinyBERT-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-TinyBERT-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-TinyBERT-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-TinyBERT-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-TinyBERT-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'the defacto standard metric in machine translation is bleu---from character representations , we propose to generate vector representations of entire tweets from characters in our tweet2vec model'</li><li>'arabic is a highly inflectional language with 85 % of words derived from trilateral roots ( alfedaghi and al-anzi 1989 )---chen et al derive bilingual subtree constraints with auto-parsed source-language sentences'</li><li>'labeling of sentence boundaries is a necessary prerequisite for many natural language processing tasks , including part-of-speech tagging and sentence alignment---we have proposed a model for video description which uses neural networks for the entire pipeline from pixels to sentences'</li></ul> |
| 1 | <ul><li>'in this paper , we present a comprehensive analysis of the relationship between personal traits and brand preferences---in previous research , in this study , we want to systematically investigate the relationship between a comprehensive set of personal traits and brand preferences'</li><li>'the 50-dimensional pre-trained word embeddings are provided by glove , which are fixed during our model training---we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training'</li><li>'we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("whateverweird17/parasci3_1")
# Run inference
preds = model("the two baseline methods were implemented using scikit-learn in python---the models were implemented using scikit-learn module")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 27 | 35.8125 | 54 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (10, 10)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.025 | 1 | 0.1715 | - |
| 1.25 | 50 | 0.0028 | - |
| 2.5 | 100 | 0.0005 | - |
| 3.75 | 150 | 0.0002 | - |
| 5.0 | 200 | 0.0003 | - |
| 6.25 | 250 | 0.0001 | - |
| 7.5 | 300 | 0.0002 | - |
| 8.75 | 350 | 0.0001 | - |
| 10.0 | 400 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.33.0
- PyTorch: 2.0.0
- Datasets: 2.16.0
- Tokenizers: 0.13.3
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-TinyBERT-L6-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "part-of-speech ( pos ) tagging is a fundamental language analysis task---part-of-speech ( pos ) tagging is a fundamental nlp task , used by a wide variety of applications"}, {"text": "the two baseline methods were implemented using scikit-learn in python---the models were implemented using scikit-learn module"}, {"text": "semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 )---for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus"}, {"text": "the results show that our model can clearly outperform the baselines in terms of three evaluation metrics---for the extractive or abstractive summaries , we use rouge scores , a metric used to evaluate automatic summarization performance , to measure the pairwise agreement of summaries from different annotators"}, {"text": "language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5---the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit"}], "inference": true}
|
task
|
[
"TEXT_CLASSIFICATION",
"TRANSLATION",
"SUMMARIZATION"
] | 45,308 |
prasenjeet099/zllm2
|
prasenjeet099
|
zero-shot-classification
|
[
"zbrain",
"text-classification",
"pytorch",
"tensorflow",
"zero-shot-classification",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:1911.02116",
"license:mit",
"region:us"
] | 2025-03-10T03:51:02Z |
2025-03-10T16:41:29+00:00
| 26 | 0 |
---
datasets:
- multi_nli
- xnli
language:
- multilingual
- en
- fr
- es
- de
- el
- bg
- ru
- tr
- ar
- vi
- th
- zh
- hi
- sw
- ur
license: mit
pipeline_tag: zero-shot-classification
tags:
- text-classification
- pytorch
- tensorflow
widget:
- text: За кого вы голосуете в 2020 году?
candidate_labels: politique étrangère, Europe, élections, affaires, politique
multi_class: true
- text: لمن تصوت في 2020؟
candidate_labels: السياسة الخارجية, أوروبا, الانتخابات, الأعمال, السياسة
multi_class: true
- text: 2020'de kime oy vereceksiniz?
candidate_labels: dış politika, Avrupa, seçimler, ticaret, siyaset
multi_class: true
---
# xlm-roberta-large-xnli
## Model Description
This model takes [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tunes it on a combination of NLI data in 15 languages. It is intended to be used for zero-shot text classification, such as with the Hugging Face [ZeroShotClassificationPipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline).
## Intended Usage
This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on XNLI, which is a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:
- English
- French
- Spanish
- German
- Greek
- Bulgarian
- Russian
- Turkish
- Arabic
- Vietnamese
- Thai
- Chinese
- Hindi
- Swahili
- Urdu
Since the base model was pre-trained trained on 100 different languages, the
model has shown some effectiveness in languages beyond those listed above as
well. See the full list of pre-trained languages in appendix A of the
[XLM Roberata paper](https://arxiv.org/abs/1911.02116)
For English-only classification, it is recommended to use
[bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or
[a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="joeddav/xlm-roberta-large-xnli")
```
You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to
classify in another:
```python
# we will classify the Russian translation of, "Who are you voting for in 2020?"
sequence_to_classify = "За кого вы голосуете в 2020 году?"
# we can specify candidate labels in Russian or any other language above:
candidate_labels = ["Europe", "public health", "politics"]
classifier(sequence_to_classify, candidate_labels)
# {'labels': ['politics', 'Europe', 'public health'],
# 'scores': [0.9048484563827515, 0.05722189322113991, 0.03792969882488251],
# 'sequence': 'За кого вы голосуете в 2020 году?'}
```
The default hypothesis template is the English, `This text is {}`. If you are working strictly within one language, it
may be worthwhile to translate this to the language you are working with:
```python
sequence_to_classify = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europa", "salud pública", "política"]
hypothesis_template = "Este ejemplo es {}."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# {'labels': ['política', 'Europa', 'salud pública'],
# 'scores': [0.9109585881233215, 0.05954807624220848, 0.029493311420083046],
# 'sequence': '¿A quién vas a votar en 2020?'}
```
#### With manual PyTorch
```python
# pose sequence as a NLI premise and label as a hypothesis
from transformers import AutoModelForSequenceClassification, AutoTokenizer
nli_model = AutoModelForSequenceClassification.from_pretrained('joeddav/xlm-roberta-large-xnli')
tokenizer = AutoTokenizer.from_pretrained('joeddav/xlm-roberta-large-xnli')
premise = sequence
hypothesis = f'This example is {label}.'
# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
truncation_strategy='only_first')
logits = nli_model(x.to(device))[0]
# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
```
## Training
This model was pre-trained on set of 100 languages, as described in
[the original paper](https://arxiv.org/abs/1911.02116). It was then fine-tuned on the task of NLI on the concatenated
MNLI train set and the XNLI validation and test sets. Finally, it was trained for one additional epoch on only XNLI
data where the translations for the premise and hypothesis are shuffled such that the premise and hypothesis for
each example come from the same original English example but the premise and hypothesis are of different languages.
| null |
Non_BioNLP
|
# xlm-roberta-large-xnli
## Model Description
This model takes [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tunes it on a combination of NLI data in 15 languages. It is intended to be used for zero-shot text classification, such as with the Hugging Face [ZeroShotClassificationPipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline).
## Intended Usage
This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on XNLI, which is a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:
- English
- French
- Spanish
- German
- Greek
- Bulgarian
- Russian
- Turkish
- Arabic
- Vietnamese
- Thai
- Chinese
- Hindi
- Swahili
- Urdu
Since the base model was pre-trained trained on 100 different languages, the
model has shown some effectiveness in languages beyond those listed above as
well. See the full list of pre-trained languages in appendix A of the
[XLM Roberata paper](https://arxiv.org/abs/1911.02116)
For English-only classification, it is recommended to use
[bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or
[a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="joeddav/xlm-roberta-large-xnli")
```
You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to
classify in another:
```python
# we will classify the Russian translation of, "Who are you voting for in 2020?"
sequence_to_classify = "За кого вы голосуете в 2020 году?"
# we can specify candidate labels in Russian or any other language above:
candidate_labels = ["Europe", "public health", "politics"]
classifier(sequence_to_classify, candidate_labels)
# {'labels': ['politics', 'Europe', 'public health'],
# 'scores': [0.9048484563827515, 0.05722189322113991, 0.03792969882488251],
# 'sequence': 'За кого вы голосуете в 2020 году?'}
```
The default hypothesis template is the English, `This text is {}`. If you are working strictly within one language, it
may be worthwhile to translate this to the language you are working with:
```python
sequence_to_classify = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europa", "salud pública", "política"]
hypothesis_template = "Este ejemplo es {}."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# {'labels': ['política', 'Europa', 'salud pública'],
# 'scores': [0.9109585881233215, 0.05954807624220848, 0.029493311420083046],
# 'sequence': '¿A quién vas a votar en 2020?'}
```
#### With manual PyTorch
```python
# pose sequence as a NLI premise and label as a hypothesis
from transformers import AutoModelForSequenceClassification, AutoTokenizer
nli_model = AutoModelForSequenceClassification.from_pretrained('joeddav/xlm-roberta-large-xnli')
tokenizer = AutoTokenizer.from_pretrained('joeddav/xlm-roberta-large-xnli')
premise = sequence
hypothesis = f'This example is {label}.'
# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
truncation_strategy='only_first')
logits = nli_model(x.to(device))[0]
# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
```
## Training
This model was pre-trained on set of 100 languages, as described in
[the original paper](https://arxiv.org/abs/1911.02116). It was then fine-tuned on the task of NLI on the concatenated
MNLI train set and the XNLI validation and test sets. Finally, it was trained for one additional epoch on only XNLI
data where the translations for the premise and hypothesis are shuffled such that the premise and hypothesis for
each example come from the same original English example but the premise and hypothesis are of different languages.
|
{"datasets": ["multi_nli", "xnli"], "language": ["multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur"], "license": "mit", "pipeline_tag": "zero-shot-classification", "tags": ["text-classification", "pytorch", "tensorflow"], "widget": [{"text": "За кого вы голосуете в 2020 году?", "candidate_labels": "politique étrangère, Europe, élections, affaires, politique", "multi_class": true}, {"text": "لمن تصوت في 2020؟", "candidate_labels": "السياسة الخارجية, أوروبا, الانتخابات, الأعمال, السياسة", "multi_class": true}, {"text": "2020'de kime oy vereceksiniz?", "candidate_labels": "dış politika, Avrupa, seçimler, ticaret, siyaset", "multi_class": true}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"TRANSLATION"
] | 45,309 |
Lvxue/distilled-mt5-small-test
|
Lvxue
|
text2text-generation
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-10T05:05:11Z |
2022-08-17T02:26:00+00:00
| 11 | 0 |
---
datasets:
- wmt16
language:
- en
- ro
license: apache-2.0
metrics:
- bleu
tags:
- generated_from_trainer
model-index:
- name: distilled-mt5-small-test
results:
- task:
type: translation
name: Translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- type: bleu
value: 7.5082
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8241
- Bleu: 7.5082
- Gen Len: 44.0405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8241
- Bleu: 7.5082
- Gen Len: 44.0405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
{"datasets": ["wmt16"], "language": ["en", "ro"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilled-mt5-small-test", "results": [{"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "wmt16 ro-en", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 7.5082, "name": "Bleu"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 45,310 |
Jotschi/Mistral-7B-v0.1-coco-caption-de
|
Jotschi
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-03-17T20:47:07Z |
2024-03-22T19:34:28+00:00
| 17 | 0 |
---
base_model: mistralai/Mistral-7B-v0.1
license: apache-2.0
---
# Model Card for Mistral7B-v0.1-coco-caption-de
This model is a fine-tuned model of the Mistral7B-v0.1 completion model and meant to produce german COCO like captions.
The [coco-karpathy-opus-de dataset](https://huggingface.co/datasets/Jotschi/coco-karpathy-opus-de) was used to tune the model for german image caption generation.
## Model Details
### Prompt format
The completion model is trained with the prompt prefix `Bildbeschreibung: `
Examples:
```xml
>>> Bildbeschreibung:
2 Hunde sitzen auf einer Bank neben einer Pflanze
>>> Bildbeschreibung: Wasser
fall und Felsen vor dem Gebäude mit Blick auf den Fluss.
>>> Bildbeschreibung: Ein grünes Auto mit roten
Reflektoren parkte auf dem Parkplatz.
```
### Model Description
- **Developed by:** [Jotschi](https://huggingface.co/Jotschi)
- **License:** [Apache License](https://www.apache.org/licenses/LICENSE-2.0)
- **Finetuned from model:** [Mistral7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
## Uses
The model is meant to be used in conjunction with a [BLIP2](https://huggingface.co/docs/transformers/model_doc/blip-2) Q-Former to enable image captioning, visual question answering (VQA) and chat-like conversations.
## Training Details
The preliminary [training script](https://github.com/Jotschi/lavis-experiments/tree/master/mistral-deepspeed) uses PEFT and DeepSpeed to execute the traininng.
### Training Data
* [coco-karpathy-opus-de dataset](https://huggingface.co/datasets/Jotschi/coco-karpathy-opus-de)
### Training Procedure
The model was trained using PEFT 4Bit Q-LoRA with the following parameters:
* rank: 256
* alpha: 16
* steps: 8500
* bf16: True
* lr_scheduler_type: cosine
* warmup_ratio: 0.03
* gradient accumulation steps: 2
* batch size: 4
* Input sequence length: 512
* Learning Rate: 2.0e-5
#### Postprocessing
The merged model was saved using `PeftModel` API.
### Framework versions
- PEFT 0.8.2
| null |
Non_BioNLP
|
# Model Card for Mistral7B-v0.1-coco-caption-de
This model is a fine-tuned model of the Mistral7B-v0.1 completion model and meant to produce german COCO like captions.
The [coco-karpathy-opus-de dataset](https://huggingface.co/datasets/Jotschi/coco-karpathy-opus-de) was used to tune the model for german image caption generation.
## Model Details
### Prompt format
The completion model is trained with the prompt prefix `Bildbeschreibung: `
Examples:
```xml
>>> Bildbeschreibung:
2 Hunde sitzen auf einer Bank neben einer Pflanze
>>> Bildbeschreibung: Wasser
fall und Felsen vor dem Gebäude mit Blick auf den Fluss.
>>> Bildbeschreibung: Ein grünes Auto mit roten
Reflektoren parkte auf dem Parkplatz.
```
### Model Description
- **Developed by:** [Jotschi](https://huggingface.co/Jotschi)
- **License:** [Apache License](https://www.apache.org/licenses/LICENSE-2.0)
- **Finetuned from model:** [Mistral7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
## Uses
The model is meant to be used in conjunction with a [BLIP2](https://huggingface.co/docs/transformers/model_doc/blip-2) Q-Former to enable image captioning, visual question answering (VQA) and chat-like conversations.
## Training Details
The preliminary [training script](https://github.com/Jotschi/lavis-experiments/tree/master/mistral-deepspeed) uses PEFT and DeepSpeed to execute the traininng.
### Training Data
* [coco-karpathy-opus-de dataset](https://huggingface.co/datasets/Jotschi/coco-karpathy-opus-de)
### Training Procedure
The model was trained using PEFT 4Bit Q-LoRA with the following parameters:
* rank: 256
* alpha: 16
* steps: 8500
* bf16: True
* lr_scheduler_type: cosine
* warmup_ratio: 0.03
* gradient accumulation steps: 2
* batch size: 4
* Input sequence length: 512
* Learning Rate: 2.0e-5
#### Postprocessing
The merged model was saved using `PeftModel` API.
### Framework versions
- PEFT 0.8.2
|
{"base_model": "mistralai/Mistral-7B-v0.1", "license": "apache-2.0"}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,311 |
sagui-nlp/debertinha-ptbr-xsmall-assin2-rte
|
sagui-nlp
| null |
[
"transformers",
"pytorch",
"pt",
"dataset:assin2",
"arxiv:2309.16844",
"license:mit",
"region:us"
] | 2023-09-29T16:44:28Z |
2023-10-02T13:45:06+00:00
| 30 | 1 |
---
datasets:
- assin2
language: pt
license: mit
inference: false
---
# DeBERTinha XSmall for Recognizing Textual Entailment
### **Labels**:
* 0 : There is no entailment between premise and hypothesis.
* 1 : There is entailment between premise and hypothesis.
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import numpy as np
import torch
from scipy.special import softmax
model_name = "sagui-nlp/debertinha-ptbr-xsmall-assin2-rte"
s1 = "Os homens estão cuidadosamente colocando as malas no porta-malas de um carro."
s2 = "Os homens estão colocando bagagens dentro do porta-malas de um carro."
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model_input = tokenizer(*([s1], [s2]), padding=True, return_tensors="pt")
with torch.no_grad():
output = model(**model_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) Label: {l} Score: {np.round(float(s), 4)}")
```
## Citation
```
@misc{campiotti2023debertinha,
title={DeBERTinha: A Multistep Approach to Adapt DebertaV3 XSmall for Brazilian Portuguese Natural Language Processing Task},
author={Israel Campiotti and Matheus Rodrigues and Yuri Albuquerque and Rafael Azevedo and Alyson Andrade},
year={2023},
eprint={2309.16844},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
# DeBERTinha XSmall for Recognizing Textual Entailment
### **Labels**:
* 0 : There is no entailment between premise and hypothesis.
* 1 : There is entailment between premise and hypothesis.
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import numpy as np
import torch
from scipy.special import softmax
model_name = "sagui-nlp/debertinha-ptbr-xsmall-assin2-rte"
s1 = "Os homens estão cuidadosamente colocando as malas no porta-malas de um carro."
s2 = "Os homens estão colocando bagagens dentro do porta-malas de um carro."
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model_input = tokenizer(*([s1], [s2]), padding=True, return_tensors="pt")
with torch.no_grad():
output = model(**model_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) Label: {l} Score: {np.round(float(s), 4)}")
```
## Citation
```
@misc{campiotti2023debertinha,
title={DeBERTinha: A Multistep Approach to Adapt DebertaV3 XSmall for Brazilian Portuguese Natural Language Processing Task},
author={Israel Campiotti and Matheus Rodrigues and Yuri Albuquerque and Rafael Azevedo and Alyson Andrade},
year={2023},
eprint={2309.16844},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"datasets": ["assin2"], "language": "pt", "license": "mit", "inference": false}
|
task
|
[
"TEXTUAL_ENTAILMENT"
] | 45,312 |
SEBIS/code_trans_t5_base_code_documentation_generation_php_transfer_learning_finetune
|
SEBIS
|
summarization
|
[
"transformers",
"pytorch",
"jax",
"t5",
"feature-extraction",
"summarization",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-06-23T04:40:47+00:00
| 132 | 0 |
---
tags:
- summarization
widget:
- text: 'public static function update ( $ table ) { if ( ! is_array ( $ table ) )
{ $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists
( $ table [ ''oldName'' ] ) ) { throw SchemaException :: tableDoesNotExist ( $
table [ ''oldName'' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable
( ) ; }'
---
# CodeTrans model for code documentation generation php
Pretrained model on programming language php using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the php function/method.
## Intended uses & limitations
The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/php/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 65,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
| null |
Non_BioNLP
|
# CodeTrans model for code documentation generation php
Pretrained model on programming language php using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the php function/method.
## Intended uses & limitations
The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/php/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 65,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
{"tags": ["summarization"], "widget": [{"text": "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"}]}
|
task
|
[
"SUMMARIZATION"
] | 45,313 |
LoneStriker/opus-v1.2-70b-4.65bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"axolotl",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | 2024-03-13T21:37:47Z |
2024-03-13T21:54:13+00:00
| 10 | 0 |
---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
---
# DreamGen Opus V1
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1.2-70b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Resources
- [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy.
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`.
- [Python code](example/prompt/format.py) to format the prompt correctly.
- Join the community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
<img src="/dreamgen/opus-v1.2-70b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
## Prompting
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but don't have to have) `names`, names are used to indicate the "active" character during role-play.
- There can be multiple subsequent message with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`, beware of the spaces after `names=` and after the `;`. This spacing leads to most natural tokenization for the names.
</details>
While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing related tasks as well as general assistance.
Here's how you can prompt the model for the following tasks
### Steerable Story-writing and Role-playing:
- [Story-writing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-writing)
- [Role-playing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-role-playing)
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
### Story plot summarization
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
#### Examples:
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 50 words</summary>
<blockquote>
Malygris, a powerful magician, summons his long-dead lover Nylissa from the grave using necromancy. However, upon seeing her, he realizes that she is not the same person he once knew, leading him to question the nature of love, memory, and the power of magic.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 200 words</summary>
<blockquote>
The story is a dark fantasy set in the city of Susran, where an ancient and powerful magician named Malygris dwells in a tower atop a conical hill. Malygris, who has mastered all forms of sorcery and necromancy, is plagued by a deep sense of ennui and melancholy. He seeks solace in the memory of Nylissa, a girl he once loved in his youth.
One day, Malygris decides to summon Nylissa's spirit from the grave using his necromantic powers. He performs the necessary rituals and incantations, and Nylissa appears before him as a ghostly apparition. However, as Malygris gazes upon her, he becomes increasingly disillusioned. He notices subtle changes in her appearance and behavior that do not match his memories of her. He begins to question whether this is truly the same Nylissa he once knew, or if it is just another illusion conjured by his magic.
As Malygris grapples with this uncertainty, he becomes increasingly despondent and disillusioned with life itself. He realizes that even his magic cannot bring back the past or make him truly happy. In the end, he dismisses Nylissa's spirit and returns to his lonely tower, resigned to his fate as an old and weary sorcerer.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 150 words</summary>
<blockquote>
The article discusses the recent human deaths from diseases transmitted by animals, specifically Alaskapox and bubonic plague. Dr. Leana Wen, an emergency physician and former Baltimore health commissioner, explains the various ways household pets can transmit diseases to humans. These include fecal-oral transmission, scratches and bites, and vector-borne diseases like ticks, fleas, and mosquitoes. Wen emphasizes the importance of vaccinating pets, maintaining cleanliness, and taking commonsense precautions to reduce disease transmission. She also advises that individuals who are immunocompromised, babies, and pregnant women should take additional care when interacting with animals.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 50 words</summary>
<blockquote>
A doctor discusses the risks of diseases transmitted from animals to humans, focusing on household pets like cats and dogs. They emphasize the importance of vaccination, hygiene, and regular veterinary care to prevent these illnesses.
</blockquote>
</details>
### Story character description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, set of characters.
- Output: A description of the characters.
#### Examples:
<details>
<summary>Description of the Malygris character from <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a> in 50 words</summary>
<blockquote>
Malygris is an ancient and powerful magician, characterized by his long white beard, cold green eyes, and a violet robe. He is a master of necromancy and possesses a vast knowledge of forbidden magic. Despite his immense power, he is plagued by a deep sense of melancholy and ennui.
</blockquote>
</details>
<details>
<summary>Description of the prince character from <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a> in 50 words</summary>
<blockquote>
The prince is a devoted and determined individual, driven by his love for his late queen. He is described as a graceful youth in the beginning, but as he grows older, his dedication to the Pearl of Love transforms him into a grave and intent man. His sense of color and design evolves over time, reflecting his growing maturity and artistic vision.
</blockquote>
</details>
### Story style description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
- Output: A description the style of the story.
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a></summary>
<blockquote>
The story is written in a descriptive and evocative style, with a focus on the atmosphere and setting. The author uses vivid and detailed language to create a sense of place and time, and employs a variety of literary devices such as similes, metaphors, and allusions to enhance the mood and tone of the narrative. The sentence structure is varied, with a mix of short and long sentences that create a rhythmic flow to the prose.
</blockquote>
</details>
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a></summary>
<blockquote>
The writing style is descriptive and evocative, with a focus on the beauty and grandeur of the Pearl of Love. The author uses vivid imagery and sensory details to create a rich and immersive atmosphere. The tone is reverential and contemplative, reflecting the prince's deep love for his queen and his dedication to creating a lasting monument to her memory.
</blockquote>
</details>
### Story description to chapters
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
### And more...
## Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model is should be compatible with any software that supports the base model, but beware of prompting and tokenization.
I recommend using these model versions:
- 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b)
- 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq)
- 34B: [no quant (opus-v1.2-70b)](https://huggingface.co/dreamgen/opus-v1.2-70b) or [awq (opus-v1.2-70b-awq)](https://huggingface.co/dreamgen/opus-v1.2-70b-awq)
### Running on DreamGen.com (free)
You can run the models on [dreamgen.com](https://dreamgen.com) for free — you can use the built-in UI for story-writing & role-playing, or use [the API](https://dreamgen.com/docs/api).
### Running Locally
- **Make sure your prompt is as close as possible to the Opus V1**
- Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly.
- [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1)
- [Read the prompt formatting code](example/prompt/format.py)
- Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly
- **vLLM**
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab to do role-play with the 7B model, it should fit on the T4 GPU.
- [Code](example/prompt/interactive.py): This is simple script for interactive chat for one hard-coded scenario.
- **SillyTavern**
- [Settings](https://huggingface.co/dreamgen/opus-v1.2-70b/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti
- [Settings screenshot](configs/silly_tavern/settings_screenshot.webp)
- Make sure to unselect `Skip special tokens`, otherwise it won't work
- This is just an attempt at approximating the Opus V1 prompt, it won't be perfect
- **LM Studio**
- [Config](configs/lmstudio/preset.json)
- Just like ChatML, just changed "assistant" to "text" role.
- **There's a bug** in LM Studio if you delete a message or click "Continue", [see here for details](https://discord.com/channels/1110598183144399058/1212665261128417280/1212665261128417280).
- **HuggingFace**
- [Chat template](tokenizer_config.json#L51)
- Just like ChatML, just changed "assistant" to "text" role.
## Known Issues
- **34B repetition**:
- The 34B sometimes gets stuck repeating the same word, or synonyms. This seems to be a common problem across various Yi 34B fine-tunes.
- **GGUF**:
- The tokenization might be messed up. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens. Also llama.cpp may not tokenize correctly (the Yi tokenizer is subtly different from the Llama 2 tokenizer).
## License
- This model is intended for personal use only, other use is not permitted.
| null |
Non_BioNLP
|
# DreamGen Opus V1
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1.2-70b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Resources
- [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy.
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`.
- [Python code](example/prompt/format.py) to format the prompt correctly.
- Join the community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
<img src="/dreamgen/opus-v1.2-70b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
## Prompting
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but don't have to have) `names`, names are used to indicate the "active" character during role-play.
- There can be multiple subsequent message with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`, beware of the spaces after `names=` and after the `;`. This spacing leads to most natural tokenization for the names.
</details>
While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing related tasks as well as general assistance.
Here's how you can prompt the model for the following tasks
### Steerable Story-writing and Role-playing:
- [Story-writing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-writing)
- [Role-playing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-role-playing)
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
### Story plot summarization
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
#### Examples:
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 50 words</summary>
<blockquote>
Malygris, a powerful magician, summons his long-dead lover Nylissa from the grave using necromancy. However, upon seeing her, he realizes that she is not the same person he once knew, leading him to question the nature of love, memory, and the power of magic.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 200 words</summary>
<blockquote>
The story is a dark fantasy set in the city of Susran, where an ancient and powerful magician named Malygris dwells in a tower atop a conical hill. Malygris, who has mastered all forms of sorcery and necromancy, is plagued by a deep sense of ennui and melancholy. He seeks solace in the memory of Nylissa, a girl he once loved in his youth.
One day, Malygris decides to summon Nylissa's spirit from the grave using his necromantic powers. He performs the necessary rituals and incantations, and Nylissa appears before him as a ghostly apparition. However, as Malygris gazes upon her, he becomes increasingly disillusioned. He notices subtle changes in her appearance and behavior that do not match his memories of her. He begins to question whether this is truly the same Nylissa he once knew, or if it is just another illusion conjured by his magic.
As Malygris grapples with this uncertainty, he becomes increasingly despondent and disillusioned with life itself. He realizes that even his magic cannot bring back the past or make him truly happy. In the end, he dismisses Nylissa's spirit and returns to his lonely tower, resigned to his fate as an old and weary sorcerer.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 150 words</summary>
<blockquote>
The article discusses the recent human deaths from diseases transmitted by animals, specifically Alaskapox and bubonic plague. Dr. Leana Wen, an emergency physician and former Baltimore health commissioner, explains the various ways household pets can transmit diseases to humans. These include fecal-oral transmission, scratches and bites, and vector-borne diseases like ticks, fleas, and mosquitoes. Wen emphasizes the importance of vaccinating pets, maintaining cleanliness, and taking commonsense precautions to reduce disease transmission. She also advises that individuals who are immunocompromised, babies, and pregnant women should take additional care when interacting with animals.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 50 words</summary>
<blockquote>
A doctor discusses the risks of diseases transmitted from animals to humans, focusing on household pets like cats and dogs. They emphasize the importance of vaccination, hygiene, and regular veterinary care to prevent these illnesses.
</blockquote>
</details>
### Story character description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, set of characters.
- Output: A description of the characters.
#### Examples:
<details>
<summary>Description of the Malygris character from <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a> in 50 words</summary>
<blockquote>
Malygris is an ancient and powerful magician, characterized by his long white beard, cold green eyes, and a violet robe. He is a master of necromancy and possesses a vast knowledge of forbidden magic. Despite his immense power, he is plagued by a deep sense of melancholy and ennui.
</blockquote>
</details>
<details>
<summary>Description of the prince character from <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a> in 50 words</summary>
<blockquote>
The prince is a devoted and determined individual, driven by his love for his late queen. He is described as a graceful youth in the beginning, but as he grows older, his dedication to the Pearl of Love transforms him into a grave and intent man. His sense of color and design evolves over time, reflecting his growing maturity and artistic vision.
</blockquote>
</details>
### Story style description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
- Output: A description the style of the story.
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a></summary>
<blockquote>
The story is written in a descriptive and evocative style, with a focus on the atmosphere and setting. The author uses vivid and detailed language to create a sense of place and time, and employs a variety of literary devices such as similes, metaphors, and allusions to enhance the mood and tone of the narrative. The sentence structure is varied, with a mix of short and long sentences that create a rhythmic flow to the prose.
</blockquote>
</details>
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a></summary>
<blockquote>
The writing style is descriptive and evocative, with a focus on the beauty and grandeur of the Pearl of Love. The author uses vivid imagery and sensory details to create a rich and immersive atmosphere. The tone is reverential and contemplative, reflecting the prince's deep love for his queen and his dedication to creating a lasting monument to her memory.
</blockquote>
</details>
### Story description to chapters
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
### And more...
## Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model is should be compatible with any software that supports the base model, but beware of prompting and tokenization.
I recommend using these model versions:
- 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b)
- 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq)
- 34B: [no quant (opus-v1.2-70b)](https://huggingface.co/dreamgen/opus-v1.2-70b) or [awq (opus-v1.2-70b-awq)](https://huggingface.co/dreamgen/opus-v1.2-70b-awq)
### Running on DreamGen.com (free)
You can run the models on [dreamgen.com](https://dreamgen.com) for free — you can use the built-in UI for story-writing & role-playing, or use [the API](https://dreamgen.com/docs/api).
### Running Locally
- **Make sure your prompt is as close as possible to the Opus V1**
- Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly.
- [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1)
- [Read the prompt formatting code](example/prompt/format.py)
- Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly
- **vLLM**
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab to do role-play with the 7B model, it should fit on the T4 GPU.
- [Code](example/prompt/interactive.py): This is simple script for interactive chat for one hard-coded scenario.
- **SillyTavern**
- [Settings](https://huggingface.co/dreamgen/opus-v1.2-70b/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti
- [Settings screenshot](configs/silly_tavern/settings_screenshot.webp)
- Make sure to unselect `Skip special tokens`, otherwise it won't work
- This is just an attempt at approximating the Opus V1 prompt, it won't be perfect
- **LM Studio**
- [Config](configs/lmstudio/preset.json)
- Just like ChatML, just changed "assistant" to "text" role.
- **There's a bug** in LM Studio if you delete a message or click "Continue", [see here for details](https://discord.com/channels/1110598183144399058/1212665261128417280/1212665261128417280).
- **HuggingFace**
- [Chat template](tokenizer_config.json#L51)
- Just like ChatML, just changed "assistant" to "text" role.
## Known Issues
- **34B repetition**:
- The 34B sometimes gets stuck repeating the same word, or synonyms. This seems to be a common problem across various Yi 34B fine-tunes.
- **GGUF**:
- The tokenization might be messed up. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens. Also llama.cpp may not tokenize correctly (the Yi tokenizer is subtly different from the Llama 2 tokenizer).
## License
- This model is intended for personal use only, other use is not permitted.
|
{"language": ["en"], "pipeline_tag": "text-generation", "tags": ["unsloth", "axolotl"]}
|
task
|
[
"SUMMARIZATION"
] | 45,314 |
Triangle104/granite-3.1-2b-base-Q6_K-GGUF
|
Triangle104
| null |
[
"transformers",
"gguf",
"language",
"granite-3.1",
"llama-cpp",
"gguf-my-repo",
"base_model:ibm-granite/granite-3.1-2b-base",
"base_model:quantized:ibm-granite/granite-3.1-2b-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-12-26T11:55:23Z |
2024-12-26T12:29:33+00:00
| 3 | 0 |
---
base_model: ibm-granite/granite-3.1-2b-base
library_name: transformers
license: apache-2.0
tags:
- language
- granite-3.1
- llama-cpp
- gguf-my-repo
---
# Triangle104/granite-3.1-2b-base-Q6_K-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3.1-2b-base`](https://huggingface.co/ibm-granite/granite-3.1-2b-base) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.1-2b-base) for more details on the model.
---
Model details:
-
Granite-3.1-2B-Base extends the context length of Granite-3.0-2B-Base
from 4K to 128K using a progressive training strategy by increasing the
supported context length in increments while adjusting RoPE theta until
the model has successfully adapted to desired length of 128K. This
long-context pre-training stage was performed using approximately 500B
tokens.
Developers: Granite Team, IBM
GitHub Repository: ibm-granite/granite-3.1-language-models
Website: Granite Docs
Paper: Granite 3.1 Language Models (coming soon)
Release Date: December 18th, 2024
License: Apache 2.0
Supported Languages:
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech,
Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.1
models for languages beyond these 12 languages.
Intended Use:
Prominent use cases of LLMs in text-to-text generation include
summarization, text classification, extraction, question-answering, and
other long-context tasks. All Granite Base models are able to handle
these tasks as they were trained on a large amount of data from various
domains. Moreover, they can serve as baseline to create specialized
models for specific application scenarios.
Generation:
This is a simple example of how to use Granite-3.1-2B-Base model.
Install the following libraries:
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
Then, copy the code snippet below to run the example.
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "auto"
model_path = "ibm-granite/granite-3.1-2b-base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "Where is the Thomas J. Watson Research Center located?"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens,
max_length=4000)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
Model Architecture:
Granite-3.1-2B-Base is based on a decoder-only dense transformer
architecture. Core components of this architecture are: GQA and RoPE,
MLP with SwiGLU, RMSNorm, and shared input/output embeddings.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/granite-3.1-2b-base-Q6_K-GGUF --hf-file granite-3.1-2b-base-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/granite-3.1-2b-base-Q6_K-GGUF --hf-file granite-3.1-2b-base-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/granite-3.1-2b-base-Q6_K-GGUF --hf-file granite-3.1-2b-base-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/granite-3.1-2b-base-Q6_K-GGUF --hf-file granite-3.1-2b-base-q6_k.gguf -c 2048
```
| null |
Non_BioNLP
|
# Triangle104/granite-3.1-2b-base-Q6_K-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3.1-2b-base`](https://huggingface.co/ibm-granite/granite-3.1-2b-base) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.1-2b-base) for more details on the model.
---
Model details:
-
Granite-3.1-2B-Base extends the context length of Granite-3.0-2B-Base
from 4K to 128K using a progressive training strategy by increasing the
supported context length in increments while adjusting RoPE theta until
the model has successfully adapted to desired length of 128K. This
long-context pre-training stage was performed using approximately 500B
tokens.
Developers: Granite Team, IBM
GitHub Repository: ibm-granite/granite-3.1-language-models
Website: Granite Docs
Paper: Granite 3.1 Language Models (coming soon)
Release Date: December 18th, 2024
License: Apache 2.0
Supported Languages:
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech,
Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.1
models for languages beyond these 12 languages.
Intended Use:
Prominent use cases of LLMs in text-to-text generation include
summarization, text classification, extraction, question-answering, and
other long-context tasks. All Granite Base models are able to handle
these tasks as they were trained on a large amount of data from various
domains. Moreover, they can serve as baseline to create specialized
models for specific application scenarios.
Generation:
This is a simple example of how to use Granite-3.1-2B-Base model.
Install the following libraries:
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
Then, copy the code snippet below to run the example.
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "auto"
model_path = "ibm-granite/granite-3.1-2b-base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "Where is the Thomas J. Watson Research Center located?"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens,
max_length=4000)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
Model Architecture:
Granite-3.1-2B-Base is based on a decoder-only dense transformer
architecture. Core components of this architecture are: GQA and RoPE,
MLP with SwiGLU, RMSNorm, and shared input/output embeddings.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/granite-3.1-2b-base-Q6_K-GGUF --hf-file granite-3.1-2b-base-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/granite-3.1-2b-base-Q6_K-GGUF --hf-file granite-3.1-2b-base-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/granite-3.1-2b-base-Q6_K-GGUF --hf-file granite-3.1-2b-base-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/granite-3.1-2b-base-Q6_K-GGUF --hf-file granite-3.1-2b-base-q6_k.gguf -c 2048
```
|
{"base_model": "ibm-granite/granite-3.1-2b-base", "library_name": "transformers", "license": "apache-2.0", "tags": ["language", "granite-3.1", "llama-cpp", "gguf-my-repo"]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | 45,315 |
google/pegasus-reddit_tifu
|
google
|
summarization
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-01-24T16:42:44+00:00
| 135 | 3 |
---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly sample a gap sentence ratio between 15% and 45%.
- importance sentences are sampled using a 20% uniform noise to importance scores.
- the sentencepiece tokenizer is updated to be able to encode newline character.
(*) the numbers of wikihow and big_patent datasets are not comparable because of change in tokenization and data:
- wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.
- we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly sample a gap sentence ratio between 15% and 45%.
- importance sentences are sampled using a 20% uniform noise to importance scores.
- the sentencepiece tokenizer is updated to be able to encode newline character.
(*) the numbers of wikihow and big_patent datasets are not comparable because of change in tokenization and data:
- wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.
- we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "tags": ["summarization"]}
|
task
|
[
"SUMMARIZATION"
] | 45,316 |
Omerhan/checkpoint-120-aixr
|
Omerhan
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:19931",
"loss:MultipleNegativesRankingLoss",
"tr",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:finetune:intfloat/multilingual-e5-large-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-04T12:13:32Z |
2025-02-04T12:14:48+00:00
| 5 | 0 |
---
base_model: intfloat/multilingual-e5-large-instruct
language:
- tr
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:19931
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: En yakın restoranı bulun.
sentences:
- '"Hangi parkurlarda yürüyüş yapmayı tercih ediyorsun?"'
- Brooklyn, NY
- 'En yakın restoranı bulmak için aşağıdaki yöntemleri kullanabilirsiniz:
1. **Harita Uygulamaları**: Google Maps veya Apple Maps gibi harita uygulamaları,
yakınınızdaki restoranları listeleyebilir. Bu uygulamalarda "restoran" araması
yaparak konumunuza en yakın seçenekleri görüntüleyebilirsiniz.
2. **Yelp veya TripAdvisor**: Bu uygulamalar, kullanıcı yorumları ve puanlamalarına
göre restoranları sıralayarak en iyi seçenekleri bulmanızı sağlar.
3. **OpenTable**: Bu uygulama, özellikle restoran rezervasyonu yapmak isterseniz
faydalı olabilir.
Brooklyn, NY''de birçok restoran seçeneği bulunmaktadır, bu yüzden belirli bir
tür mutfak veya yemek arıyorsanız aramanızı daraltmak iyi bir strateji olabilir.'
- source_sentence: Verilen cümlede tek tırnak işaretlerini (') çift tırnak işaretleriyle
(") değiştirin.
sentences:
- "Verilen doğrusal sistemi çözmek için iki denklem arasında eliminasyon veya yerine\
\ koyma yöntemlerinden birini kullanabiliriz. Burada yerine koyma yöntemini kullanarak\
\ çözelim.\n\n1. İlk denklemden y'yi yalnız bırakın:\n\n\\[ 2x + y = 5 \\] \n\n\
Buradan, \\[ y = 5 - 2x \\] olur.\n\n2. Bulduğumuz \\( y = 5 - 2x \\) ifadesini\
\ ikinci denklemde yerine koyun:\n\n\\[ -x + 3y = 4 \\]\n\n\\[ -x + 3(5 - 2x)\
\ = 4 \\]\n\n3. Dağıtımı yapalım:\n\n\\[ -x + 15 - 6x = 4 \\]\n\n4. Benzer terimleri\
\ toplayalım:\n\n\\[ -7x + 15 = 4 \\]\n\n5. Her iki taraftan 15 çıkarın:\n\n\\\
[ -7x = 4 - 15 \\]\n\n\\[ -7x = -11 \\]\n\n6. Her iki tarafı da -7'ye bölün:\n\
\n\\[ x = \\frac{-11}{-7} \\]\n\n\\[ x = \\frac{11}{7} \\]\n\n7. Bulduğumuz \\\
( x \\) değerini \\( y = 5 - 2x \\) ifadesinde yerine koyun:\n\n\\[ y = 5 - 2\\\
left(\\frac{11}{7}\\right) \\]\n\n\\[ y = 5 - \\frac{22}{7} \\]\n\n\\[ y = \\\
frac{35}{7} - \\frac{22}{7} \\]\n\n\\[ y = \\frac{13}{7} \\]\n\nBu durumda doğrusal\
\ sistemin çözümü:\n\n\\[ x = \\frac{11}{7} \\] ve \\[ y = \\frac{13}{7} \\] olur."
- ABD'de, KIPP bir "kamu-özel ortaklığıdır".
- ABD'de, KIPP bir 'kamu-özel ortaklığıdır'.
- source_sentence: 'Bir saatin fiyatı 50 dolardır.
(50 dolarlık saat)'
sentences:
- Etkinliğe katıldı.
- Ev sahibi takımın zaferi
- 'Sanırım iki farklı konudan bahsediyorsunuz: biri bir saat fiyatı, diğeri ise
ev sahibi takımın zaferi. Bu konulardan hangisi hakkında daha fazla bilgi almak
istersiniz? Ya da her iki konuyu kapsayan belirli bir sorunuz var mı?'
- source_sentence: Yakındaki bir restoran bulun, <insert cuisine> mutfağı sunan.
sentences:
- 'Kuadratik denklemi çözmek için diskriminant yöntemini kullanabiliriz. Genel formdaki
bir kuadratik denklem \( ax^2 + bx + c = 0 \) şeklindedir. Burada, \( a = 3 \),
\( b = 7 \) ve \( c = -4 \) olarak verilmiştir.
Diskriminant (\( \Delta \)), \( b^2 - 4ac \) formülü ile hesaplanır:
\[
\Delta = 7^2 - 4 \cdot 3 \cdot (-4) = 49 + 48 = 97
\]
Diskriminant pozitif olduğundan, iki gerçek ve farklı kök vardır. Kuadratik formül
kullanılarak kökler bulunabilir:
\[
x = \frac{-b \pm \sqrt{\Delta}}{2a}
\]
Bu formüle bağlı olarak kökleri bulalım:
\[
x_1 = \frac{-7 + \sqrt{97}}{6}
\]
\[
x_2 = \frac{-7 - \sqrt{97}}{6}
\]
Bu, orijinal denklemin iki çözümünü verir:
\[
x_1 = \frac{-7 + \sqrt{97}}{6} \quad \text{ve} \quad x_2 = \frac{-7 - \sqrt{97}}{6}
\]'
- 'Üzgünüm, yakındaki restoranları veya mekanları aramak gibi anlık coğrafi konum
verilerine erişimim yok. Ancak, İtalyan mutfağından bir yemek tarifi verebilirim!
İşte klasik bir İtalyan tarifi: Spaghetti Aglio e Olio.
### Spaghetti Aglio e Olio Tarifi
#### Malzemeler:
- 400 gram spagetti
- 4-5 diş sarımsak
- 1/2 çay bardağı zeytinyağı
- 1/2 çay kaşığı kırmızı pul biber (isteğe bağlı)
- Tuz
- Karabiber
- Yarım demet taze maydanoz
- Üzeri için rendelenmiş Parmesan peyniri (isteğe bağlı)
#### Yapılışı:
1. **Spagettiyi Haşlayın**: Büyük bir tencerede suyu kaynatın. Kaynayan suya yeterince
tuz ekleyin ve spagettiyi pakette belirtilen süreye göre al dente (dişe gelir)
şekilde haşlayın. Haşlandıktan sonra makarnanın suyundan bir bardak ayırın ve
makarnayı süzün.
2. **Sarımsak ve Zeytinyağını Hazırlayın**: Sarımsakları ince ince dilimleyin.
Geniş bir tavada, düşük ateşte zeytinyağını ısıtın. Sarımsakları ekleyin ve altın
rengi alana kadar yavaşça soteleyin. Eğer biraz acı seviyorsanız kırmızı pul biberi
de bu aşamada ekleyebilirsiniz.
3. **Makarnayı Karıştırın**: Haşlanmış ve süzülmüş makarnayı tavadaki sarımsaklı
zeytinyağının içine ekleyin. Ayırdığınız makarna suyundan da azar azar ekleyerek
karıştırın. Tuz ve karabiber ile tatlandırın.
4. **Servis Edin**: İnce kıyılmış taze maydanozu makarnanın üzerine serpin ve
karıştırın. İsterseniz üzerine rendelenmiş Parmesan peyniri de ekleyerek servis
yapın.
Afiyet olsun! Bu basit ama lezzetli İtalyan yemeği, hızlı bir akşam yemeği için
harika bir seçenektir.'
- <ekle yemek tarifi> - İtalyan
- source_sentence: Bu cümleyi etken bir yapı kullanarak yeniden yazın.
sentences:
- Kaptan tekneyi çarptı.
- "Verilen listeyi sıralamak için basit bir sıralama algoritmasının nasıl çalıştığını\
\ açıklayayım. Örnek olarak \"Seçmeli Sıralama\" (Selection Sort) algoritmasını\
\ kullanabiliriz. Bu algoritma, her adımda dizinin geri kalan kısmından en küçük\
\ (veya en büyük) öğeyi seçip, bunu sıralanmamış bölümün başına yerleştirir. İşte\
\ adım adım nasıl çalıştığı:\n\n1. **Başla**: İlk öğeden başlayarak liste üzerinde\
\ ilerle.\n\n2. **En küçük elemanı bul**: \n - Mevcut konumdan başlayarak geri\
\ kalan elemanlar arasında en küçük değeri bul.\n - Listenin ilk konumunda (0\
\ indeksli) başla ve bu elemanı hafızada \"şimdilik en küçük\" olarak tut.\n\n\
3. **Yer değiştir**:\n - Geri kalan elemanlar arasında bulunmuş olan en küçük\
\ değeri, mevcut konumda olan elemanla yer değiştir.\n\n4. **Sonraki pozisyona\
\ geç**:\n - Bir konum ilerle ve 2. adımdan itibaren devam et.\n\n5. **Bittiğinde**:\
\ Liste baştan sona kontrol edildiğinde ve tüm elemanlar sırasıyla yerleştirildiğinde,\
\ işlem tamamlanmıştır.\n\nVerilen örnek [4, 1, 6, 5, 2] listesi üzerinden bunu\
\ adım adım uygulayalım:\n\n- Başlangıç listesi: [4, 1, 6, 5, 2]\n\n**1. Adım:**\n\
- En küçük eleman 1, onu ilk pozisyondaki 4 ile değiştir.\n- Liste: [1, 4, 6,\
\ 5, 2]\n\n**2. Adım:**\n- Kalan liste [4, 6, 5, 2] içinde en küçük eleman 2,\
\ bunu 4 ile değiştir.\n- Liste: [1, 2, 6, 5, 4]\n\n**3. Adım:**\n- Kalan liste\
\ [6, 5, 4] içinde en küçük eleman 4, bunu 6 ile değiştir.\n- Liste: [1, 2, 4,\
\ 5, 6]\n\n**4. Adım:**\n- Kalan liste [5, 6] içinde en küçük eleman 5, bu zaten\
\ yerinde.\n- Liste: [1, 2, 4, 5, 6]\n\n**5. Adım:**\n- Tek kalan eleman 6, o\
\ da yerinde. Sıralama tamamlandı.\n\nBu algoritmanın zaman karmaşıklığı \\(O(n^2)\\\
) olduğundan, daha büyük listelerde daha verimli bir algoritma kullanmak isteyebilirsiniz,\
\ örneğin Hızlı Sıralama (Quick Sort) veya Birleştirme Sıralaması (Merge Sort).\
\ Ancak, küçük listeler için Seçmeli Sıralama anlaşılır ve yeterlidir."
- Tekne kaptan tarafından çarpıldı.
---
# intfloat-fine-tuned
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision c9e87c786ffac96aeaeb42863276930883923ecb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omerhan/checkpoint-120-aixr")
# Run inference
sentences = [
'Bu cümleyi etken bir yapı kullanarak yeniden yazın.',
'Kaptan tekneyi çarptı.',
'Tekne kaptan tarafından çarpıldı.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 19,931 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.25 tokens</li><li>max: 160 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 128.76 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 16.82 tokens</li><li>max: 185 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------|
| <code>Tek farklı olanı belirleyin.</code> | <code>Tek farklı olan Telegram'dır. Bunun sebebi, Twitter ve Instagram'ın ana odaklarının sosyal medya platformları olması ve daha çok kullanıcıların içerik paylaşımına odaklanması iken, Telegram'ın öncelikli olarak bir mesajlaşma uygulaması olmasıdır. Telegram, kullanıcılar arasında mesajlaşma, medya dosyaları paylaşma ve grup sohbetleri oluşturma gibi özelliklerle öne çıkar.</code> | <code>Twitter, Instagram, Telegram</code> |
| <code>Aşağıdaki cümleyi etken çatı kullanarak yeniden yazın.</code> | <code>Kaptan haber raporunu okudu.</code> | <code>Haber raporu kaptan tarafından okundu.</code> |
| <code>Bu cümleyi yazım ve dilbilgisi hataları için değerlendirin.</code> | <code>Bu cümlede yazım ve dilbilgisi açısından herhangi bir hata bulunmamaktadır. Cümle doğru bir şekilde yazılmıştır.</code> | <code>O yemeğini bitirdi ve restorandan ayrıldı.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `gradient_accumulation_steps`: 8
- `learning_rate`: 1e-06
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.01
- `tf32`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.01
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# intfloat-fine-tuned
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision c9e87c786ffac96aeaeb42863276930883923ecb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omerhan/checkpoint-120-aixr")
# Run inference
sentences = [
'Bu cümleyi etken bir yapı kullanarak yeniden yazın.',
'Kaptan tekneyi çarptı.',
'Tekne kaptan tarafından çarpıldı.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 19,931 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.25 tokens</li><li>max: 160 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 128.76 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 16.82 tokens</li><li>max: 185 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------|
| <code>Tek farklı olanı belirleyin.</code> | <code>Tek farklı olan Telegram'dır. Bunun sebebi, Twitter ve Instagram'ın ana odaklarının sosyal medya platformları olması ve daha çok kullanıcıların içerik paylaşımına odaklanması iken, Telegram'ın öncelikli olarak bir mesajlaşma uygulaması olmasıdır. Telegram, kullanıcılar arasında mesajlaşma, medya dosyaları paylaşma ve grup sohbetleri oluşturma gibi özelliklerle öne çıkar.</code> | <code>Twitter, Instagram, Telegram</code> |
| <code>Aşağıdaki cümleyi etken çatı kullanarak yeniden yazın.</code> | <code>Kaptan haber raporunu okudu.</code> | <code>Haber raporu kaptan tarafından okundu.</code> |
| <code>Bu cümleyi yazım ve dilbilgisi hataları için değerlendirin.</code> | <code>Bu cümlede yazım ve dilbilgisi açısından herhangi bir hata bulunmamaktadır. Cümle doğru bir şekilde yazılmıştır.</code> | <code>O yemeğini bitirdi ve restorandan ayrıldı.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `gradient_accumulation_steps`: 8
- `learning_rate`: 1e-06
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.01
- `tf32`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.01
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "intfloat/multilingual-e5-large-instruct", "language": ["tr"], "library_name": "sentence-transformers", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:19931", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "En yakın restoranı bulun.", "sentences": ["\"Hangi parkurlarda yürüyüş yapmayı tercih ediyorsun?\"", "Brooklyn, NY", "En yakın restoranı bulmak için aşağıdaki yöntemleri kullanabilirsiniz:\n\n1. **Harita Uygulamaları**: Google Maps veya Apple Maps gibi harita uygulamaları, yakınınızdaki restoranları listeleyebilir. Bu uygulamalarda \"restoran\" araması yaparak konumunuza en yakın seçenekleri görüntüleyebilirsiniz.\n\n2. **Yelp veya TripAdvisor**: Bu uygulamalar, kullanıcı yorumları ve puanlamalarına göre restoranları sıralayarak en iyi seçenekleri bulmanızı sağlar.\n\n3. **OpenTable**: Bu uygulama, özellikle restoran rezervasyonu yapmak isterseniz faydalı olabilir.\n\nBrooklyn, NY'de birçok restoran seçeneği bulunmaktadır, bu yüzden belirli bir tür mutfak veya yemek arıyorsanız aramanızı daraltmak iyi bir strateji olabilir."]}, {"source_sentence": "Verilen cümlede tek tırnak işaretlerini (') çift tırnak işaretleriyle (\") değiştirin.", "sentences": ["Verilen doğrusal sistemi çözmek için iki denklem arasında eliminasyon veya yerine koyma yöntemlerinden birini kullanabiliriz. Burada yerine koyma yöntemini kullanarak çözelim.\n\n1. İlk denklemden y'yi yalnız bırakın:\n\n\\[ 2x + y = 5 \\] \n\nBuradan, \\[ y = 5 - 2x \\] olur.\n\n2. Bulduğumuz \\( y = 5 - 2x \\) ifadesini ikinci denklemde yerine koyun:\n\n\\[ -x + 3y = 4 \\]\n\n\\[ -x + 3(5 - 2x) = 4 \\]\n\n3. Dağıtımı yapalım:\n\n\\[ -x + 15 - 6x = 4 \\]\n\n4. Benzer terimleri toplayalım:\n\n\\[ -7x + 15 = 4 \\]\n\n5. Her iki taraftan 15 çıkarın:\n\n\\[ -7x = 4 - 15 \\]\n\n\\[ -7x = -11 \\]\n\n6. Her iki tarafı da -7'ye bölün:\n\n\\[ x = \\frac{-11}{-7} \\]\n\n\\[ x = \\frac{11}{7} \\]\n\n7. Bulduğumuz \\( x \\) değerini \\( y = 5 - 2x \\) ifadesinde yerine koyun:\n\n\\[ y = 5 - 2\\left(\\frac{11}{7}\\right) \\]\n\n\\[ y = 5 - \\frac{22}{7} \\]\n\n\\[ y = \\frac{35}{7} - \\frac{22}{7} \\]\n\n\\[ y = \\frac{13}{7} \\]\n\nBu durumda doğrusal sistemin çözümü:\n\n\\[ x = \\frac{11}{7} \\] ve \\[ y = \\frac{13}{7} \\] olur.", "ABD'de, KIPP bir \"kamu-özel ortaklığıdır\".", "ABD'de, KIPP bir 'kamu-özel ortaklığıdır'."]}, {"source_sentence": "Bir saatin fiyatı 50 dolardır.\n(50 dolarlık saat)", "sentences": ["Etkinliğe katıldı.", "Ev sahibi takımın zaferi", "Sanırım iki farklı konudan bahsediyorsunuz: biri bir saat fiyatı, diğeri ise ev sahibi takımın zaferi. Bu konulardan hangisi hakkında daha fazla bilgi almak istersiniz? Ya da her iki konuyu kapsayan belirli bir sorunuz var mı?"]}, {"source_sentence": "Yakındaki bir restoran bulun, <insert cuisine> mutfağı sunan.", "sentences": ["Kuadratik denklemi çözmek için diskriminant yöntemini kullanabiliriz. Genel formdaki bir kuadratik denklem \\( ax^2 + bx + c = 0 \\) şeklindedir. Burada, \\( a = 3 \\), \\( b = 7 \\) ve \\( c = -4 \\) olarak verilmiştir.\n\nDiskriminant (\\( \\Delta \\)), \\( b^2 - 4ac \\) formülü ile hesaplanır:\n\n\\[\n\\Delta = 7^2 - 4 \\cdot 3 \\cdot (-4) = 49 + 48 = 97\n\\]\n\nDiskriminant pozitif olduğundan, iki gerçek ve farklı kök vardır. Kuadratik formül kullanılarak kökler bulunabilir:\n\n\\[\nx = \\frac{-b \\pm \\sqrt{\\Delta}}{2a}\n\\]\n\nBu formüle bağlı olarak kökleri bulalım:\n\n\\[\nx_1 = \\frac{-7 + \\sqrt{97}}{6}\n\\]\n\n\\[\nx_2 = \\frac{-7 - \\sqrt{97}}{6}\n\\]\n\nBu, orijinal denklemin iki çözümünü verir:\n\n\\[\nx_1 = \\frac{-7 + \\sqrt{97}}{6} \\quad \\text{ve} \\quad x_2 = \\frac{-7 - \\sqrt{97}}{6}\n\\]", "Üzgünüm, yakındaki restoranları veya mekanları aramak gibi anlık coğrafi konum verilerine erişimim yok. Ancak, İtalyan mutfağından bir yemek tarifi verebilirim! İşte klasik bir İtalyan tarifi: Spaghetti Aglio e Olio.\n\n### Spaghetti Aglio e Olio Tarifi\n\n#### Malzemeler:\n- 400 gram spagetti\n- 4-5 diş sarımsak\n- 1/2 çay bardağı zeytinyağı\n- 1/2 çay kaşığı kırmızı pul biber (isteğe bağlı)\n- Tuz\n- Karabiber\n- Yarım demet taze maydanoz\n- Üzeri için rendelenmiş Parmesan peyniri (isteğe bağlı)\n\n#### Yapılışı:\n1. **Spagettiyi Haşlayın**: Büyük bir tencerede suyu kaynatın. Kaynayan suya yeterince tuz ekleyin ve spagettiyi pakette belirtilen süreye göre al dente (dişe gelir) şekilde haşlayın. Haşlandıktan sonra makarnanın suyundan bir bardak ayırın ve makarnayı süzün.\n\n2. **Sarımsak ve Zeytinyağını Hazırlayın**: Sarımsakları ince ince dilimleyin. Geniş bir tavada, düşük ateşte zeytinyağını ısıtın. Sarımsakları ekleyin ve altın rengi alana kadar yavaşça soteleyin. Eğer biraz acı seviyorsanız kırmızı pul biberi de bu aşamada ekleyebilirsiniz.\n\n3. **Makarnayı Karıştırın**: Haşlanmış ve süzülmüş makarnayı tavadaki sarımsaklı zeytinyağının içine ekleyin. Ayırdığınız makarna suyundan da azar azar ekleyerek karıştırın. Tuz ve karabiber ile tatlandırın.\n\n4. **Servis Edin**: İnce kıyılmış taze maydanozu makarnanın üzerine serpin ve karıştırın. İsterseniz üzerine rendelenmiş Parmesan peyniri de ekleyerek servis yapın.\n\nAfiyet olsun! Bu basit ama lezzetli İtalyan yemeği, hızlı bir akşam yemeği için harika bir seçenektir.", "<ekle yemek tarifi> - İtalyan"]}, {"source_sentence": "Bu cümleyi etken bir yapı kullanarak yeniden yazın.", "sentences": ["Kaptan tekneyi çarptı.", "Verilen listeyi sıralamak için basit bir sıralama algoritmasının nasıl çalıştığını açıklayayım. Örnek olarak \"Seçmeli Sıralama\" (Selection Sort) algoritmasını kullanabiliriz. Bu algoritma, her adımda dizinin geri kalan kısmından en küçük (veya en büyük) öğeyi seçip, bunu sıralanmamış bölümün başına yerleştirir. İşte adım adım nasıl çalıştığı:\n\n1. **Başla**: İlk öğeden başlayarak liste üzerinde ilerle.\n\n2. **En küçük elemanı bul**: \n - Mevcut konumdan başlayarak geri kalan elemanlar arasında en küçük değeri bul.\n - Listenin ilk konumunda (0 indeksli) başla ve bu elemanı hafızada \"şimdilik en küçük\" olarak tut.\n\n3. **Yer değiştir**:\n - Geri kalan elemanlar arasında bulunmuş olan en küçük değeri, mevcut konumda olan elemanla yer değiştir.\n\n4. **Sonraki pozisyona geç**:\n - Bir konum ilerle ve 2. adımdan itibaren devam et.\n\n5. **Bittiğinde**: Liste baştan sona kontrol edildiğinde ve tüm elemanlar sırasıyla yerleştirildiğinde, işlem tamamlanmıştır.\n\nVerilen örnek [4, 1, 6, 5, 2] listesi üzerinden bunu adım adım uygulayalım:\n\n- Başlangıç listesi: [4, 1, 6, 5, 2]\n\n**1. Adım:**\n- En küçük eleman 1, onu ilk pozisyondaki 4 ile değiştir.\n- Liste: [1, 4, 6, 5, 2]\n\n**2. Adım:**\n- Kalan liste [4, 6, 5, 2] içinde en küçük eleman 2, bunu 4 ile değiştir.\n- Liste: [1, 2, 6, 5, 4]\n\n**3. Adım:**\n- Kalan liste [6, 5, 4] içinde en küçük eleman 4, bunu 6 ile değiştir.\n- Liste: [1, 2, 4, 5, 6]\n\n**4. Adım:**\n- Kalan liste [5, 6] içinde en küçük eleman 5, bu zaten yerinde.\n- Liste: [1, 2, 4, 5, 6]\n\n**5. Adım:**\n- Tek kalan eleman 6, o da yerinde. Sıralama tamamlandı.\n\nBu algoritmanın zaman karmaşıklığı \\(O(n^2)\\) olduğundan, daha büyük listelerde daha verimli bir algoritma kullanmak isteyebilirsiniz, örneğin Hızlı Sıralama (Quick Sort) veya Birleştirme Sıralaması (Merge Sort). Ancak, küçük listeler için Seçmeli Sıralama anlaşılır ve yeterlidir.", "Tekne kaptan tarafından çarpıldı."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,317 |
google/t5-xxl-ssm-nq
|
google
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:natural_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-01-24T16:52:53+00:00
| 34 | 1 |
---
datasets:
- c4
- wikipedia
- natural_questions
language: en
license: apache-2.0
pipeline_tag: text2text-generation
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 100% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 10k steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-small|https://huggingface.co/google/t5-small-ssm-nq|25.5|
|T5-large|https://huggingface.co/google/t5-large-ssm-nq|30.4|
|T5-xl|https://huggingface.co/google/t5-xl-ssm-nq|35.6|
|**T5-xxl**|**https://huggingface.co/google/t5-xxl-ssm-nq**|**37.9**|
|T5-3b|https://huggingface.co/google/t5-3b-ssm-nq|33.2|
|T5-11b|https://huggingface.co/google/t5-11b-ssm-nq|36.6|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-xxl-ssm-nq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-xxl-ssm-nq")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

| null |
Non_BioNLP
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 100% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 10k steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-small|https://huggingface.co/google/t5-small-ssm-nq|25.5|
|T5-large|https://huggingface.co/google/t5-large-ssm-nq|30.4|
|T5-xl|https://huggingface.co/google/t5-xl-ssm-nq|35.6|
|**T5-xxl**|**https://huggingface.co/google/t5-xxl-ssm-nq**|**37.9**|
|T5-3b|https://huggingface.co/google/t5-3b-ssm-nq|33.2|
|T5-11b|https://huggingface.co/google/t5-11b-ssm-nq|36.6|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-xxl-ssm-nq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-xxl-ssm-nq")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
{"datasets": ["c4", "wikipedia", "natural_questions"], "language": "en", "license": "apache-2.0", "pipeline_tag": "text2text-generation"}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,318 |
winderfeld/cc-uffs-ppc
|
winderfeld
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:324",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-10-19T16:27:26Z |
2024-10-19T16:32:42+00:00
| 8 | 0 |
---
base_model: BAAI/bge-small-en-v1.5
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:324
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Qual tipo de documento deve ser incluído como anexo na compilação
de TCC II, conforme o Artigo 16º, e qual é a condição para que ele seja aceito?
sentences:
- "**Art. 13º**As modalidades descritas na Tabela 1 são regidas conforme o seguinte:\
\ **§ 1º**Pesquisa científica básica ou aplicada refere-se a atividades realizadas\
\ no contexto \nde aplicação de método científico para investigação de um problema\
\ relevante para a \nárea e suas interdisciplinaridades. Projeto Pedagógico do\
\ Curso de Graduação em Ciência da Computação – Bacharelado 213 **§ 2º**Inovação\
\ e Negócios refere-se à atividades realizadas pelo aluno no contexto Indus- \n\
trial e Mercadológico, seja esse baseado em empreendedorismo, inovação ou no meio\
\ \nempresarial, desde que gere um produto entregável. **§ 3º**Comprovação de\
\ experiências refere-se às atividades realizadas pelo aluno durante \no curso\
\ que comprovem um substancial crescimento profissional e a contribuição no \n\
meio na qual a atividade foi conduzida. **§ 4º**As modalidades poderão ser regulamentadas\
\ pelo Colegiado de Curso em ato deli- \nberativo. **Art. 14º**A monografia é\
\ válida como TCC II conforme o seguinte: **§ 1º**Incluir o tema, a problematização,\
\ os objetivos gerais e específicos, a justificativa, \numa abrangente revisão\
\ bibliográfica do estado da arte do tema escolhido, os procedi- \nmentos metodológicos,\
\ os resultados, as discussões, o cronograma e o orçamento do \nprojeto, quando\
\ aplicável. **§ 2º**Respeitando-se os termos de direitos autorais em vigor, deve\
\ ser integrada ao acervo \nde trabalhos digitais da UFFS e ficar disponível à\
\ comunidade acadêmica como forma \nde divulgação do conhecimento adquirido e\
\ formado. **§ 3º**Seguir as normas metodológicas preconizadas pela UFFS. **Art.\
\ 15º**O artigo científico é válido como TCC II conforme o seguinte: **§ 1º**Diagramado\
\ em formato de conferência ou periódico reconhecido pela SBC, na es- \ntrutura\
\ de duas colunas, com no mínimo 8 (oito) páginas. **§ 2º**O proponente deve ser\
\ um dos autores do artigo científico. **§ 3º**Ser endossado pelo professor-orientador\
\ do TCC. **§ 4º**Respeitando-se os termos de direitos autorais em vigor, deve\
\ ser integrado ao acerto \nde trabalhos digitais da UFFS. **§ 5º**No caso de\
\ artigo científico aceito para publicação em periódico ou conferência re- \n\
conhecidos com Qualis na área, antes da data de sua apresentação à banca, ficam\
\ sus- \npensas as obrigações do §1º do Art. 11º. **§ 6º**Não estar vinculado\
\ às modalidades Inovação e Negócio, e Comprovação de Expe- \nriências. **Art.\
\ 16º**A compilação é válida como TCC II conforme o seguinte: Projeto Pedagógico\
\ do Curso de Graduação em Ciência da Computação – Bacharelado 214 **MINISTÉRIO\
\ DA EDUCAÇÃO**\n**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**\n**PRÓ-REITORIA\
\ DE GRADUAÇÃO**\n**DIRETORIA DE PEDAGÓGICA****§ 1º**Incluir introdução, procedimentos\
\ metodológicos, resultados e conclusões referen- \ntes ao tema abordado. **§\
\ 2º**Apresentar outros documentos, como relatórios, artigos resumidos, manuais,\
\ etc, no \nformato de anexos. **§ 3º**Cada documento anexo à compilação deve\
\ ser reconhecido e aceito como de nature- \nza relevante e válido pelo Colegiado\
\ do Curso ou pela Comissão de Atividades Curricu- \nlares Complementares. **§\
\ 4º**Cada documento anexo na compilação deve ser contextualizado com um texto\
\ que \nexplica como seus desdobramentos se conectam na contribuição do trabalho."
- 'Addison-Wesley Publishing, 2000. KUO, Franklin F. Multimedia Communications:
Protocols & Applications. Prentice Hall, 1997. PAULA FILHO, W. de P. Multimídia:
Conceitos e Aplicações. Rio de Janeiro: LTC Editora, 2000. VAUGHAN, T. Multimedia
Making it Work. McGraw-Hill, 2001. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| AGNEW, P. W.; KELLERMAN, A. S. Distributed Multimedia: Technologies, Applications,
and Opportunities in the Digital Information Industry. A Guide for Users and Providers.
Addison Wesley, 1996. BUFORD, J. F. K. Multimedia Systems. Addison Wesley, 1994.
FLUCKIGER, F. Understanding Networked Multimedia: Applications and Technology.
Prentice Hall, 1995. KRISTOF, R.; SATRAN, A. Interactivity by Design: Creating
& Communicating With New Media. Hayden Books, 1995. KUO, F.; EFFELSBERG, W.; GARCIA-LUNA-ACEVES,
J. J. Multimedia Communications: Protocols and Applications. Prentice Hall PTR,
1998. ROESLER, V. Perspectivas em Transmissão Multimídia e TV Digital. Instituto
de Informática, Universidade Federal do Rio Grande do Sul (UFRGS), 2007. STOLFI,
G. Compressão de Imagens, JPEG, PEE647 – Princípios de Televisão Digital. 2002.
VELHO, L. Computação Gráfica e Processamento de Imagens. McGrawHill, 1996. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX435 | ENGENHARIA DA USABILIDADE | 04 | 60 |
| EMENTA | | | |
| Fundamentos de interação humano-computador. Usabilidade. Critérios e recomendações
ergonômicas. Estilos de interação. Metodologias, técnicas e ferramentas de concepção,
projeto e implementação de sistemas interativos. Avaliação de interfaces. Normas
técnicas. Acessibilidade. Usabilidade na Web. | | | |
| OBJETIVO | | | |
| Conhecer os conceitos de interação humano-computador e com as técnicas e métodos
de desenvolvimento e avaliação de interfaces, visando a construção de sistemas
úteis, acessíveis e agradáveis ao usuário final. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| BARBOSA, S. D. J.; SILVA, Bruno S. Interação Humano Computador. Rio de Janeiro:
Elsevier, 2010. CYBIS, W. A.; BETIOL, A. H.; FAUST, R. Ergonomia e usabilidade:
conhecimentos, métodos e aplicações. 2. ed. São Paulo: Novatec, 2010. NIELSEN,
Jakob; LORANGER, Hoa. Usabilidade na Web: projetando websites com qualidade. Rio
de Janeiro: Campus, 2007. PREECE, J.; ROGERS, Y.; SHARP, H. Design de interação:
além da interação homem- computador. Porto Alegre: Bookman, 2005. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| KEATES, S. Designing for Accessibility: A Business Guide to Countering Design
Exclusion. Routledge, 2007. MORAES, A. Design e Avaliação de Interface: ergodesign
e interação homem- computador. 2AB, 2006. ROCHA, Heloisa Vieira; BARANAUSKAS,
Maria Cecilia. Design e avaliação de interfaces humano-computador. Campinas: NIED/UNICAMP,
2003. SHNEIDERMAN, Ben; PLAISANT, Catherine.'
- '| | | |
| OBJETIVO | | | |
| Objetivo conforme tema a ser trabalhado. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano de curso a ser ministrado. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano de curso a ser ministrado. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX1084 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXXIV | 4 | 60 |
| EMENTA | | | |
| Ementa aberta conforme tema a ser trabalhado. | | | |
| OBJETIVO | | | |
| Objetivo conforme tema a ser trabalhado. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano de curso a ser ministrado. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano de curso a ser ministrado. | | | |
**MINISTÉRIO DA EDUCAÇÃO**
**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**
**PRÓ-REITORIA DE GRADUAÇÃO**
**DIRETORIA DE PEDAGÓGICA**| Código | COMPONENTE CURRICULAR | Créditos | Horas
|
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX1085 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXXV | 4 | 60 |
| EMENTA | | | |
| Ementa aberta conforme tema a ser trabalhado. | | | |
| OBJETIVO | | | |
| Objetivo conforme tema a ser trabalhado. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano de curso a ser ministrado. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano de curso a ser ministrado. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX1086 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXXVI | 4 | 60 |
| EMENTA | | | |
| Ementa aberta conforme tema a ser trabalhado. | | | |
| OBJETIVO | | | |
| Objetivo conforme tema a ser trabalhado. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano de curso a ser ministrado. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano de curso a ser ministrado. | | | |
**MINISTÉRIO DA EDUCAÇÃO**
**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**
**PRÓ-REITORIA DE GRADUAÇÃO**
**DIRETORIA DE PEDAGÓGICA**| Código | COMPONENTE CURRICULAR | Créditos | Horas
|
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX1087 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXXVII | 4 | 60 |
| EMENTA | | | |
| Ementa aberta conforme tema a ser trabalhado. | | | |
| OBJETIVO | | | |
| Objetivo conforme tema a ser trabalhado.'
- source_sentence: Qual é a ementa da disciplina GEX657 e quais são as referências
básicas e complementares contempladas no plano de ensino da disciplina?
sentences:
- '| | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| | OPTATIVA II | 04 | 60 |
| EMENTA | | | |
| Deve ser preenchida com o conteúdo de alguma das diversas disciplinas opcionais
propostas, segundo critérios predefinidos pelo Colegiado. | | | |
| OBJETIVO | | | |
| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX657 | TRABALHO DE CONCLUSÃO DE CURSO I | 10 | 150 |
| EMENTA | | | |
| Conhecimento científico. Metodologia de pesquisa científica. Aspectos de escrita
científica. Elaboração de projeto de pesquisa científica na área do conhecimento
de ciência da computação. | | | |
| OBJETIVO | | | |
| Escrever um projeto científico na área de computação. Defender um projeto perante
uma banca. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| BARROS, Adile J. P. de; LEHFELD, Neide A. S. Projeto de Pesquisa: Proposta Metodológica.
Petrópolis-RJ: Vozes, 1990. BASTOS, Lilia da Rocha. Manual para a elaboração de
projetos e relatórios de pesquisa, teses, dissertações e monografias. 5. ed. Rio
de Janeiro: LTC, 2001. ORTIZ, Hilda Beatriz. Cadernos Metodológicos. Chapecó:
Argos, 2001. WAZLAWICK, R. S. Metodologia de Pesquisa para Ciência da Computação.
Rio de Janeiro: Elsevier, 2009. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Hor
as |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Hor as |
| GCH292 | HISTÓRIA DA FRONTEIRA SUL | 04 | 60 |
| EMENTA | | | |
| Construção dos sentidos históricos. Noções de Identidade e de Fronteira. Invenção
das tradições. Processos de povoamento, despovoamento e colonização. Conflitos
econômicos e políticos. Choques culturais no processo de colonização. Questão
indígena, cabocla e afrodescendente. | | | |
| OBJETIVO | | | |
| Compreender o processo de formação da região sul do Brasil por meio da análise
de aspectos históricos do contexto de povoamento, despovoamento e colonização.
| | | |
| REFERÊNCIAS BÁSICAS | | | |
| BARTH, Frederik. Grupos étnicos e suas fronteiras. In: POUTIGNAT, Philippe;
STREIFF- FENART, Jocelyne. Teorias da etnicidade. Seguido de grupos étnicos e
suas fronteiras de Frederik Barth. São Paulo: Editora da UNESP, 1998. p 185-228.
CUCHE, Denys.'
- '11ª Escola de Computação, COPPE/Sistemas, NCE/UFRJ, 1998. SCHILLER, J.; VOISARD,
A. Location-Based Services. Elsevier, 2004. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Hora
s |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Hora s |
| GEX655 | SISTEMAS EMBARCADOS | 4 | 60 |
| EMENTA | | | |
| Conceito de sistemas embarcados, SoCs e MPSoCs. Software e hardware embarcado.
Técnicas de projeto de sistemas embarcados. Modelos de computação. Sistemas operacionais
embarcados. Infraestrutura de comunicação. | | | |
| OBJETIVO | | | |
| Apresentar os diferentes aspectos relacionados às arquiteturas e integração
software (modelos de computação, middleware, sistemas operacionais, tempo real)
e hardware (SoCs, MPSoCs), meios de interconexão e protocolos de comunicação (NoCs)
que propiciem implementações eficientes. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| JANTSCH, Axel. Modeling embedded systems and socs: concurrency and time in models
of computation. San Francisco: Morgan Kaufmann, 2004. JERRYA, Ahmed A. Multiprocessor
systems-on-chips. Amsterdam: Elsevier, 2005. WOLF, Wayne. Computers as components:
principles of embedded computing system design. San Francisco: Morgan Kaufmann,
2001. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| BECKER, Jürgen; HÜBNER, Michael. Multiprocessor System-on-Chip: Hardware Design
and Tool Integration. Springer, 2011. KORNAROS, Georgios. Multi-Core Embedded
Systems. CRC Press, 2010. MURDOCCA, Miles J. Introdução a Arquitetura de Computadores.
Rio de Janeiro: Campus, 2001. PATTERSON, David A.; HENNESSY, John L. Organização
e Projeto de Computadores. Rio de Janeiro: Elsevier, 2005. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX106 | COMPUTAÇÃO DISTRIBUÍDA | 04 | 60 |
| EMENTA | | | |
| Arquitetura de Sistemas Distribuídos. Paradigmas de Computação Distribuída:
Troca de Mensagens, Cliente/Servidor, Comunicação em Grupo, Objetos Distribuídos.
Comunicação entre Processos Distribuídos. Suporte de SO para Computação Distribuída.
Concorrência e Sincronização em Sistemas Distribuídos. Consistência e Replicação
de Dados em Sistemas Distribuídos. Sistemas de Arquivo Distribuídos. Computação
em grade (grid computing). Arquiteturas e modelos de programação paralela. | | | |
| OBJETIVO | | | |
| Projetar e desenvolver sistemas computacionais de natureza distribuída. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| COULOURIS, George F; DOLLIMORE, Jean; KINDBERG, Tim. Sistemas distribuídos:
conceitos e projeto. Porto Alegre: Bookman, 2007. 784 p. LIU, M. L. Distributed
Computing: Principles and Applications. Pearson Addison Wesley, 2004. TANEMBAUM,
A.; STEEN, M. Sistemas distribuídos: princípios e paradigmas. 2. ed. São Paulo:
Pearson, 2008.'
- 'Noções de usabilidade e critérios ergonômicos. | | | |
| OBJETIVO | | | |
| Analisar e projetar sistemas informatizados utilizando metodologia orientada
a objetos. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| BOOCH, G.; RUMBAUGH, J.; JACOBSON, I. UML Guia do Usuário. Rio de Janeiro: Campus,
2006. GUEDES, G. T. A. UML 2 – Uma Abordagem Prática. São Paulo: Novatec, 2011.
LARMAN, C. Utilizando UML e Padrões: uma Introdução à Análise e ao Projeto Orientados
a Objetos. 3. ed. São Paulo: Bookman Companhia, 2007. WAZLAWICK, R. S. Análise
e Projeto de Sistemas de Informação Orientados a Objetos. Rio de Janeiro: Campus,
2004. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| GAMMA, E. et al. Padrões de projeto: soluções reutilizáveis de software orientado
a objetos. Ed. Bookman, 2000. PFLEEGER, S. L. Engenharia de Software. 2. ed. São
Paulo: Prentice-Hall, 2004. PRESSMAN, R. S. Engenharia de Software. 6. ed. São
Paulo: Bookman Companhia, 2006. RUMBAUGH, J.; BLAHA, M. Modelagem e Projetos Baseados
em Objetos. Rio de Janeiro: Campus, 2004. SOMMERVILLE, I. Engenharia de Software.
8. ed. São Paulo: Addison - Wesley, 2008. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX105 | REDES DE COMPUTADORES | 04 | 60 |
| EMENTA | | | |
| Fundamentos de transmissão de dados e sistemas de comunicação. Conceitos, classificação
e topologias das redes. Modelos de referência: OSI e TCP/IP. Protocolos e serviços
de co- municação. Internet e Intranets. Meios de transmissão de dados. Equipamentos
de interco- nexão. Avaliação de desempenho. | | | |
| OBJETIVO | | | |
| Compreender as redes de computadores, conhecendo o modelo de referência OSI,
a arquitetura TCP/IP e seus protocolos, equipamentos e principais aplicações.
| | | |
| REFERÊNCIAS BÁSICAS | | | |
| COMER, Douglas E. Redes de computadores e Internet. 4. ed. Porto Alegre: Bookman,
2007. FOROUZAN, Behrouz A. Comunicação de dados e redes de computadores. McGraw-
Hill - Artmed, 2008. KUROSE, James F.; ROSS, Keith W. Redes de computadores e
a Internet: uma abordagem top-down. 5. ed. São Paulo: Pearson Addison Wesley,
2010. PETERSON, L.; DAVIE, B. Computer networks: a systems approach. 5. ed. Morgan
Kaufmann, 2011. TANENBAUM, Andrew S. Redes de computadores. 5. ed. Rio de Janeiro-RJ:
Campus, 2011. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| COMER, Douglas E. Interligação de redes com TCP/IP. 5. ed. Rio de Janeiro: Campus,
2006. v. 1. DANTAS, Mário. Tecnologias de redes de comunicação e computadores.
Rio de Janeiro: Axcel Books, 2002. HUITEMA, C. Routing in the Internet.'
- source_sentence: Qual o prazo máximo para a entrega da versão final da monografia
ou artigo científico após a aprovação do TCC II, conforme o cronograma?
sentences:
- 'Sintetizar, deduzir, elaborar hipóteses, estabelecer relações e comparações,
detectar contradições, decidir, organizar, expressar-se e argumentar com clareza
e coerência utilizando elementos de linguagem matemática. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| CONNALLY, E. et al. Funções para modelar variações: uma preparação para o cálculo.
Rio de Janeiro: LTC, 2009. DEMANA, D. F. et al. Pré-Cálculo. São Paulo: Addison
Wesley, 2009. DOLCE, O.; POMPEO, J. N. Fundamentos de Matemática Elementar: Geometria
Plana. 8. ed. São Paulo: Atual, 2005. 9 v. _______. Fundamentos de Matemática
Elementar: Geometria Espacial. 6. ed. São Paulo: Atual, 2005. 10 v. DORING, C.
I.; DORING, L. R. Pré-cálculo. Porto Alegre: UFRGS, 2007. IEZZI, G.; MURAKAMI,
C. Fundamentos de matemática elementar: Conjuntos, Funções. 8. ed. São Paulo:
Atual, 2010. 1 v. IEZZI, G.; DOLCE, O.; MURAKAMI, C. Fundamentos de matemática
elementar: Logaritmos. 8. ed. São Paulo: Atual, 2004. 2 v. IEZZI, G. Fundamentos
de matemática elementar: Trigonometria. 8. ed. São Paulo: Atual, 2004. 3 v. MEDEIROS,
V. Z. et al. Pré-Cálculo. 2. ed. São Paulo: Cengage Learning, 2009. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| ANTON, H. Cálculo. 8. ed. São Paulo: Bookman, 2007. 1 v. BARBOSA, J. L. M. Geometria
Euclidiana Plana. Rio de janeiro: SBM, 2000. (Coleção do Professor de Matemática).
FLEMMING, D. M.; GONÇALVES, M. B. Cálculo A. 6. ed. São Paulo: Prentice Hall,
2007. LEITHOLD, L. Cálculo com geometria analítica. 3. ed. São Paulo: HARBRA,
1994. 1 v. LIMA, E. L. et al. A Matemática do Ensino Médio. 6. ed. Rio de Janeiro:
SBM, 2000. 2 v. (Coleção do Professor de Matemática). _______. A matemática do
Ensino Médio. 4. ed. Rio de Janeiro: SBM, 1999. 1 v. (Coleção do Professor de
Matemática). STEWART, J. Cálculo. 6. ed. São Paulo: Cengage Learning, 2010. 1
v. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GCH293 | INTRODUÇÃO À FILOSOFIA | 04 | 60 |
| EMENTA | | | |
| A natureza e especificidade do discurso filosófico e sua relação com outros
campos do conhecimento; principais correntes do pensamento filosófico; Fundamentos
filosóficos da Modernidade. Tópicos de Ética e de Epistemologia. | | | |
| OBJETIVO | | | |
| Refletir criticamente, através de pressupostos éticos e epistemológicos, acerca
da modernidade. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| ABBA, Giuseppe.'
- '18 | Os critérios e as formas de avaliação do estudante, nos componentes curri-
|
| | |
culares TCC I e TCC II serão propostos por uma comissão designada pelo coordenador
do curso e para homologação do Colegiado de Curso. | Parágrafo único. | Após a
homologação, os critérios e as formas de avaliação cons- |
|---|---|
| Parágrafo único. | Após a homologação, os critérios e as formas de avaliação
cons- |
| | |
tarão nos respectivos planos de ensino dos componentes curriculares TCC I e II.
| Art. 19 | Após a aprovação do TCC II, conforme o cronograma, o estudante deverá
|
|---|---|
| Art. 19 | Após a aprovação do TCC II, conforme o cronograma, o estudante deverá
|
| | |
entregar a versão final da monografia (no formato exigido pela biblioteca da UFFS)
em até 30 dias, incorporando as correções e sugestões da banca avaliadora e a
Folha de Aprovação assinada pelos membros da banca. | § 1º. | Antes da entrega
da versão final da monografia, o aluno deve submetê-la à |
|---|---|
| § 1º. | Antes da entrega da versão final da monografia, o aluno deve submetê-la
à |
| | |
apreciação do orientador, o qual deverá verificar se o documento incorpora as
sugestões e correções apontadas pela banca avaliadora. | § 2º. | A aprovação |
no componente curricular TCC II fica | condicionada a entrega da |
|---|---|---|---|
| § 2º. | A aprovação | no componente curricular TCC II fica | condicionada a
entrega da |
| | | | |
versão final da monografia ou artigo, no padrão e normas preconizados pela biblioteca
da UFFS (conforme Mem. 014/2014-PROGRAD), ou conforme publicado, no caso de artigo
científico. **DAS DISPOSIÇÕES GERAIS E TRANSITÓRIAS**| Art. 20 | Os casos omissos
neste |
|---|---|
| Art. 20 | Os casos omissos neste |
| | |
serão decididos pelo respectivo Colegiado de Curso. | Art. 21 | Este | Regulamento
de Trabalho de Conclusão de Curso | do curso de Ciência |
|---|---|---|---|
| Art. 21 | Este | Regulamento de Trabalho de Conclusão de Curso | do curso de
Ciência |
| | | | |
da Computação entra em vigor após a sua aprovação na CGRAD juntamente ao PPC.
**MINISTÉRIO DA EDUCAÇÃO**
**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**
**PRÓ-REITORIA DE GRADUAÇÃO**
**DIRETORIA DE PEDAGÓGICA****APÊNDICE III - REGULAMENTO DE APROVEITAMENTO POR****EQUIVALÊNCIA
DE COMPONENTE CURRICULAR CURSADO****COM APROVAÇÃO OU VALIDADO NA MATRIZ DE 2010
PARA A****MATRIZ DE 2018.****Art.'
- '| | | |
| OBJETIVO | | | |
| Viabilizar a introdução no curso de temas não abordados em outros componentes.
Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da
ciência da computação e/ou aproveitando as experiências de professores ou outros
profissionais qualificados. Esta disciplina também serve para atualizações da
matriz curricular. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX628 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO VI | 4 | 60 |
| EMENTA | | | |
| Conteúdo variável respeitando os objetivos mencionados. | | | |
| OBJETIVO | | | |
| Viabilizar a introdução no curso de temas não abordados em outros componentes.
Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da
ciência da computação e/ou aproveitando as experiências de professores ou outros
profissionais qualificados. Esta disciplina também serve para atualizações da
matriz curricular. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX629 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO VII | 4 | 60 |
| EMENTA | | | |
| Conteúdo variável respeitando os objetivos mencionados. | | | |
| OBJETIVO | | | |
| Viabilizar a introdução no curso de temas não abordados em outros componentes.
Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da
ciência da computação e/ou aproveitando as experiências de professores ou outros
profissionais qualificados. Esta disciplina também serve para atualizações da
matriz curricular. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX630 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO VIII | 4 | 60 |
| EMENTA | | | |
| Conteúdo variável respeitando os objetivos mencionados. | | | |
| OBJETIVO | | | |
| Viabilizar a introdução no curso de temas não abordados em outros componentes.
Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da
ciência da computação e/ou aproveitando as experiências de professores ou outros
profissionais qualificados. Esta disciplina também serve para atualizações da
matriz curricular. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano da disciplina a ser ministrada.'
- source_sentence: Qual foi a função do professor Dilvo Ilvo Ristoff na Universidade
Federal da Fronteira Sul (UFFS) no início de sua implantação?
sentences:
- 'Atividades de extensão possibilitam a formação cidadã, aproximando os acadêmicos
das discussões que integram o universo profissional e social. **MINISTÉRIO DA
EDUCAÇÃO**
**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**
**PRÓ-REITORIA DE GRADUAÇÃO**
**DIRETORIA DE PEDAGÓGICA**b) Formação cidadã: o currículo do curso apresentado
visa levar o estudante ao permanente questionar-se sobre a realidade social que
o cerca e a sua formação enquanto sujeito político pertencente a ela. Esta formação
está caracterizada, principalmente, nos componentes curriculares do Domínio Comum.
c) Compromisso com o desenvolvimento regional sustentável: os processos de ensino-aprendizagem
deverão provocar debate cultural e desenvolvimento regional sustentável nos diferentes
espaços de atuação do profissional da Ciência da Computação. d) Garantia de meios
de acesso e permanência dos acadêmicos na Universidade: as ações da universidade
deverão contribuir para a permanência do estudante na instituição através de bolsas
de extensão, iniciação científica e também de permanência. e) Pluralidade: respeito
às posições ideo-políticas dos estudantes e compromisso com o rigor acadêmico
na apresentação das diferentes correntes para a formação do profissional da Ciência
da Computação. f) Autonomia: construção e afirmação permanente da noção de sujeito
ético-político, seja pelos discentes ou docentes, na participação institucional,
na organização pedagógica e política do curso. g) Gestão participativa, democrática
e transparente: a prática dos docentes vinculados à UFFS caracteriza-se pelo compromisso
com a construção coletiva de seus cursos, a relação profissional pautada na democracia,
com espaços para críticas, diálogos e constantes reavaliações da atuação do grupo.
Também a participação efetiva e o estímulo permanente ao envolvimento discente
com os desafios e deliberações para a melhoria dos cursos e da universidade. h)
Avaliação permanente: a avaliação permanente é realizada de duas formas: (*i*)
avaliação externa de responsabilidade do MEC, que atualmente é constituída pelo
Exame Nacional de Desempenho dos Estudantes (ENADE) e a Avaliação Externa*in loco*,
ambas parte do Sistema Nacional de Avaliação do Ensino Superior (SINAES) e (*ii*)
avaliação interna, também denominada de autoavaliação, coordenada pela Comissão
Própria de Avaliação (CPA) e por instrumentos próprios que contemplem as especificidades
da Universidade, a qual acompanhará a qualidade das atividades desenvolvidas no
curso de graduação em Ciência da Computação e o desempenho dos docentes, segundo
a avaliação dos estudantes. **MINISTÉRIO DA EDUCAÇÃO**
**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**
**PRÓ-REITORIA DE GRADUAÇÃO**
**DIRETORIA DE PEDAGÓGICA*****5.2 Epistemológicos***Comprometido com o processo
de concepção e construção da Instituição, o curso de Ciência da Computação inclui
no desenvolvimento do ensino, pesquisa e extensão, o princípio democrático de
atuação. Este princípio busca “respeitar a pluralidade de pensamento e a diversidade
cultural” como elo entre conhecimento técnico-científico e conhecimento social.
Assim, as bases epistemológicas do curso são fundamentadas em práticas que visam
à constante aquisição do conhecimento e à formação de um profissional capaz de
gerar desenvolvimento e atender às demandas sociais. Para isso, a constante indagação,
o uso de referências apropriadas, o exercício da avaliação crítica são ações essenciais
para a formação acadêmica aliada ao desenvolvimento social.'
- 'A promulgação da lei fez intensificar as atividades de estruturação da nova universidade,
já que havia a meta de iniciar as atividades letivas no primeiro semestre de 2010.
Em 21 de setembro de 2009, o Ministro da Educação designou o professor Dilvo Ilvo
Ristoff para o cargo de reitor*pro tempore*da UFFS, com a incumbência de coordenar
os trabalhos para a implantação da nova universidade, sob a tutoria da Universidade
Federal de Santa Catarina (UFSC). Ainda em 2009 foram realizados os primeiros
concursos e posses de servidores, estruturados os projetos pedagógicos provisórios
dos cursos a serem implantados, definido o processo seletivo para o ingresso dos
primeiros acadêmicos, estabelecidos os locais provisórios de funcionamento e constituída
parte da equipe dirigente que coordenaria os primeiros trabalhos na implantação
da UFFS. No dia 29 de março de 2010 foram iniciadas as aulas nos cinco*Campi*da
UFFS, com o ingresso de 2.160 acadêmicos selecionados com base nas notas do Enem/2009
e com a aplicação da bonificação para os que cursaram o ensino médio em escola
pública. Em cada campus foi realizada programação de recepção aos acadêmicos com
o envolvimento da comunidade interna e externa, visando marcar o primeiro dia
de aula na Universidade. Em um diagnóstico sobre os acadêmicos que ingressaram
na UFFS neste primeiro processo seletivo constatou-se que mais de 90% deles eram
oriundos da Escola Pública de Ensino Médio e que mais de 60% deles representavam
a primeira geração das famílias a acessar o ensino superior. **MINISTÉRIO DA EDUCAÇÃO**
**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**
**PRÓ-REITORIA DE GRADUAÇÃO**
**DIRETORIA DE PEDAGÓGICA**O início das aulas também ensejou o primeiro contato
mais direto dos acadêmicos e dos docentes com os projetos pedagógicos dos cursos
que haviam sido elaborados pela comissão de implantação da Universidade com base
em três grandes eixos: Domínio Comum, Domínio Conexo e Domínio Específico. Os
primeiros contatos foram evidenciando a necessidade de repensar os PPCs, tarefa
que se realizou ao longo dos anos de 2010 e 2011, sob a coordenação dos respectivos
colegiados de curso a fim de serem submetidos à Câmara de Graduação do Conselho
Universitário para aprovação definitiva. Nesta revisão consolidou-se uma concepção
de currículo assentada em um corpo de conhecimentos organizado em três domínios:
Comum, Conexo e Específico, expressos na matriz dos cursos, em componentes curriculares
e outras modalidades de organização do conhecimento. O Domínio Comum visa proporcionar
uma formação crítico-social e introduzir o acadêmico no ambiente universitário.
O Domínio Conexo situa-se na interface entre as áreas de conhecimento, objetivando
a formação e o diálogo interdisciplinar entre diferentes cursos, em cada*campus*.
O Domínio Específico preocupa-se com uma sólida formação profissional. Compreende-se
que os respectivos domínios são princípios articuladores entre o ensino, a pesquisa
e a extensão, fundantes do projeto pedagógico institucional. A organização dos*campi*,
com a constituição de suas equipes dirigentes, a definição dos coordenadores de
curso e a estruturação dos setores essenciais para garantir a funcionalidade do
projeto da Universidade foi um desafio encarado ao longo do primeiro ano de funcionamento.
Iniciava-se aí a trajetória em busca da constituição de uma identidade e de uma
cultura institucional.'
- 'São Paulo: Companhia das Letras, 2007. | | | | |
| | GREMAUD, Amaury; VASCONCELLOS, Marco Antonio; JÚNIOR TONETO, Rudinei. Economia
brasileira contemporânea. 4. ed. São Paulo: Atlas, 2002. | | | | |
| | HUBERMAN, L. História da riqueza do homem. 21. ed. Rio de Janeiro: LTC,
1986. | | | | |
| | IANNI, O. Estado e capitalismo. 2. ed. rev. e ampl. São Paulo: Brasiliense,
1989. | | | | |
| | LEFF, Enrique. Epistemologia ambiental. 2. ed. São Paulo: Cortez, 2002.
| | | | |
| | LÖWY, Michael. Eco-socialismo e planificação democrática. Crítica Marxista,
São Paulo,
UNESP, n. 29, 2009. | |
|---|---|---|
| | LÖWY, Michael. Eco-socialismo e planificação democrática. Crítica Marxista,
São Paulo, UNESP, n. 29, 2009. | |
| | MARX, Karl. O capital: crítica da economia política. 14. ed. Rio de Janeiro:
Bertrand Brasil, 1994. | |
| | NAPOLEONI, Claúdio. Smith, Ricardo e Marx. Rio de Janeiro. 4. ed. Rio de
Janeiro: Graal, 1978. | |
| | PUTNAM, Robert D. Comunidade e democracia, a experiência da Itália moderna.
4. ed. Rio de Janeiro: FGV, 2005. | |
| | SEN, Amartia. Desenvolvimento como Liberdade. São Paulo: Companhia das Letras,
2000. | |
| | SMITH, Adam. Riqueza das nações: Uma investigação sobre a natureza e causas
da riqueza das nações. Curitiba: Hermes, 2001. | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| | OPTATIVA I | 04 | 60 |
| EMENTA | | | |
| Deve ser preenchida com o conteúdo de alguma das diversas disciplinas opcionais
propostas, segundo critérios predefinidos pelo Colegiado. | | | |
| OBJETIVO | | | |
| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| | OPTATIVA II | 04 | 60 |
| EMENTA | | | |
| Deve ser preenchida com o conteúdo de alguma das diversas disciplinas opcionais
propostas, segundo critérios predefinidos pelo Colegiado. | | | |
| OBJETIVO | | | |
| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano de ensino da disciplina a ser ministrada.'
- source_sentence: Qual é a capacidade de armazenamento padrão do disco rígido dos
computadores utilizados no Laboratório de Organização de Computadores e Sistemas
Digitais e qual é a velocidade de rotação mínima do disco rígido?
sentences:
- 'A pesquisa será fomentada através dos grupos de pesquisa do curso em projetos
de iniciação científica. O trabalho de conclusão de curso é outro componente importante
para o desenvolvimento da pesquisa pelo estudante, pois é composto por componentes
curriculares obrigatórios que o conduzem a planejar e executar um projeto de caráter
científico ou tecnológico. Este Projeto Pedagógico apresenta uma matriz curricular
em conformidade com as diretrizes que orientam a estrutura dos cursos da área
da Computação, com a demanda atual por profissionais e com as expectativas regionais
constituídas à luz dos movimentos sociais e dos fundamentos que regem a construção
da UFFS. **8.1 Matriz Curricular**O Curso Ciência da Computação possui duas entradas
anuais oferecidas em turnos diferentes. A primeira entrada, oferecida no primeiro
semestre de cada ano, é no turno vespertino, e a segunda entrada, oferecida no
segundo semestre de cada ano, é no turno noturno. Ambas as ofertas possuem carga
horária, componentes curriculares, ementários e referências iguais, diferindo
apenas no tempo de integralização curricular. A duração para o turno vespertino
é de 8 (oito) semestres e para o turno noturno, 10 (dez) semestres. Para o turno
noturno, destaca-se a distribuição dos componentes curriculares ao longo das duas
últimas fases (nona e décima fases): considerando-se a carga de trabalho exigida
pelos dois trabalhos de conclusão de curso, alocou-se um número reduzido de componentes
curriculares regulares, possibilitando ao aluno trabalhador dedicar-se aos trabalhos
de conclusão durante as duas últimas fases. Adicionalmente, salienta-se o fato
de que os componentes curriculares estarão dispostos em fase, destacando-se que
sempre haverá cinco optativas sendo ofertados, facilitando sua alocação e ampliando
as possibilidades de cumprimento da carga horária optativa. **8.1.1 Matriz curricular
do turno vespertino**| Fase | N.
Ordem | Código | COMPONENTE CURRICULAR | Créditos | Horas | Pré-requisitos |
|---|---|---|---|---|---|---|
| Fase | N.'
- 'Projeto Pedagógico do Curso de Graduação em Ciência da Computação – Bacharelado
**MINISTÉRIO DA EDUCAÇÃO**
**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**
**PRÓ-REITORIA DE GRADUAÇÃO**
**DIRETORIA DE PEDAGÓGICA****14.4 Laboratórios****14.4.2 Laboratórios de Organização
de Computadores e Sistemas Digitais**O objetivo principal do Laboratório de Organização
de Computadores e Sistemas Digitais é dar suporte às atividades teóricas e práticas
das disciplinas de circuitos digitais, sistemas digitais, organização de computadores,
arquitetura de computadores, linguagens de descrição de hardware (optativa) e
tópicos em automação industrial (optativa). Este laboratório deve permitir que
o aluno do curso de Ciência da Computação possa desenvolver atividades práticas
(experimentos e projetos) envolvendo sistemas digitais, tais como: implementar
circuitos combinacionais, implementar circuitos sequenciais, realizar simulações
via software de circuitos combinacionais e sequenciais, realizar prototipagem
de sistemas digitais em PLDs e FPGAs, além da implementação de circuitos inteligentes
através da programação de microcontroladores. Na tabela abaixo são descritos os
materiais necessários à implantação e funcionamento do laboratório. | Tabela 14:
Descrição dos materiais do laboratório: “Laboratório de Organização de Computadores
e Sistemas Digitais” | | |
|---|---|---|
| Laboratório de Organização de Computadores e Sistemas Digitais | | |
| Professor responsável: Luciano Lores Caimi | | |
| Alunos por turma: 25 | | |
| Área: 80 m2 | | Localização: Campus Chapecó |
| Qtd | Descrição | |
| 19 | Bancada com 1 metro por 2 metros com tomadas (8 tomadas) | |
| 30 | Cadeira Giratória com rodas | |
| 3 | Armário (2,0 x 0,8 x 1,80) (L x P x A) | |
| 1 | Quadro branco | |
| 30 | Computadores com processador quad-core frequência de 2.2 GHz e FSB 800
MHz ou superior 4 Mb RAM (memória tipo DIMM SDRAM DDR2 ou superior) Disco rígido
padrão Serial ATA II com capacidade de armazenamento de 400 Gbytes (ou superior)
e rotação 7200 rpm (ou superior) Unidade óptica do tipo DVD-RW Monitor de vídeo
LCD com tela de 19 polegadas, tela do tipo Matriz | |
| | Ativa – TFT LCD, interface digital, dot pitch de 0.264 mm e resolução
gráfica de 1280 x 1024 à 75 Hz
Teclado padrão ABNT-2 com interface USB
Mouse óptico com 3 botões, rolagem e interface USB
Interface de rede Gigabit Ethernet padrão IEEE 802.3x compatível com as
velocidades 10/100/1000, para conexão de cabo de par trançado, com
conectores RJ-45.
Interface para vídeo com alocação dinâmica de memória, que suporte a
resolução de 1900 x 1200 com taxa de atualização de 85 Hz e interface
digital.
02 (duas) interfaces para teclado e mouse padrão USB.
01 (uma) interface serial integrada na placa mãe, com conector de 9 pinos
padrão.
04 (quatro) interfaces USB (além das interfaces para mouse e teclado).
Gabinete tool less |
|---|---|
| | Ativa – TFT LCD, interface digital, dot pitch de 0.264 mm e resolução gráfica
de 1280 x 1024 à 75 Hz Teclado padrão ABNT-2 com interface USB Mouse óptico com
3 botões, rolagem e interface USB Interface de rede Gigabit Ethernet padrão IEEE
802.3x compatível com as velocidades 10/100/1000, para conexão de cabo de par
trançado, com conectores RJ-45.'
- '| | | |
| OBJETIVO | | | |
| Viabilizar a introdução no curso de temas não abordados em outros componentes.
Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da
ciência da computação e/ou aproveitando as experiências de professores ou outros
profissionais qualificados. Esta disciplina também serve para atualizações da
matriz curricular. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX649 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXVII | 2 | 30 |
| EMENTA | | | |
| Conteúdo variável respeitando os objetivos mencionados. | | | |
| OBJETIVO | | | |
| Viabilizar a introdução no curso de temas não abordados em outros componentes.
Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da
ciência da computação e/ou aproveitando as experiências de professores ou outros
profissionais qualificados. Esta disciplina também serve para atualizações da
matriz curricular. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX650 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXVIII | 2 | 30 |
| EMENTA | | | |
| Conteúdo variável respeitando os objetivos mencionados. | | | |
| OBJETIVO | | | |
| Viabilizar a introdução no curso de temas não abordados em outros componentes.
Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da
ciência da computação e/ou aproveitando as experiências de professores ou outros
profissionais qualificados. Esta disciplina também serve para atualizações da
matriz curricular. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| REFERÊNCIAS COMPLEMENTARES | | | |
| Contemplada no plano da disciplina a ser ministrada. | | | |
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
|---|---|---|---|
| Código | COMPONENTE CURRICULAR | Créditos | Horas |
| GEX651 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXIX | 2 | 30 |
| EMENTA | | | |
| Conteúdo variável respeitando os objetivos mencionados. | | | |
| OBJETIVO | | | |
| Viabilizar a introdução no curso de temas não abordados em outros componentes.
Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da
ciência da computação e/ou aproveitando as experiências de professores ou outros
profissionais qualificados. Esta disciplina também serve para atualizações da
matriz curricular. | | | |
| REFERÊNCIAS BÁSICAS | | | |
| Contemplada no plano da disciplina a ser ministrada.'
model-index:
- name: SentenceTransformer based on BAAI/bge-small-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.5895061728395061
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7839506172839507
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8518518518518519
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9135802469135802
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5895061728395061
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2613168724279835
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17037037037037034
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09135802469135801
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5895061728395061
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7839506172839507
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8518518518518519
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9135802469135802
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7532477341933085
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7018285812267293
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7071249444125691
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.5895061728395061
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7839506172839507
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8518518518518519
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.9135802469135802
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.5895061728395061
name: Dot Precision@1
- type: dot_precision@3
value: 0.2613168724279835
name: Dot Precision@3
- type: dot_precision@5
value: 0.17037037037037034
name: Dot Precision@5
- type: dot_precision@10
value: 0.09135802469135801
name: Dot Precision@10
- type: dot_recall@1
value: 0.5895061728395061
name: Dot Recall@1
- type: dot_recall@3
value: 0.7839506172839507
name: Dot Recall@3
- type: dot_recall@5
value: 0.8518518518518519
name: Dot Recall@5
- type: dot_recall@10
value: 0.9135802469135802
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7532477341933085
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.7018285812267293
name: Dot Mrr@10
- type: dot_map@100
value: 0.7071249444125691
name: Dot Map@100
---
# SentenceTransformer based on BAAI/bge-small-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Qual é a capacidade de armazenamento padrão do disco rígido dos computadores utilizados no Laboratório de Organização de Computadores e Sistemas Digitais e qual é a velocidade de rotação mínima do disco rígido?',
'Projeto Pedagógico do Curso de Graduação em Ciência da Computação – Bacharelado **MINISTÉRIO DA EDUCAÇÃO**\n**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**\n**PRÓ-REITORIA DE GRADUAÇÃO**\n**DIRETORIA DE PEDAGÓGICA****14.4 Laboratórios****14.4.2 Laboratórios de Organização de Computadores e Sistemas Digitais**O objetivo principal do Laboratório de Organização de Computadores e Sistemas Digitais é dar suporte às atividades teóricas e práticas das disciplinas de circuitos digitais, sistemas digitais, organização de computadores, arquitetura de computadores, linguagens de descrição de hardware (optativa) e tópicos em automação industrial (optativa). Este laboratório deve permitir que o aluno do curso de Ciência da Computação possa desenvolver atividades práticas (experimentos e projetos) envolvendo sistemas digitais, tais como: implementar circuitos combinacionais, implementar circuitos sequenciais, realizar simulações via software de circuitos combinacionais e sequenciais, realizar prototipagem de sistemas digitais em PLDs e FPGAs, além da implementação de circuitos inteligentes através da programação de microcontroladores. Na tabela abaixo são descritos os materiais necessários à implantação e funcionamento do laboratório. | Tabela 14: Descrição dos materiais do laboratório: “Laboratório de Organização de Computadores e Sistemas Digitais” | | |\n|---|---|---|\n| Laboratório de Organização de Computadores e Sistemas Digitais | | |\n| Professor responsável: Luciano Lores Caimi | | |\n| Alunos por turma: 25 | | |\n| Área: 80 m2 | | Localização: Campus Chapecó |\n| Qtd | Descrição | |\n| 19 | Bancada com 1 metro por 2 metros com tomadas (8 tomadas) | |\n| 30 | Cadeira Giratória com rodas | |\n| 3 | Armário (2,0 x 0,8 x 1,80) (L x P x A) | |\n| 1 | Quadro branco | |\n| 30 | Computadores com processador quad-core frequência de 2.2 GHz e FSB 800 MHz ou superior 4 Mb RAM (memória tipo DIMM SDRAM DDR2 ou superior) Disco rígido padrão Serial ATA II com capacidade de armazenamento de 400 Gbytes (ou superior) e rotação 7200 rpm (ou superior) Unidade óptica do tipo DVD-RW Monitor de vídeo LCD com tela de 19 polegadas, tela do tipo Matriz | |\n| | Ativa – TFT LCD, interface digital, dot pitch de 0.264 mm e resolução\ngráfica de 1280 x 1024 à 75 Hz\nTeclado padrão ABNT-2 com interface USB\nMouse óptico com 3 botões, rolagem e interface USB\nInterface de rede Gigabit Ethernet padrão IEEE 802.3x compatível com as\nvelocidades 10/100/1000, para conexão de cabo de par trançado, com\nconectores RJ-45.\nInterface para vídeo com alocação dinâmica de memória, que suporte a\nresolução de 1900 x 1200 com taxa de atualização de 85 Hz e interface\ndigital.\n02 (duas) interfaces para teclado e mouse padrão USB.\n01 (uma) interface serial integrada na placa mãe, com conector de 9 pinos\npadrão.\n04 (quatro) interfaces USB (além das interfaces para mouse e teclado).\nGabinete tool less |\n|---|---|\n| | Ativa – TFT LCD, interface digital, dot pitch de 0.264 mm e resolução gráfica de 1280 x 1024 à 75 Hz Teclado padrão ABNT-2 com interface USB Mouse óptico com 3 botões, rolagem e interface USB Interface de rede Gigabit Ethernet padrão IEEE 802.3x compatível com as velocidades 10/100/1000, para conexão de cabo de par trançado, com conectores RJ-45.',
'| | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX649 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXVII | 2 | 30 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX650 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXVIII | 2 | 30 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX651 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXIX | 2 | 30 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5895 |
| cosine_accuracy@3 | 0.784 |
| cosine_accuracy@5 | 0.8519 |
| cosine_accuracy@10 | 0.9136 |
| cosine_precision@1 | 0.5895 |
| cosine_precision@3 | 0.2613 |
| cosine_precision@5 | 0.1704 |
| cosine_precision@10 | 0.0914 |
| cosine_recall@1 | 0.5895 |
| cosine_recall@3 | 0.784 |
| cosine_recall@5 | 0.8519 |
| cosine_recall@10 | 0.9136 |
| cosine_ndcg@10 | 0.7532 |
| cosine_mrr@10 | 0.7018 |
| **cosine_map@100** | **0.7071** |
| dot_accuracy@1 | 0.5895 |
| dot_accuracy@3 | 0.784 |
| dot_accuracy@5 | 0.8519 |
| dot_accuracy@10 | 0.9136 |
| dot_precision@1 | 0.5895 |
| dot_precision@3 | 0.2613 |
| dot_precision@5 | 0.1704 |
| dot_precision@10 | 0.0914 |
| dot_recall@1 | 0.5895 |
| dot_recall@3 | 0.784 |
| dot_recall@5 | 0.8519 |
| dot_recall@10 | 0.9136 |
| dot_ndcg@10 | 0.7532 |
| dot_mrr@10 | 0.7018 |
| dot_map@100 | 0.7071 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 324 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 324 samples:
| | sentence_0 | sentence_1 |
|:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 20 tokens</li><li>mean: 47.02 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 206 tokens</li><li>mean: 508.72 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:----------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Qual é a cidade onde a sede da Universidade Federal da Fronteira Sul está localizada?</code> | <code>**MINISTÉRIO DA EDUCAÇÃO**<br>**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**<br>**PRÓ-REITORIA DE GRADUAÇÃO**<br>**DIRETORIA DE PEDAGÓGICA****IDENTIFICAÇÃO INSTITUCIONAL**A Universidade Federal da Fronteira Sul foi criada pela Lei Nº 12.029, de 15 de setembro de 2009. Tem abrangência interestadual com sede na cidade catarinense de Chapecó, três*campi*no Rio Grande do Sul – Cerro Largo, Erechim e Passo Fundo – e dois *campi*no Paraná – Laranjeiras do Sul e Realeza. **Endereço da Reitoria:**<br>Avenida Fernando Machado, 108 E <br>Bairro Centro – CEP 89802-112 – Chapecó-SC.</code> |
| <code>Qual o número da lei que criou a Universidade Federal da Fronteira Sul?</code> | <code>**MINISTÉRIO DA EDUCAÇÃO**<br>**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**<br>**PRÓ-REITORIA DE GRADUAÇÃO**<br>**DIRETORIA DE PEDAGÓGICA****IDENTIFICAÇÃO INSTITUCIONAL**A Universidade Federal da Fronteira Sul foi criada pela Lei Nº 12.029, de 15 de setembro de 2009. Tem abrangência interestadual com sede na cidade catarinense de Chapecó, três*campi*no Rio Grande do Sul – Cerro Largo, Erechim e Passo Fundo – e dois *campi*no Paraná – Laranjeiras do Sul e Realeza. **Endereço da Reitoria:**<br>Avenida Fernando Machado, 108 E <br>Bairro Centro – CEP 89802-112 – Chapecó-SC.</code> |
| <code>Qual é o nome do reitor da Universidade Federal da Fronteira Sul (UFFS) e qual cidade é a sede da reitoria da instituição?</code> | <code>Tem abrangência interestadual com sede na cidade catarinense de Chapecó, três*campi*no Rio Grande do Sul – Cerro Largo, Erechim e Passo Fundo – e dois *campi*no Paraná – Laranjeiras do Sul e Realeza. **Endereço da Reitoria:**<br>Avenida Fernando Machado, 108 E <br>Bairro Centro – CEP 89802-112 – Chapecó-SC. **Reitor**: Jaime Giolo **Vice-Reitor**: Antonio Inácio Andrioli **Pró-Reitor de Graduação**: João Alfredo Braida **Pró-Reitor de Pesquisa e Pós-Graduação**: Joviles Vitório Trevisol **Pró-Reitor de Extensão e Cultura:**Émerson Neves da Silva **Pró-Reitor de Administração e Infraestrutura:**Péricles Luiz Brustolin **Pró-Reitor de Planejamento**: Charles Albino Schultz **Pró-Reitor de Assuntos Estudantis:**Darlan Christiano Kroth **Pró-Reitor de Gestão de Pessoas:**Marcelo Recktenvald **Dirigentes de Chapecó-SC**Diretora de*Campus*: Lísia Regina Ferreira Michels Coordenadora Administrativa: Ana Cláudia Lara Prado Coordenador Acadêmico: Rosane Rossato Binotto **Dirigentes de Cerro Largo-RS**Diretor de*Campus*: Ivann Carlos Lago Coordenador Administrativo: Sandro Adriano Schneider Coordenadora Acadêmica: Lauren Lúcia Zamin **Dirigentes de Erechim-RS**Diretor de*Campus*: Anderson Andre Genro Alves Ribeiro Coordenador Administrativo: Guilhermo Romero Coordenadora Acadêmica: Juçara Spinelli Projeto Pedagógico do Curso de Graduação em Ciência da Computação – Bacharelado **Dirigentes de Laranjeiras do Sul-PR**Diretora de*Campus*: Janete Stoffel, Coordenador Administrativo: Sandro Neckel da Silva Coordenadora Acadêmica: Katia Aparecida Seganfredo **Índice**1 DADOS GERAIS DO CURSO...............................................................................................5 2 HISTÓRICO INSTITUCIONAL.............................................................................................7 3 EQUIPE DE COORDENAÇÃO E DE ELABORAÇÃO DO PPC......................................14 4 JUSTIFICATIVA....................................................................................................................16 5 REFERENCIAIS ORIENTADORES (Ético-políticos, Epistemológicos, Metodológicos e <br>Legais).......................................................................................................................................18 6 OBJETIVOS DO CURSO.....................................................................................................23 7 PERFIL DO EGRESSO.........................................................................................................24 8 ORGANIZAÇÃO CURRICULAR........................................................................................26 9 PROCESSO PEDAGÓGICO E DE GESTÃO DO CURSO E PROCESSO DE <br>AVALIAÇÃO DO ENSINO-APRENDIZAGEM...................................................................164 10 AUTOAVALIAÇÃO DO CURSO.....................................................................................168 11 ARTICULAÇÃO ENTRE ENSINO, PESQUISA E EXTENSÃO....................................170 12 PERFIL DOCENTE (competências, habilidades, comprometimento, entre outros) E <br>PROCESSO DE QUALIFICAÇÃO.......................................................................................172 13 QUADRO DE PESSOAL DOCENTE..............................................................................174 14 INFRAESTRUTURA NECESSÁRIA AO CURSO..........................................................178 15 APÊNDICES......................................................................................................................198 APÊNDICE I - REGULAMENTO DAS ATIVIDADES CURRICULARES <br>COMPLEMENTARES DO CURSO DE GRADUAÇÃO EM CIÊNCIA DA COMPUTAÇÃO <br>- BACHARELADO................................................................................................................198 APÊNDICE II - REGULAMENTO DE TRABALHO DE CONCLUSÃO DO CURSO DE <br>GRADUAÇÃO EM CIÊNCIA DA COMPUTAÇÃO - BACHARELADO..........................210 APÊNDICE III - REGULAMENTO DE APROVEITAMENTO POR EQUIVALÊNCIA DE <br>COMPONENTE CURRICULAR CURSADO COM APROVAÇÃO OU VALIDADO NA <br>MATRIZ DE 2010 PARA A MATRIZ DE 2018.....................................................................226 **MINISTÉRIO DA EDUCAÇÃO**<br>**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**<br>**PRÓ-REITORIA DE GRADUAÇÃO**<br>**DIRETORIA DE PEDAGÓGICA****1 DADOS GERAIS DO CURSO**1.1.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_map@100 |
|:------:|:----:|:--------------:|
| 1.0 | 33 | 0.5410 |
| 1.5152 | 50 | 0.5947 |
| 2.0 | 66 | 0.6343 |
| 3.0 | 99 | 0.6711 |
| 3.0303 | 100 | 0.6735 |
| 4.0 | 132 | 0.7057 |
| 4.5455 | 150 | 0.7044 |
| 5.0 | 165 | 0.7071 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on BAAI/bge-small-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Qual é a capacidade de armazenamento padrão do disco rígido dos computadores utilizados no Laboratório de Organização de Computadores e Sistemas Digitais e qual é a velocidade de rotação mínima do disco rígido?',
'Projeto Pedagógico do Curso de Graduação em Ciência da Computação – Bacharelado **MINISTÉRIO DA EDUCAÇÃO**\n**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**\n**PRÓ-REITORIA DE GRADUAÇÃO**\n**DIRETORIA DE PEDAGÓGICA****14.4 Laboratórios****14.4.2 Laboratórios de Organização de Computadores e Sistemas Digitais**O objetivo principal do Laboratório de Organização de Computadores e Sistemas Digitais é dar suporte às atividades teóricas e práticas das disciplinas de circuitos digitais, sistemas digitais, organização de computadores, arquitetura de computadores, linguagens de descrição de hardware (optativa) e tópicos em automação industrial (optativa). Este laboratório deve permitir que o aluno do curso de Ciência da Computação possa desenvolver atividades práticas (experimentos e projetos) envolvendo sistemas digitais, tais como: implementar circuitos combinacionais, implementar circuitos sequenciais, realizar simulações via software de circuitos combinacionais e sequenciais, realizar prototipagem de sistemas digitais em PLDs e FPGAs, além da implementação de circuitos inteligentes através da programação de microcontroladores. Na tabela abaixo são descritos os materiais necessários à implantação e funcionamento do laboratório. | Tabela 14: Descrição dos materiais do laboratório: “Laboratório de Organização de Computadores e Sistemas Digitais” | | |\n|---|---|---|\n| Laboratório de Organização de Computadores e Sistemas Digitais | | |\n| Professor responsável: Luciano Lores Caimi | | |\n| Alunos por turma: 25 | | |\n| Área: 80 m2 | | Localização: Campus Chapecó |\n| Qtd | Descrição | |\n| 19 | Bancada com 1 metro por 2 metros com tomadas (8 tomadas) | |\n| 30 | Cadeira Giratória com rodas | |\n| 3 | Armário (2,0 x 0,8 x 1,80) (L x P x A) | |\n| 1 | Quadro branco | |\n| 30 | Computadores com processador quad-core frequência de 2.2 GHz e FSB 800 MHz ou superior 4 Mb RAM (memória tipo DIMM SDRAM DDR2 ou superior) Disco rígido padrão Serial ATA II com capacidade de armazenamento de 400 Gbytes (ou superior) e rotação 7200 rpm (ou superior) Unidade óptica do tipo DVD-RW Monitor de vídeo LCD com tela de 19 polegadas, tela do tipo Matriz | |\n| | Ativa – TFT LCD, interface digital, dot pitch de 0.264 mm e resolução\ngráfica de 1280 x 1024 à 75 Hz\nTeclado padrão ABNT-2 com interface USB\nMouse óptico com 3 botões, rolagem e interface USB\nInterface de rede Gigabit Ethernet padrão IEEE 802.3x compatível com as\nvelocidades 10/100/1000, para conexão de cabo de par trançado, com\nconectores RJ-45.\nInterface para vídeo com alocação dinâmica de memória, que suporte a\nresolução de 1900 x 1200 com taxa de atualização de 85 Hz e interface\ndigital.\n02 (duas) interfaces para teclado e mouse padrão USB.\n01 (uma) interface serial integrada na placa mãe, com conector de 9 pinos\npadrão.\n04 (quatro) interfaces USB (além das interfaces para mouse e teclado).\nGabinete tool less |\n|---|---|\n| | Ativa – TFT LCD, interface digital, dot pitch de 0.264 mm e resolução gráfica de 1280 x 1024 à 75 Hz Teclado padrão ABNT-2 com interface USB Mouse óptico com 3 botões, rolagem e interface USB Interface de rede Gigabit Ethernet padrão IEEE 802.3x compatível com as velocidades 10/100/1000, para conexão de cabo de par trançado, com conectores RJ-45.',
'| | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX649 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXVII | 2 | 30 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX650 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXVIII | 2 | 30 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX651 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXIX | 2 | 30 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5895 |
| cosine_accuracy@3 | 0.784 |
| cosine_accuracy@5 | 0.8519 |
| cosine_accuracy@10 | 0.9136 |
| cosine_precision@1 | 0.5895 |
| cosine_precision@3 | 0.2613 |
| cosine_precision@5 | 0.1704 |
| cosine_precision@10 | 0.0914 |
| cosine_recall@1 | 0.5895 |
| cosine_recall@3 | 0.784 |
| cosine_recall@5 | 0.8519 |
| cosine_recall@10 | 0.9136 |
| cosine_ndcg@10 | 0.7532 |
| cosine_mrr@10 | 0.7018 |
| **cosine_map@100** | **0.7071** |
| dot_accuracy@1 | 0.5895 |
| dot_accuracy@3 | 0.784 |
| dot_accuracy@5 | 0.8519 |
| dot_accuracy@10 | 0.9136 |
| dot_precision@1 | 0.5895 |
| dot_precision@3 | 0.2613 |
| dot_precision@5 | 0.1704 |
| dot_precision@10 | 0.0914 |
| dot_recall@1 | 0.5895 |
| dot_recall@3 | 0.784 |
| dot_recall@5 | 0.8519 |
| dot_recall@10 | 0.9136 |
| dot_ndcg@10 | 0.7532 |
| dot_mrr@10 | 0.7018 |
| dot_map@100 | 0.7071 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 324 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 324 samples:
| | sentence_0 | sentence_1 |
|:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 20 tokens</li><li>mean: 47.02 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 206 tokens</li><li>mean: 508.72 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:----------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Qual é a cidade onde a sede da Universidade Federal da Fronteira Sul está localizada?</code> | <code>**MINISTÉRIO DA EDUCAÇÃO**<br>**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**<br>**PRÓ-REITORIA DE GRADUAÇÃO**<br>**DIRETORIA DE PEDAGÓGICA****IDENTIFICAÇÃO INSTITUCIONAL**A Universidade Federal da Fronteira Sul foi criada pela Lei Nº 12.029, de 15 de setembro de 2009. Tem abrangência interestadual com sede na cidade catarinense de Chapecó, três*campi*no Rio Grande do Sul – Cerro Largo, Erechim e Passo Fundo – e dois *campi*no Paraná – Laranjeiras do Sul e Realeza. **Endereço da Reitoria:**<br>Avenida Fernando Machado, 108 E <br>Bairro Centro – CEP 89802-112 – Chapecó-SC.</code> |
| <code>Qual o número da lei que criou a Universidade Federal da Fronteira Sul?</code> | <code>**MINISTÉRIO DA EDUCAÇÃO**<br>**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**<br>**PRÓ-REITORIA DE GRADUAÇÃO**<br>**DIRETORIA DE PEDAGÓGICA****IDENTIFICAÇÃO INSTITUCIONAL**A Universidade Federal da Fronteira Sul foi criada pela Lei Nº 12.029, de 15 de setembro de 2009. Tem abrangência interestadual com sede na cidade catarinense de Chapecó, três*campi*no Rio Grande do Sul – Cerro Largo, Erechim e Passo Fundo – e dois *campi*no Paraná – Laranjeiras do Sul e Realeza. **Endereço da Reitoria:**<br>Avenida Fernando Machado, 108 E <br>Bairro Centro – CEP 89802-112 – Chapecó-SC.</code> |
| <code>Qual é o nome do reitor da Universidade Federal da Fronteira Sul (UFFS) e qual cidade é a sede da reitoria da instituição?</code> | <code>Tem abrangência interestadual com sede na cidade catarinense de Chapecó, três*campi*no Rio Grande do Sul – Cerro Largo, Erechim e Passo Fundo – e dois *campi*no Paraná – Laranjeiras do Sul e Realeza. **Endereço da Reitoria:**<br>Avenida Fernando Machado, 108 E <br>Bairro Centro – CEP 89802-112 – Chapecó-SC. **Reitor**: Jaime Giolo **Vice-Reitor**: Antonio Inácio Andrioli **Pró-Reitor de Graduação**: João Alfredo Braida **Pró-Reitor de Pesquisa e Pós-Graduação**: Joviles Vitório Trevisol **Pró-Reitor de Extensão e Cultura:**Émerson Neves da Silva **Pró-Reitor de Administração e Infraestrutura:**Péricles Luiz Brustolin **Pró-Reitor de Planejamento**: Charles Albino Schultz **Pró-Reitor de Assuntos Estudantis:**Darlan Christiano Kroth **Pró-Reitor de Gestão de Pessoas:**Marcelo Recktenvald **Dirigentes de Chapecó-SC**Diretora de*Campus*: Lísia Regina Ferreira Michels Coordenadora Administrativa: Ana Cláudia Lara Prado Coordenador Acadêmico: Rosane Rossato Binotto **Dirigentes de Cerro Largo-RS**Diretor de*Campus*: Ivann Carlos Lago Coordenador Administrativo: Sandro Adriano Schneider Coordenadora Acadêmica: Lauren Lúcia Zamin **Dirigentes de Erechim-RS**Diretor de*Campus*: Anderson Andre Genro Alves Ribeiro Coordenador Administrativo: Guilhermo Romero Coordenadora Acadêmica: Juçara Spinelli Projeto Pedagógico do Curso de Graduação em Ciência da Computação – Bacharelado **Dirigentes de Laranjeiras do Sul-PR**Diretora de*Campus*: Janete Stoffel, Coordenador Administrativo: Sandro Neckel da Silva Coordenadora Acadêmica: Katia Aparecida Seganfredo **Índice**1 DADOS GERAIS DO CURSO...............................................................................................5 2 HISTÓRICO INSTITUCIONAL.............................................................................................7 3 EQUIPE DE COORDENAÇÃO E DE ELABORAÇÃO DO PPC......................................14 4 JUSTIFICATIVA....................................................................................................................16 5 REFERENCIAIS ORIENTADORES (Ético-políticos, Epistemológicos, Metodológicos e <br>Legais).......................................................................................................................................18 6 OBJETIVOS DO CURSO.....................................................................................................23 7 PERFIL DO EGRESSO.........................................................................................................24 8 ORGANIZAÇÃO CURRICULAR........................................................................................26 9 PROCESSO PEDAGÓGICO E DE GESTÃO DO CURSO E PROCESSO DE <br>AVALIAÇÃO DO ENSINO-APRENDIZAGEM...................................................................164 10 AUTOAVALIAÇÃO DO CURSO.....................................................................................168 11 ARTICULAÇÃO ENTRE ENSINO, PESQUISA E EXTENSÃO....................................170 12 PERFIL DOCENTE (competências, habilidades, comprometimento, entre outros) E <br>PROCESSO DE QUALIFICAÇÃO.......................................................................................172 13 QUADRO DE PESSOAL DOCENTE..............................................................................174 14 INFRAESTRUTURA NECESSÁRIA AO CURSO..........................................................178 15 APÊNDICES......................................................................................................................198 APÊNDICE I - REGULAMENTO DAS ATIVIDADES CURRICULARES <br>COMPLEMENTARES DO CURSO DE GRADUAÇÃO EM CIÊNCIA DA COMPUTAÇÃO <br>- BACHARELADO................................................................................................................198 APÊNDICE II - REGULAMENTO DE TRABALHO DE CONCLUSÃO DO CURSO DE <br>GRADUAÇÃO EM CIÊNCIA DA COMPUTAÇÃO - BACHARELADO..........................210 APÊNDICE III - REGULAMENTO DE APROVEITAMENTO POR EQUIVALÊNCIA DE <br>COMPONENTE CURRICULAR CURSADO COM APROVAÇÃO OU VALIDADO NA <br>MATRIZ DE 2010 PARA A MATRIZ DE 2018.....................................................................226 **MINISTÉRIO DA EDUCAÇÃO**<br>**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**<br>**PRÓ-REITORIA DE GRADUAÇÃO**<br>**DIRETORIA DE PEDAGÓGICA****1 DADOS GERAIS DO CURSO**1.1.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_map@100 |
|:------:|:----:|:--------------:|
| 1.0 | 33 | 0.5410 |
| 1.5152 | 50 | 0.5947 |
| 2.0 | 66 | 0.6343 |
| 3.0 | 99 | 0.6711 |
| 3.0303 | 100 | 0.6735 |
| 4.0 | 132 | 0.7057 |
| 4.5455 | 150 | 0.7044 |
| 5.0 | 165 | 0.7071 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-small-en-v1.5", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100", "dot_accuracy@1", "dot_accuracy@3", "dot_accuracy@5", "dot_accuracy@10", "dot_precision@1", "dot_precision@3", "dot_precision@5", "dot_precision@10", "dot_recall@1", "dot_recall@3", "dot_recall@5", "dot_recall@10", "dot_ndcg@10", "dot_mrr@10", "dot_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:324", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Qual tipo de documento deve ser incluído como anexo na compilação de TCC II, conforme o Artigo 16º, e qual é a condição para que ele seja aceito?", "sentences": ["**Art. 13º**As modalidades descritas na Tabela 1 são regidas conforme o seguinte: **§ 1º**Pesquisa científica básica ou aplicada refere-se a atividades realizadas no contexto \nde aplicação de método científico para investigação de um problema relevante para a \nárea e suas interdisciplinaridades. Projeto Pedagógico do Curso de Graduação em Ciência da Computação – Bacharelado 213 **§ 2º**Inovação e Negócios refere-se à atividades realizadas pelo aluno no contexto Indus- \ntrial e Mercadológico, seja esse baseado em empreendedorismo, inovação ou no meio \nempresarial, desde que gere um produto entregável. **§ 3º**Comprovação de experiências refere-se às atividades realizadas pelo aluno durante \no curso que comprovem um substancial crescimento profissional e a contribuição no \nmeio na qual a atividade foi conduzida. **§ 4º**As modalidades poderão ser regulamentadas pelo Colegiado de Curso em ato deli- \nberativo. **Art. 14º**A monografia é válida como TCC II conforme o seguinte: **§ 1º**Incluir o tema, a problematização, os objetivos gerais e específicos, a justificativa, \numa abrangente revisão bibliográfica do estado da arte do tema escolhido, os procedi- \nmentos metodológicos, os resultados, as discussões, o cronograma e o orçamento do \nprojeto, quando aplicável. **§ 2º**Respeitando-se os termos de direitos autorais em vigor, deve ser integrada ao acervo \nde trabalhos digitais da UFFS e ficar disponível à comunidade acadêmica como forma \nde divulgação do conhecimento adquirido e formado. **§ 3º**Seguir as normas metodológicas preconizadas pela UFFS. **Art. 15º**O artigo científico é válido como TCC II conforme o seguinte: **§ 1º**Diagramado em formato de conferência ou periódico reconhecido pela SBC, na es- \ntrutura de duas colunas, com no mínimo 8 (oito) páginas. **§ 2º**O proponente deve ser um dos autores do artigo científico. **§ 3º**Ser endossado pelo professor-orientador do TCC. **§ 4º**Respeitando-se os termos de direitos autorais em vigor, deve ser integrado ao acerto \nde trabalhos digitais da UFFS. **§ 5º**No caso de artigo científico aceito para publicação em periódico ou conferência re- \nconhecidos com Qualis na área, antes da data de sua apresentação à banca, ficam sus- \npensas as obrigações do §1º do Art. 11º. **§ 6º**Não estar vinculado às modalidades Inovação e Negócio, e Comprovação de Expe- \nriências. **Art. 16º**A compilação é válida como TCC II conforme o seguinte: Projeto Pedagógico do Curso de Graduação em Ciência da Computação – Bacharelado 214 **MINISTÉRIO DA EDUCAÇÃO**\n**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**\n**PRÓ-REITORIA DE GRADUAÇÃO**\n**DIRETORIA DE PEDAGÓGICA****§ 1º**Incluir introdução, procedimentos metodológicos, resultados e conclusões referen- \ntes ao tema abordado. **§ 2º**Apresentar outros documentos, como relatórios, artigos resumidos, manuais, etc, no \nformato de anexos. **§ 3º**Cada documento anexo à compilação deve ser reconhecido e aceito como de nature- \nza relevante e válido pelo Colegiado do Curso ou pela Comissão de Atividades Curricu- \nlares Complementares. **§ 4º**Cada documento anexo na compilação deve ser contextualizado com um texto que \nexplica como seus desdobramentos se conectam na contribuição do trabalho.", "Addison-Wesley Publishing, 2000. KUO, Franklin F. Multimedia Communications: Protocols & Applications. Prentice Hall, 1997. PAULA FILHO, W. de P. Multimídia: Conceitos e Aplicações. Rio de Janeiro: LTC Editora, 2000. VAUGHAN, T. Multimedia Making it Work. McGraw-Hill, 2001. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| AGNEW, P. W.; KELLERMAN, A. S. Distributed Multimedia: Technologies, Applications, and Opportunities in the Digital Information Industry. A Guide for Users and Providers. Addison Wesley, 1996. BUFORD, J. F. K. Multimedia Systems. Addison Wesley, 1994. FLUCKIGER, F. Understanding Networked Multimedia: Applications and Technology. Prentice Hall, 1995. KRISTOF, R.; SATRAN, A. Interactivity by Design: Creating & Communicating With New Media. Hayden Books, 1995. KUO, F.; EFFELSBERG, W.; GARCIA-LUNA-ACEVES, J. J. Multimedia Communications: Protocols and Applications. Prentice Hall PTR, 1998. ROESLER, V. Perspectivas em Transmissão Multimídia e TV Digital. Instituto de Informática, Universidade Federal do Rio Grande do Sul (UFRGS), 2007. STOLFI, G. Compressão de Imagens, JPEG, PEE647 – Princípios de Televisão Digital. 2002. VELHO, L. Computação Gráfica e Processamento de Imagens. McGrawHill, 1996. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX435 | ENGENHARIA DA USABILIDADE | 04 | 60 |\n| EMENTA | | | |\n| Fundamentos de interação humano-computador. Usabilidade. Critérios e recomendações ergonômicas. Estilos de interação. Metodologias, técnicas e ferramentas de concepção, projeto e implementação de sistemas interativos. Avaliação de interfaces. Normas técnicas. Acessibilidade. Usabilidade na Web. | | | |\n| OBJETIVO | | | |\n| Conhecer os conceitos de interação humano-computador e com as técnicas e métodos de desenvolvimento e avaliação de interfaces, visando a construção de sistemas úteis, acessíveis e agradáveis ao usuário final. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| BARBOSA, S. D. J.; SILVA, Bruno S. Interação Humano Computador. Rio de Janeiro: Elsevier, 2010. CYBIS, W. A.; BETIOL, A. H.; FAUST, R. Ergonomia e usabilidade: conhecimentos, métodos e aplicações. 2. ed. São Paulo: Novatec, 2010. NIELSEN, Jakob; LORANGER, Hoa. Usabilidade na Web: projetando websites com qualidade. Rio de Janeiro: Campus, 2007. PREECE, J.; ROGERS, Y.; SHARP, H. Design de interação: além da interação homem- computador. Porto Alegre: Bookman, 2005. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| KEATES, S. Designing for Accessibility: A Business Guide to Countering Design Exclusion. Routledge, 2007. MORAES, A. Design e Avaliação de Interface: ergodesign e interação homem- computador. 2AB, 2006. ROCHA, Heloisa Vieira; BARANAUSKAS, Maria Cecilia. Design e avaliação de interfaces humano-computador. Campinas: NIED/UNICAMP, 2003. SHNEIDERMAN, Ben; PLAISANT, Catherine.", "| | | |\n| OBJETIVO | | | |\n| Objetivo conforme tema a ser trabalhado. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano de curso a ser ministrado. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano de curso a ser ministrado. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX1084 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXXIV | 4 | 60 |\n| EMENTA | | | |\n| Ementa aberta conforme tema a ser trabalhado. | | | |\n| OBJETIVO | | | |\n| Objetivo conforme tema a ser trabalhado. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano de curso a ser ministrado. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano de curso a ser ministrado. | | | |\n**MINISTÉRIO DA EDUCAÇÃO**\n**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**\n**PRÓ-REITORIA DE GRADUAÇÃO**\n**DIRETORIA DE PEDAGÓGICA**| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX1085 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXXV | 4 | 60 |\n| EMENTA | | | |\n| Ementa aberta conforme tema a ser trabalhado. | | | |\n| OBJETIVO | | | |\n| Objetivo conforme tema a ser trabalhado. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano de curso a ser ministrado. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano de curso a ser ministrado. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX1086 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXXVI | 4 | 60 |\n| EMENTA | | | |\n| Ementa aberta conforme tema a ser trabalhado. | | | |\n| OBJETIVO | | | |\n| Objetivo conforme tema a ser trabalhado. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano de curso a ser ministrado. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano de curso a ser ministrado. | | | |\n**MINISTÉRIO DA EDUCAÇÃO**\n**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**\n**PRÓ-REITORIA DE GRADUAÇÃO**\n**DIRETORIA DE PEDAGÓGICA**| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX1087 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXXVII | 4 | 60 |\n| EMENTA | | | |\n| Ementa aberta conforme tema a ser trabalhado. | | | |\n| OBJETIVO | | | |\n| Objetivo conforme tema a ser trabalhado."]}, {"source_sentence": "Qual é a ementa da disciplina GEX657 e quais são as referências básicas e complementares contempladas no plano de ensino da disciplina?", "sentences": ["| | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| | OPTATIVA II | 04 | 60 |\n| EMENTA | | | |\n| Deve ser preenchida com o conteúdo de alguma das diversas disciplinas opcionais propostas, segundo critérios predefinidos pelo Colegiado. | | | |\n| OBJETIVO | | | |\n| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX657 | TRABALHO DE CONCLUSÃO DE CURSO I | 10 | 150 |\n| EMENTA | | | |\n| Conhecimento científico. Metodologia de pesquisa científica. Aspectos de escrita científica. Elaboração de projeto de pesquisa científica na área do conhecimento de ciência da computação. | | | |\n| OBJETIVO | | | |\n| Escrever um projeto científico na área de computação. Defender um projeto perante uma banca. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| BARROS, Adile J. P. de; LEHFELD, Neide A. S. Projeto de Pesquisa: Proposta Metodológica. Petrópolis-RJ: Vozes, 1990. BASTOS, Lilia da Rocha. Manual para a elaboração de projetos e relatórios de pesquisa, teses, dissertações e monografias. 5. ed. Rio de Janeiro: LTC, 2001. ORTIZ, Hilda Beatriz. Cadernos Metodológicos. Chapecó: Argos, 2001. WAZLAWICK, R. S. Metodologia de Pesquisa para Ciência da Computação. Rio de Janeiro: Elsevier, 2009. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Hor\nas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Hor as |\n| GCH292 | HISTÓRIA DA FRONTEIRA SUL | 04 | 60 |\n| EMENTA | | | |\n| Construção dos sentidos históricos. Noções de Identidade e de Fronteira. Invenção das tradições. Processos de povoamento, despovoamento e colonização. Conflitos econômicos e políticos. Choques culturais no processo de colonização. Questão indígena, cabocla e afrodescendente. | | | |\n| OBJETIVO | | | |\n| Compreender o processo de formação da região sul do Brasil por meio da análise de aspectos históricos do contexto de povoamento, despovoamento e colonização. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| BARTH, Frederik. Grupos étnicos e suas fronteiras. In: POUTIGNAT, Philippe; STREIFF- FENART, Jocelyne. Teorias da etnicidade. Seguido de grupos étnicos e suas fronteiras de Frederik Barth. São Paulo: Editora da UNESP, 1998. p 185-228. CUCHE, Denys.", "11ª Escola de Computação, COPPE/Sistemas, NCE/UFRJ, 1998. SCHILLER, J.; VOISARD, A. Location-Based Services. Elsevier, 2004. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Hora\ns |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Hora s |\n| GEX655 | SISTEMAS EMBARCADOS | 4 | 60 |\n| EMENTA | | | |\n| Conceito de sistemas embarcados, SoCs e MPSoCs. Software e hardware embarcado. Técnicas de projeto de sistemas embarcados. Modelos de computação. Sistemas operacionais embarcados. Infraestrutura de comunicação. | | | |\n| OBJETIVO | | | |\n| Apresentar os diferentes aspectos relacionados às arquiteturas e integração software (modelos de computação, middleware, sistemas operacionais, tempo real) e hardware (SoCs, MPSoCs), meios de interconexão e protocolos de comunicação (NoCs) que propiciem implementações eficientes. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| JANTSCH, Axel. Modeling embedded systems and socs: concurrency and time in models of computation. San Francisco: Morgan Kaufmann, 2004. JERRYA, Ahmed A. Multiprocessor systems-on-chips. Amsterdam: Elsevier, 2005. WOLF, Wayne. Computers as components: principles of embedded computing system design. San Francisco: Morgan Kaufmann, 2001. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| BECKER, Jürgen; HÜBNER, Michael. Multiprocessor System-on-Chip: Hardware Design and Tool Integration. Springer, 2011. KORNAROS, Georgios. Multi-Core Embedded Systems. CRC Press, 2010. MURDOCCA, Miles J. Introdução a Arquitetura de Computadores. Rio de Janeiro: Campus, 2001. PATTERSON, David A.; HENNESSY, John L. Organização e Projeto de Computadores. Rio de Janeiro: Elsevier, 2005. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX106 | COMPUTAÇÃO DISTRIBUÍDA | 04 | 60 |\n| EMENTA | | | |\n| Arquitetura de Sistemas Distribuídos. Paradigmas de Computação Distribuída: Troca de Mensagens, Cliente/Servidor, Comunicação em Grupo, Objetos Distribuídos. Comunicação entre Processos Distribuídos. Suporte de SO para Computação Distribuída. Concorrência e Sincronização em Sistemas Distribuídos. Consistência e Replicação de Dados em Sistemas Distribuídos. Sistemas de Arquivo Distribuídos. Computação em grade (grid computing). Arquiteturas e modelos de programação paralela. | | | |\n| OBJETIVO | | | |\n| Projetar e desenvolver sistemas computacionais de natureza distribuída. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| COULOURIS, George F; DOLLIMORE, Jean; KINDBERG, Tim. Sistemas distribuídos: conceitos e projeto. Porto Alegre: Bookman, 2007. 784 p. LIU, M. L. Distributed Computing: Principles and Applications. Pearson Addison Wesley, 2004. TANEMBAUM, A.; STEEN, M. Sistemas distribuídos: princípios e paradigmas. 2. ed. São Paulo: Pearson, 2008.", "Noções de usabilidade e critérios ergonômicos. | | | |\n| OBJETIVO | | | |\n| Analisar e projetar sistemas informatizados utilizando metodologia orientada a objetos. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| BOOCH, G.; RUMBAUGH, J.; JACOBSON, I. UML Guia do Usuário. Rio de Janeiro: Campus, 2006. GUEDES, G. T. A. UML 2 – Uma Abordagem Prática. São Paulo: Novatec, 2011. LARMAN, C. Utilizando UML e Padrões: uma Introdução à Análise e ao Projeto Orientados a Objetos. 3. ed. São Paulo: Bookman Companhia, 2007. WAZLAWICK, R. S. Análise e Projeto de Sistemas de Informação Orientados a Objetos. Rio de Janeiro: Campus, 2004. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| GAMMA, E. et al. Padrões de projeto: soluções reutilizáveis de software orientado a objetos. Ed. Bookman, 2000. PFLEEGER, S. L. Engenharia de Software. 2. ed. São Paulo: Prentice-Hall, 2004. PRESSMAN, R. S. Engenharia de Software. 6. ed. São Paulo: Bookman Companhia, 2006. RUMBAUGH, J.; BLAHA, M. Modelagem e Projetos Baseados em Objetos. Rio de Janeiro: Campus, 2004. SOMMERVILLE, I. Engenharia de Software. 8. ed. São Paulo: Addison - Wesley, 2008. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX105 | REDES DE COMPUTADORES | 04 | 60 |\n| EMENTA | | | |\n| Fundamentos de transmissão de dados e sistemas de comunicação. Conceitos, classificação e topologias das redes. Modelos de referência: OSI e TCP/IP. Protocolos e serviços de co- municação. Internet e Intranets. Meios de transmissão de dados. Equipamentos de interco- nexão. Avaliação de desempenho. | | | |\n| OBJETIVO | | | |\n| Compreender as redes de computadores, conhecendo o modelo de referência OSI, a arquitetura TCP/IP e seus protocolos, equipamentos e principais aplicações. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| COMER, Douglas E. Redes de computadores e Internet. 4. ed. Porto Alegre: Bookman, 2007. FOROUZAN, Behrouz A. Comunicação de dados e redes de computadores. McGraw- Hill - Artmed, 2008. KUROSE, James F.; ROSS, Keith W. Redes de computadores e a Internet: uma abordagem top-down. 5. ed. São Paulo: Pearson Addison Wesley, 2010. PETERSON, L.; DAVIE, B. Computer networks: a systems approach. 5. ed. Morgan Kaufmann, 2011. TANENBAUM, Andrew S. Redes de computadores. 5. ed. Rio de Janeiro-RJ: Campus, 2011. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| COMER, Douglas E. Interligação de redes com TCP/IP. 5. ed. Rio de Janeiro: Campus, 2006. v. 1. DANTAS, Mário. Tecnologias de redes de comunicação e computadores. Rio de Janeiro: Axcel Books, 2002. HUITEMA, C. Routing in the Internet."]}, {"source_sentence": "Qual o prazo máximo para a entrega da versão final da monografia ou artigo científico após a aprovação do TCC II, conforme o cronograma?", "sentences": ["Sintetizar, deduzir, elaborar hipóteses, estabelecer relações e comparações, detectar contradições, decidir, organizar, expressar-se e argumentar com clareza e coerência utilizando elementos de linguagem matemática. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| CONNALLY, E. et al. Funções para modelar variações: uma preparação para o cálculo. Rio de Janeiro: LTC, 2009. DEMANA, D. F. et al. Pré-Cálculo. São Paulo: Addison Wesley, 2009. DOLCE, O.; POMPEO, J. N. Fundamentos de Matemática Elementar: Geometria Plana. 8. ed. São Paulo: Atual, 2005. 9 v. _______. Fundamentos de Matemática Elementar: Geometria Espacial. 6. ed. São Paulo: Atual, 2005. 10 v. DORING, C. I.; DORING, L. R. Pré-cálculo. Porto Alegre: UFRGS, 2007. IEZZI, G.; MURAKAMI, C. Fundamentos de matemática elementar: Conjuntos, Funções. 8. ed. São Paulo: Atual, 2010. 1 v. IEZZI, G.; DOLCE, O.; MURAKAMI, C. Fundamentos de matemática elementar: Logaritmos. 8. ed. São Paulo: Atual, 2004. 2 v. IEZZI, G. Fundamentos de matemática elementar: Trigonometria. 8. ed. São Paulo: Atual, 2004. 3 v. MEDEIROS, V. Z. et al. Pré-Cálculo. 2. ed. São Paulo: Cengage Learning, 2009. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| ANTON, H. Cálculo. 8. ed. São Paulo: Bookman, 2007. 1 v. BARBOSA, J. L. M. Geometria Euclidiana Plana. Rio de janeiro: SBM, 2000. (Coleção do Professor de Matemática). FLEMMING, D. M.; GONÇALVES, M. B. Cálculo A. 6. ed. São Paulo: Prentice Hall, 2007. LEITHOLD, L. Cálculo com geometria analítica. 3. ed. São Paulo: HARBRA, 1994. 1 v. LIMA, E. L. et al. A Matemática do Ensino Médio. 6. ed. Rio de Janeiro: SBM, 2000. 2 v. (Coleção do Professor de Matemática). _______. A matemática do Ensino Médio. 4. ed. Rio de Janeiro: SBM, 1999. 1 v. (Coleção do Professor de Matemática). STEWART, J. Cálculo. 6. ed. São Paulo: Cengage Learning, 2010. 1 v. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GCH293 | INTRODUÇÃO À FILOSOFIA | 04 | 60 |\n| EMENTA | | | |\n| A natureza e especificidade do discurso filosófico e sua relação com outros campos do conhecimento; principais correntes do pensamento filosófico; Fundamentos filosóficos da Modernidade. Tópicos de Ética e de Epistemologia. | | | |\n| OBJETIVO | | | |\n| Refletir criticamente, através de pressupostos éticos e epistemológicos, acerca da modernidade. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| ABBA, Giuseppe.", "18 | Os critérios e as formas de avaliação do estudante, nos componentes curri- |\n| | |\nculares TCC I e TCC II serão propostos por uma comissão designada pelo coordenador do curso e para homologação do Colegiado de Curso. | Parágrafo único. | Após a homologação, os critérios e as formas de avaliação cons- |\n|---|---|\n| Parágrafo único. | Após a homologação, os critérios e as formas de avaliação cons- |\n| | |\ntarão nos respectivos planos de ensino dos componentes curriculares TCC I e II. | Art. 19 | Após a aprovação do TCC II, conforme o cronograma, o estudante deverá |\n|---|---|\n| Art. 19 | Após a aprovação do TCC II, conforme o cronograma, o estudante deverá |\n| | |\nentregar a versão final da monografia (no formato exigido pela biblioteca da UFFS) em até 30 dias, incorporando as correções e sugestões da banca avaliadora e a Folha de Aprovação assinada pelos membros da banca. | § 1º. | Antes da entrega da versão final da monografia, o aluno deve submetê-la à |\n|---|---|\n| § 1º. | Antes da entrega da versão final da monografia, o aluno deve submetê-la à |\n| | |\napreciação do orientador, o qual deverá verificar se o documento incorpora as sugestões e correções apontadas pela banca avaliadora. | § 2º. | A aprovação | no componente curricular TCC II fica | condicionada a entrega da |\n|---|---|---|---|\n| § 2º. | A aprovação | no componente curricular TCC II fica | condicionada a entrega da |\n| | | | |\nversão final da monografia ou artigo, no padrão e normas preconizados pela biblioteca da UFFS (conforme Mem. 014/2014-PROGRAD), ou conforme publicado, no caso de artigo científico. **DAS DISPOSIÇÕES GERAIS E TRANSITÓRIAS**| Art. 20 | Os casos omissos neste |\n|---|---|\n| Art. 20 | Os casos omissos neste |\n| | |\nserão decididos pelo respectivo Colegiado de Curso. | Art. 21 | Este | Regulamento de Trabalho de Conclusão de Curso | do curso de Ciência |\n|---|---|---|---|\n| Art. 21 | Este | Regulamento de Trabalho de Conclusão de Curso | do curso de Ciência |\n| | | | |\nda Computação entra em vigor após a sua aprovação na CGRAD juntamente ao PPC. **MINISTÉRIO DA EDUCAÇÃO**\n**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**\n**PRÓ-REITORIA DE GRADUAÇÃO**\n**DIRETORIA DE PEDAGÓGICA****APÊNDICE III - REGULAMENTO DE APROVEITAMENTO POR****EQUIVALÊNCIA DE COMPONENTE CURRICULAR CURSADO****COM APROVAÇÃO OU VALIDADO NA MATRIZ DE 2010 PARA A****MATRIZ DE 2018.****Art.", "| | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX628 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO VI | 4 | 60 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX629 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO VII | 4 | 60 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX630 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO VIII | 4 | 60 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada."]}, {"source_sentence": "Qual foi a função do professor Dilvo Ilvo Ristoff na Universidade Federal da Fronteira Sul (UFFS) no início de sua implantação?", "sentences": ["Atividades de extensão possibilitam a formação cidadã, aproximando os acadêmicos das discussões que integram o universo profissional e social. **MINISTÉRIO DA EDUCAÇÃO**\n**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**\n**PRÓ-REITORIA DE GRADUAÇÃO**\n**DIRETORIA DE PEDAGÓGICA**b) Formação cidadã: o currículo do curso apresentado visa levar o estudante ao permanente questionar-se sobre a realidade social que o cerca e a sua formação enquanto sujeito político pertencente a ela. Esta formação está caracterizada, principalmente, nos componentes curriculares do Domínio Comum. c) Compromisso com o desenvolvimento regional sustentável: os processos de ensino-aprendizagem deverão provocar debate cultural e desenvolvimento regional sustentável nos diferentes espaços de atuação do profissional da Ciência da Computação. d) Garantia de meios de acesso e permanência dos acadêmicos na Universidade: as ações da universidade deverão contribuir para a permanência do estudante na instituição através de bolsas de extensão, iniciação científica e também de permanência. e) Pluralidade: respeito às posições ideo-políticas dos estudantes e compromisso com o rigor acadêmico na apresentação das diferentes correntes para a formação do profissional da Ciência da Computação. f) Autonomia: construção e afirmação permanente da noção de sujeito ético-político, seja pelos discentes ou docentes, na participação institucional, na organização pedagógica e política do curso. g) Gestão participativa, democrática e transparente: a prática dos docentes vinculados à UFFS caracteriza-se pelo compromisso com a construção coletiva de seus cursos, a relação profissional pautada na democracia, com espaços para críticas, diálogos e constantes reavaliações da atuação do grupo. Também a participação efetiva e o estímulo permanente ao envolvimento discente com os desafios e deliberações para a melhoria dos cursos e da universidade. h) Avaliação permanente: a avaliação permanente é realizada de duas formas: (*i*) avaliação externa de responsabilidade do MEC, que atualmente é constituída pelo Exame Nacional de Desempenho dos Estudantes (ENADE) e a Avaliação Externa*in loco*, ambas parte do Sistema Nacional de Avaliação do Ensino Superior (SINAES) e (*ii*) avaliação interna, também denominada de autoavaliação, coordenada pela Comissão Própria de Avaliação (CPA) e por instrumentos próprios que contemplem as especificidades da Universidade, a qual acompanhará a qualidade das atividades desenvolvidas no curso de graduação em Ciência da Computação e o desempenho dos docentes, segundo a avaliação dos estudantes. **MINISTÉRIO DA EDUCAÇÃO**\n**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**\n**PRÓ-REITORIA DE GRADUAÇÃO**\n**DIRETORIA DE PEDAGÓGICA*****5.2 Epistemológicos***Comprometido com o processo de concepção e construção da Instituição, o curso de Ciência da Computação inclui no desenvolvimento do ensino, pesquisa e extensão, o princípio democrático de atuação. Este princípio busca “respeitar a pluralidade de pensamento e a diversidade cultural” como elo entre conhecimento técnico-científico e conhecimento social. Assim, as bases epistemológicas do curso são fundamentadas em práticas que visam à constante aquisição do conhecimento e à formação de um profissional capaz de gerar desenvolvimento e atender às demandas sociais. Para isso, a constante indagação, o uso de referências apropriadas, o exercício da avaliação crítica são ações essenciais para a formação acadêmica aliada ao desenvolvimento social.", "A promulgação da lei fez intensificar as atividades de estruturação da nova universidade, já que havia a meta de iniciar as atividades letivas no primeiro semestre de 2010. Em 21 de setembro de 2009, o Ministro da Educação designou o professor Dilvo Ilvo Ristoff para o cargo de reitor*pro tempore*da UFFS, com a incumbência de coordenar os trabalhos para a implantação da nova universidade, sob a tutoria da Universidade Federal de Santa Catarina (UFSC). Ainda em 2009 foram realizados os primeiros concursos e posses de servidores, estruturados os projetos pedagógicos provisórios dos cursos a serem implantados, definido o processo seletivo para o ingresso dos primeiros acadêmicos, estabelecidos os locais provisórios de funcionamento e constituída parte da equipe dirigente que coordenaria os primeiros trabalhos na implantação da UFFS. No dia 29 de março de 2010 foram iniciadas as aulas nos cinco*Campi*da UFFS, com o ingresso de 2.160 acadêmicos selecionados com base nas notas do Enem/2009 e com a aplicação da bonificação para os que cursaram o ensino médio em escola pública. Em cada campus foi realizada programação de recepção aos acadêmicos com o envolvimento da comunidade interna e externa, visando marcar o primeiro dia de aula na Universidade. Em um diagnóstico sobre os acadêmicos que ingressaram na UFFS neste primeiro processo seletivo constatou-se que mais de 90% deles eram oriundos da Escola Pública de Ensino Médio e que mais de 60% deles representavam a primeira geração das famílias a acessar o ensino superior. **MINISTÉRIO DA EDUCAÇÃO**\n**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**\n**PRÓ-REITORIA DE GRADUAÇÃO**\n**DIRETORIA DE PEDAGÓGICA**O início das aulas também ensejou o primeiro contato mais direto dos acadêmicos e dos docentes com os projetos pedagógicos dos cursos que haviam sido elaborados pela comissão de implantação da Universidade com base em três grandes eixos: Domínio Comum, Domínio Conexo e Domínio Específico. Os primeiros contatos foram evidenciando a necessidade de repensar os PPCs, tarefa que se realizou ao longo dos anos de 2010 e 2011, sob a coordenação dos respectivos colegiados de curso a fim de serem submetidos à Câmara de Graduação do Conselho Universitário para aprovação definitiva. Nesta revisão consolidou-se uma concepção de currículo assentada em um corpo de conhecimentos organizado em três domínios: Comum, Conexo e Específico, expressos na matriz dos cursos, em componentes curriculares e outras modalidades de organização do conhecimento. O Domínio Comum visa proporcionar uma formação crítico-social e introduzir o acadêmico no ambiente universitário. O Domínio Conexo situa-se na interface entre as áreas de conhecimento, objetivando a formação e o diálogo interdisciplinar entre diferentes cursos, em cada*campus*. O Domínio Específico preocupa-se com uma sólida formação profissional. Compreende-se que os respectivos domínios são princípios articuladores entre o ensino, a pesquisa e a extensão, fundantes do projeto pedagógico institucional. A organização dos*campi*, com a constituição de suas equipes dirigentes, a definição dos coordenadores de curso e a estruturação dos setores essenciais para garantir a funcionalidade do projeto da Universidade foi um desafio encarado ao longo do primeiro ano de funcionamento. Iniciava-se aí a trajetória em busca da constituição de uma identidade e de uma cultura institucional.", "São Paulo: Companhia das Letras, 2007. | | | | |\n| | GREMAUD, Amaury; VASCONCELLOS, Marco Antonio; JÚNIOR TONETO, Rudinei. Economia brasileira contemporânea. 4. ed. São Paulo: Atlas, 2002. | | | | |\n| | HUBERMAN, L. História da riqueza do homem. 21. ed. Rio de Janeiro: LTC, 1986. | | | | |\n| | IANNI, O. Estado e capitalismo. 2. ed. rev. e ampl. São Paulo: Brasiliense, 1989. | | | | |\n| | LEFF, Enrique. Epistemologia ambiental. 2. ed. São Paulo: Cortez, 2002. | | | | |\n| | LÖWY, Michael. Eco-socialismo e planificação democrática. Crítica Marxista, São Paulo,\nUNESP, n. 29, 2009. | |\n|---|---|---|\n| | LÖWY, Michael. Eco-socialismo e planificação democrática. Crítica Marxista, São Paulo, UNESP, n. 29, 2009. | |\n| | MARX, Karl. O capital: crítica da economia política. 14. ed. Rio de Janeiro: Bertrand Brasil, 1994. | |\n| | NAPOLEONI, Claúdio. Smith, Ricardo e Marx. Rio de Janeiro. 4. ed. Rio de Janeiro: Graal, 1978. | |\n| | PUTNAM, Robert D. Comunidade e democracia, a experiência da Itália moderna. 4. ed. Rio de Janeiro: FGV, 2005. | |\n| | SEN, Amartia. Desenvolvimento como Liberdade. São Paulo: Companhia das Letras, 2000. | |\n| | SMITH, Adam. Riqueza das nações: Uma investigação sobre a natureza e causas da riqueza das nações. Curitiba: Hermes, 2001. | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| | OPTATIVA I | 04 | 60 |\n| EMENTA | | | |\n| Deve ser preenchida com o conteúdo de alguma das diversas disciplinas opcionais propostas, segundo critérios predefinidos pelo Colegiado. | | | |\n| OBJETIVO | | | |\n| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| | OPTATIVA II | 04 | 60 |\n| EMENTA | | | |\n| Deve ser preenchida com o conteúdo de alguma das diversas disciplinas opcionais propostas, segundo critérios predefinidos pelo Colegiado. | | | |\n| OBJETIVO | | | |\n| Contemplada no plano de ensino da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano de ensino da disciplina a ser ministrada."]}, {"source_sentence": "Qual é a capacidade de armazenamento padrão do disco rígido dos computadores utilizados no Laboratório de Organização de Computadores e Sistemas Digitais e qual é a velocidade de rotação mínima do disco rígido?", "sentences": ["A pesquisa será fomentada através dos grupos de pesquisa do curso em projetos de iniciação científica. O trabalho de conclusão de curso é outro componente importante para o desenvolvimento da pesquisa pelo estudante, pois é composto por componentes curriculares obrigatórios que o conduzem a planejar e executar um projeto de caráter científico ou tecnológico. Este Projeto Pedagógico apresenta uma matriz curricular em conformidade com as diretrizes que orientam a estrutura dos cursos da área da Computação, com a demanda atual por profissionais e com as expectativas regionais constituídas à luz dos movimentos sociais e dos fundamentos que regem a construção da UFFS. **8.1 Matriz Curricular**O Curso Ciência da Computação possui duas entradas anuais oferecidas em turnos diferentes. A primeira entrada, oferecida no primeiro semestre de cada ano, é no turno vespertino, e a segunda entrada, oferecida no segundo semestre de cada ano, é no turno noturno. Ambas as ofertas possuem carga horária, componentes curriculares, ementários e referências iguais, diferindo apenas no tempo de integralização curricular. A duração para o turno vespertino é de 8 (oito) semestres e para o turno noturno, 10 (dez) semestres. Para o turno noturno, destaca-se a distribuição dos componentes curriculares ao longo das duas últimas fases (nona e décima fases): considerando-se a carga de trabalho exigida pelos dois trabalhos de conclusão de curso, alocou-se um número reduzido de componentes curriculares regulares, possibilitando ao aluno trabalhador dedicar-se aos trabalhos de conclusão durante as duas últimas fases. Adicionalmente, salienta-se o fato de que os componentes curriculares estarão dispostos em fase, destacando-se que sempre haverá cinco optativas sendo ofertados, facilitando sua alocação e ampliando as possibilidades de cumprimento da carga horária optativa. **8.1.1 Matriz curricular do turno vespertino**| Fase | N.\nOrdem | Código | COMPONENTE CURRICULAR | Créditos | Horas | Pré-requisitos |\n|---|---|---|---|---|---|---|\n| Fase | N.", "Projeto Pedagógico do Curso de Graduação em Ciência da Computação – Bacharelado **MINISTÉRIO DA EDUCAÇÃO**\n**UNIVERSIDADE FEDERAL DA FRONTEIRA SUL – UFFS**\n**PRÓ-REITORIA DE GRADUAÇÃO**\n**DIRETORIA DE PEDAGÓGICA****14.4 Laboratórios****14.4.2 Laboratórios de Organização de Computadores e Sistemas Digitais**O objetivo principal do Laboratório de Organização de Computadores e Sistemas Digitais é dar suporte às atividades teóricas e práticas das disciplinas de circuitos digitais, sistemas digitais, organização de computadores, arquitetura de computadores, linguagens de descrição de hardware (optativa) e tópicos em automação industrial (optativa). Este laboratório deve permitir que o aluno do curso de Ciência da Computação possa desenvolver atividades práticas (experimentos e projetos) envolvendo sistemas digitais, tais como: implementar circuitos combinacionais, implementar circuitos sequenciais, realizar simulações via software de circuitos combinacionais e sequenciais, realizar prototipagem de sistemas digitais em PLDs e FPGAs, além da implementação de circuitos inteligentes através da programação de microcontroladores. Na tabela abaixo são descritos os materiais necessários à implantação e funcionamento do laboratório. | Tabela 14: Descrição dos materiais do laboratório: “Laboratório de Organização de Computadores e Sistemas Digitais” | | |\n|---|---|---|\n| Laboratório de Organização de Computadores e Sistemas Digitais | | |\n| Professor responsável: Luciano Lores Caimi | | |\n| Alunos por turma: 25 | | |\n| Área: 80 m2 | | Localização: Campus Chapecó |\n| Qtd | Descrição | |\n| 19 | Bancada com 1 metro por 2 metros com tomadas (8 tomadas) | |\n| 30 | Cadeira Giratória com rodas | |\n| 3 | Armário (2,0 x 0,8 x 1,80) (L x P x A) | |\n| 1 | Quadro branco | |\n| 30 | Computadores com processador quad-core frequência de 2.2 GHz e FSB 800 MHz ou superior 4 Mb RAM (memória tipo DIMM SDRAM DDR2 ou superior) Disco rígido padrão Serial ATA II com capacidade de armazenamento de 400 Gbytes (ou superior) e rotação 7200 rpm (ou superior) Unidade óptica do tipo DVD-RW Monitor de vídeo LCD com tela de 19 polegadas, tela do tipo Matriz | |\n| | Ativa – TFT LCD, interface digital, dot pitch de 0.264 mm e resolução\ngráfica de 1280 x 1024 à 75 Hz\nTeclado padrão ABNT-2 com interface USB\nMouse óptico com 3 botões, rolagem e interface USB\nInterface de rede Gigabit Ethernet padrão IEEE 802.3x compatível com as\nvelocidades 10/100/1000, para conexão de cabo de par trançado, com\nconectores RJ-45.\nInterface para vídeo com alocação dinâmica de memória, que suporte a\nresolução de 1900 x 1200 com taxa de atualização de 85 Hz e interface\ndigital.\n02 (duas) interfaces para teclado e mouse padrão USB.\n01 (uma) interface serial integrada na placa mãe, com conector de 9 pinos\npadrão.\n04 (quatro) interfaces USB (além das interfaces para mouse e teclado).\nGabinete tool less |\n|---|---|\n| | Ativa – TFT LCD, interface digital, dot pitch de 0.264 mm e resolução gráfica de 1280 x 1024 à 75 Hz Teclado padrão ABNT-2 com interface USB Mouse óptico com 3 botões, rolagem e interface USB Interface de rede Gigabit Ethernet padrão IEEE 802.3x compatível com as velocidades 10/100/1000, para conexão de cabo de par trançado, com conectores RJ-45.", "| | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX649 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXVII | 2 | 30 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX650 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXVIII | 2 | 30 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| REFERÊNCIAS COMPLEMENTARES | | | |\n| Contemplada no plano da disciplina a ser ministrada. | | | |\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n|---|---|---|---|\n| Código | COMPONENTE CURRICULAR | Créditos | Horas |\n| GEX651 | TÓPICOS ESPECIAIS EM COMPUTAÇÃO XXIX | 2 | 30 |\n| EMENTA | | | |\n| Conteúdo variável respeitando os objetivos mencionados. | | | |\n| OBJETIVO | | | |\n| Viabilizar a introdução no curso de temas não abordados em outros componentes. Deve ser relevante para o momento, traduzindo a evolução das tecnologias e da ciência da computação e/ou aproveitando as experiências de professores ou outros profissionais qualificados. Esta disciplina também serve para atualizações da matriz curricular. | | | |\n| REFERÊNCIAS BÁSICAS | | | |\n| Contemplada no plano da disciplina a ser ministrada."]}], "model-index": [{"name": "SentenceTransformer based on BAAI/bge-small-en-v1.5", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5895061728395061, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7839506172839507, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8518518518518519, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9135802469135802, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5895061728395061, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2613168724279835, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17037037037037034, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09135802469135801, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5895061728395061, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7839506172839507, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8518518518518519, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9135802469135802, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7532477341933085, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7018285812267293, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7071249444125691, "name": "Cosine Map@100"}, {"type": "dot_accuracy@1", "value": 0.5895061728395061, "name": "Dot Accuracy@1"}, {"type": "dot_accuracy@3", "value": 0.7839506172839507, "name": "Dot Accuracy@3"}, {"type": "dot_accuracy@5", "value": 0.8518518518518519, "name": "Dot Accuracy@5"}, {"type": "dot_accuracy@10", "value": 0.9135802469135802, "name": "Dot Accuracy@10"}, {"type": "dot_precision@1", "value": 0.5895061728395061, "name": "Dot Precision@1"}, {"type": "dot_precision@3", "value": 0.2613168724279835, "name": "Dot Precision@3"}, {"type": "dot_precision@5", "value": 0.17037037037037034, "name": "Dot Precision@5"}, {"type": "dot_precision@10", "value": 0.09135802469135801, "name": "Dot Precision@10"}, {"type": "dot_recall@1", "value": 0.5895061728395061, "name": "Dot Recall@1"}, {"type": "dot_recall@3", "value": 0.7839506172839507, "name": "Dot Recall@3"}, {"type": "dot_recall@5", "value": 0.8518518518518519, "name": "Dot Recall@5"}, {"type": "dot_recall@10", "value": 0.9135802469135802, "name": "Dot Recall@10"}, {"type": "dot_ndcg@10", "value": 0.7532477341933085, "name": "Dot Ndcg@10"}, {"type": "dot_mrr@10", "value": 0.7018285812267293, "name": "Dot Mrr@10"}, {"type": "dot_map@100", "value": 0.7071249444125691, "name": "Dot Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,319 |
Seungjun/articleGeneratorV1.0
|
Seungjun
|
text2text-generation
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-10T09:02:51Z |
2023-04-10T10:19:56+00:00
| 10 | 1 |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: articleGeneratorV1.0
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# What does model do and how to use it
Just provide an title to the model and it will generate a whole article about it.
```python
# Install transformers library
!pip install transformers
```
```python
# Load tokenizer and model
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM
model_name = "Seungjun/articleGeneratorV1.0"
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name)
```
```python
# Get the article for a given title
from transformers import pipeline
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="tf")
summarizer(
"Steve Jobs", # title
min_length=500,
max_length=1024,
)
```
Result:
# Current limitation of the model
It generate aot of lies. 99% of the word generated by this model is not true.
# articleGeneratorV1.0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.9568
- Validation Loss: 3.6096
- Train Rougel: tf.Tensor(0.08172019, shape=(), dtype=float32)
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rougel | Epoch |
|:----------:|:---------------:|:-----------------------------------------------:|:-----:|
| 4.9218 | 4.0315 | tf.Tensor(0.08038119, shape=(), dtype=float32) | 0 |
| 4.2887 | 3.8366 | tf.Tensor(0.08103053, shape=(), dtype=float32) | 1 |
| 4.1269 | 3.7328 | tf.Tensor(0.081041485, shape=(), dtype=float32) | 2 |
| 4.0276 | 3.6614 | tf.Tensor(0.081364945, shape=(), dtype=float32) | 3 |
| 3.9568 | 3.6096 | tf.Tensor(0.08172019, shape=(), dtype=float32) | 4 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# What does model do and how to use it
Just provide an title to the model and it will generate a whole article about it.
```python
# Install transformers library
!pip install transformers
```
```python
# Load tokenizer and model
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM
model_name = "Seungjun/articleGeneratorV1.0"
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name)
```
```python
# Get the article for a given title
from transformers import pipeline
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="tf")
summarizer(
"Steve Jobs", # title
min_length=500,
max_length=1024,
)
```
Result:
# Current limitation of the model
It generate aot of lies. 99% of the word generated by this model is not true.
# articleGeneratorV1.0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.9568
- Validation Loss: 3.6096
- Train Rougel: tf.Tensor(0.08172019, shape=(), dtype=float32)
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rougel | Epoch |
|:----------:|:---------------:|:-----------------------------------------------:|:-----:|
| 4.9218 | 4.0315 | tf.Tensor(0.08038119, shape=(), dtype=float32) | 0 |
| 4.2887 | 3.8366 | tf.Tensor(0.08103053, shape=(), dtype=float32) | 1 |
| 4.1269 | 3.7328 | tf.Tensor(0.081041485, shape=(), dtype=float32) | 2 |
| 4.0276 | 3.6614 | tf.Tensor(0.081364945, shape=(), dtype=float32) | 3 |
| 3.9568 | 3.6096 | tf.Tensor(0.08172019, shape=(), dtype=float32) | 4 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "articleGeneratorV1.0", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,320 |
LoneStriker/airoboros-l2-70b-3.1.2-5.0bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:jondurbin/airoboros-3.1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-10-21T15:30:52Z |
2023-10-22T14:39:18+00:00
| 16 | 2 |
---
datasets:
- jondurbin/airoboros-3.1
license: llama2
---
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
This models uses llama-2 chat format, rather than vicuna style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3,0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context)
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
| null |
Non_BioNLP
|
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
This models uses llama-2 chat format, rather than vicuna style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3,0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context)
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
{"datasets": ["jondurbin/airoboros-3.1"], "license": "llama2"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 45,322 |
Shularp/krirk-finetuned-google_mt5-small
|
Shularp
|
translation
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-12-13T05:39:53Z |
2022-12-13T06:18:57+00:00
| 111 | 0 |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: krirk-finetuned-google_mt5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# krirk-finetuned-google_mt5-small
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# krirk-finetuned-google_mt5-small
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
{"license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "krirk-finetuned-google_mt5-small", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,323 |
Kiwihead15/marian-finetuned-kde4-en-to-fr
|
Kiwihead15
|
translation
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-07T10:55:35Z |
2023-10-11T17:49:03+00:00
| 5 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-fr
datasets:
- kde4
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
{"base_model": "Helsinki-NLP/opus-mt-en-fr", "datasets": ["kde4"], "license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,324 |
semindan/xnli_xlm_r_only_zh
|
semindan
|
text-classification
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:xnli",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-12-17T09:28:48Z |
2023-01-07T14:26:26+00:00
| 12 | 0 |
---
datasets:
- xnli
license: mit
metrics:
- accuracy
tags:
- text-classification
- generated_from_trainer
model-index:
- name: xnli_xlm_r_only_zh
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: xnli
type: xnli
config: zh
split: train
args: zh
metrics:
- type: accuracy
value: 0.770281124497992
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xnli_xlm_r_only_zh
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xnli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7534
- Accuracy: 0.7703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6993 | 1.0 | 3068 | 0.6278 | 0.7378 |
| 0.5667 | 2.0 | 6136 | 0.5791 | 0.7763 |
| 0.5042 | 3.0 | 9204 | 0.5830 | 0.7763 |
| 0.451 | 4.0 | 12272 | 0.5752 | 0.7755 |
| 0.403 | 5.0 | 15340 | 0.5771 | 0.7839 |
| 0.3594 | 6.0 | 18408 | 0.6354 | 0.7739 |
| 0.3203 | 7.0 | 21476 | 0.6506 | 0.7699 |
| 0.2883 | 8.0 | 24544 | 0.6894 | 0.7719 |
| 0.2609 | 9.0 | 27612 | 0.7366 | 0.7671 |
| 0.2413 | 10.0 | 30680 | 0.7534 | 0.7703 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xnli_xlm_r_only_zh
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xnli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7534
- Accuracy: 0.7703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6993 | 1.0 | 3068 | 0.6278 | 0.7378 |
| 0.5667 | 2.0 | 6136 | 0.5791 | 0.7763 |
| 0.5042 | 3.0 | 9204 | 0.5830 | 0.7763 |
| 0.451 | 4.0 | 12272 | 0.5752 | 0.7755 |
| 0.403 | 5.0 | 15340 | 0.5771 | 0.7839 |
| 0.3594 | 6.0 | 18408 | 0.6354 | 0.7739 |
| 0.3203 | 7.0 | 21476 | 0.6506 | 0.7699 |
| 0.2883 | 8.0 | 24544 | 0.6894 | 0.7719 |
| 0.2609 | 9.0 | 27612 | 0.7366 | 0.7671 |
| 0.2413 | 10.0 | 30680 | 0.7534 | 0.7703 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
{"datasets": ["xnli"], "license": "mit", "metrics": ["accuracy"], "tags": ["text-classification", "generated_from_trainer"], "model-index": [{"name": "xnli_xlm_r_only_zh", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "xnli", "type": "xnli", "config": "zh", "split": "train", "args": "zh"}, "metrics": [{"type": "accuracy", "value": 0.770281124497992, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,325 |
mini1013/master_cate_fd7
|
mini1013
|
text-classification
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] | 2024-11-27T11:00:54Z |
2024-11-27T11:01:15+00:00
| 615 | 0 |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 앙카라 스파게티면 5kg 지윤 주식회사
- text: 삼양 까르보불닭볶음면 130gx4봉-1팩/멀티라면 오뚜기_멀티 스낵면 108gx5봉-1팩 단비마켓
- text: '[오뚜기] 진라면 순한맛 5개 외 라면 멀티팩 인기상품 골라담기 (구매금액별 추가 할인) 삼양_삼양라면 5개 유진통상'
- text: 풀무원 바릴라 아티장 파스타면(130g) 20봉 (통밀스파게티/스파게티/링귀니/푸실리) 통밀스파게티면X20봉 메가글로벌001
- text: 농심 짜파게티 5개입 멀티팩(유통기한 24년 8월 1일 이후) 농심 신라면 건면 5입 멀티팩 세계로마트
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.9463862379186676
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1.0 | <ul><li>'동원 면발의신 얼큰칼국수268g x 4 코스트코 650449 상품 상세페이지 참조_268g x 4 탑럭셔리3'</li><li>'[청정원] 두부로만든 콩담백면 택1(옵션선택) 1.비빔 380g(2인) 아센드라도'</li><li>'라리 펜네 500g (유) 싱싱채소 그린팜'</li></ul> |
| 0.0 | <ul><li>'팔도 비빔면딸기 135g (5개입) x 1팩 주식회사 디씽컴퍼니'</li><li>'오뚜기 열 라면 120g 5개 서신빠마켓'</li><li>'농심 오징어짬뽕컵 67g x 6개 으쓱몰'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9464 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_fd7")
# Run inference
preds = model("앙카라 스파게티면 5kg 지윤 주식회사")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 8.94 | 20 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0625 | 1 | 0.394 | - |
| 3.125 | 50 | 0.0229 | - |
| 6.25 | 100 | 0.0002 | - |
| 9.375 | 150 | 0.0 | - |
| 12.5 | 200 | 0.0 | - |
| 15.625 | 250 | 0.0 | - |
| 18.75 | 300 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1.0 | <ul><li>'동원 면발의신 얼큰칼국수268g x 4 코스트코 650449 상품 상세페이지 참조_268g x 4 탑럭셔리3'</li><li>'[청정원] 두부로만든 콩담백면 택1(옵션선택) 1.비빔 380g(2인) 아센드라도'</li><li>'라리 펜네 500g (유) 싱싱채소 그린팜'</li></ul> |
| 0.0 | <ul><li>'팔도 비빔면딸기 135g (5개입) x 1팩 주식회사 디씽컴퍼니'</li><li>'오뚜기 열 라면 120g 5개 서신빠마켓'</li><li>'농심 오징어짬뽕컵 67g x 6개 으쓱몰'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9464 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_fd7")
# Run inference
preds = model("앙카라 스파게티면 5kg 지윤 주식회사")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 8.94 | 20 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0625 | 1 | 0.394 | - |
| 3.125 | 50 | 0.0229 | - |
| 6.25 | 100 | 0.0002 | - |
| 9.375 | 150 | 0.0 | - |
| 12.5 | 200 | 0.0 | - |
| 15.625 | 250 | 0.0 | - |
| 18.75 | 300 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "mini1013/master_domain", "library_name": "setfit", "metrics": ["metric"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "앙카라 스파게티면 5kg 지윤 주식회사"}, {"text": "삼양 까르보불닭볶음면 130gx4봉-1팩/멀티라면 오뚜기_멀티 스낵면 108gx5봉-1팩 단비마켓"}, {"text": "[오뚜기] 진라면 순한맛 5개 외 라면 멀티팩 인기상품 골라담기 (구매금액별 추가 할인) 삼양_삼양라면 5개 유진통상"}, {"text": "풀무원 바릴라 아티장 파스타면(130g) 20봉 (통밀스파게티/스파게티/링귀니/푸실리) 통밀스파게티면X20봉 메가글로벌001"}, {"text": "농심 짜파게티 5개입 멀티팩(유통기한 24년 8월 1일 이후) 농심 신라면 건면 5입 멀티팩 세계로마트"}], "inference": true, "model-index": [{"name": "SetFit with mini1013/master_domain", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "metric", "value": 0.9463862379186676, "name": "Metric"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,326 |
JordiAb/BERT_NED
|
JordiAb
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-03-01T13:39:36Z |
2024-03-01T15:02:36+00:00
| 15 | 0 |
---
language:
- en
pipeline_tag: text-classification
widget:
- text: 'Is ''AMLO'' in the context of: ''Andres Manuel Lopez Obrador defends Mexico''s
military after report on missing students case'', referring to [SEP] ''Andrés
Manuel López Obrador, President of Mexico since 2018''?'
- text: 'Is ''AMLO'' in the context of: ''Andres Manuel Lopez Obrador defends Mexico''s
military after report on missing students case'', referring to [SEP] ''Andrés
Manuel López Obrador contenderá por la presidencia de México en 2012, Wikinews
article''?'
---
# Named Entity Dissambiguation with a BERT model
This is a very task specific model. A BERT model was trained to dissambiguate WikiData options for entities mentioned in news articles.
Given an entity mentioned in a news article and different WikiData options of possible entities, and given the context of which the entity is being mentioned, which of the entity options is the context referring to?
Suppose you've got a news article that mentions Donald Trump. Donald Trump is mentioned in a sentence:
```python
entity = "Donald Trump"
sentence_mention = "It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus, after participating in a meeting in Florida where the US president was..."
```
If one performs a query to WikiData by searching a specific entity, you may get several options for a single entity, in example for Donald Trump:
```python
options = [
'Donald Trump, president of the United States from 2017 to 2021',
'Donald Trump, American physician',
'Donald Trump, Wikimedia disambiguation page',
'Donald Trump, song by Mac Miller',
'Donald Trump, segment of an episode of Last Week Tonight',
"Donald Trump, character Donald Trump in Anthony Davis's opera The Central Park Five",
'2016 United States presidential election, 58th quadrennial U.S. presidential election'
]
```
This model is trained to give a score to the following query string:
`f"Is '{entity}' in the context of: '{sentence_mention}', referring to [SEP] {option}?"`
**Make sure that query string doesn't exceed 512 tokens,** if it does, it is advisable to reduce the context of the entity mention in order avoid truncation of the query string.
To dissambiguate, one must compute scores for all options of a single entity that is mentioned in a context, and then get the option with the maximum score.
```python
qry_strings = [
`f"Is '{entity}' in the context of: '{sentence_mention}', referring to [SEP] {option}?"` for option in options
]
```
options scores for the above example:
- Donald Trump, president of the United States from 2017 to 2021: 0.9990746974945068
- Donald Trump, American physician: 0.00032277879654429853
- Donald Trump, Wikimedia disambiguation page: 0.00044132230686955154
- Donald Trump, song by Mac Miller: 0.0003152454155497253
- Donald Trump, segment of an episode of Last Week Tonight: 0.00031540714553557336
- Donald Trump, character Donald Trump in Anthony Davis's opera The Central Park Five: 0.00030414783395826817
- 2016 United States presidential election, 58th quadrennial U.S. presidential election: 0.0005287989042699337
## Using the Model
To compute the score of a single query string:
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification
entity = "Donald Trump"
sentence_mention = "It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus, after participating in a meeting in Florida where the US president was..."
options = [
'Donald Trump, president of the United States from 2017 to 2021',
'Donald Trump, American physician',
'Donald Trump, Wikimedia disambiguation page',
'Donald Trump, song by Mac Miller',
'Donald Trump, segment of an episode of Last Week Tonight',
"Donald Trump, character Donald Trump in Anthony Davis's opera The Central Park Five",
'2016 United States presidential election, 58th quadrennial U.S. presidential election'
]
option = options[0]
# predictions will be made on the gpu if there is a gpu available
device = torch.device(
"cuda" if torch.cuda.is_available() else "cpu"
)
# load the BERT NED model
model = BertForSequenceClassification.from_pretrained(
'JordiAb/BERT_NED'
).eval().to(device)
# load the BERT NED tokenizer
tokenizer = BertTokenizer.from_pretrained(
'JordiAb/BERT_NED'
)
# build the query string required by our BERT model. Namely:
query = f"Is '{entity}' in the context of: '{sentence_mention}', referring to [SEP] {option}?"
# encode and tokenize the query string
encoded_dict = tokenizer.encode_plus(
query, # Sentence to encode.
add_special_tokens = True, # Add '[CLS]' and '[SEP]'
max_length = 512, # Pad & truncate all sentences.
padding='max_length', # Make sure this applies padding as needed
truncation=True,
return_attention_mask = True, # Construct attention masks.
return_tensors = 'pt', # Return pytorch tensors.
)
# move input ids to GPU (if available)
input_ids=encoded_dict['input_ids'].to(device)
# move attention mask to GPU (if available)
attention_mask=encoded_dict['attention_mask'].to(device)
with torch.no_grad(): # avoid gradient computation to save memory
# forward pass of the model
outputs = model(
input_ids=input_ids,
token_type_ids=None,
attention_mask=attention_mask
)
# get logits of prediction
logits = outputs.logits
# Use softmax to get probabilities
probabilities = torch.nn.functional.softmax(logits, dim=1)
# is meant for one observation so return probabilities[0], move the resulting tensor to cpu and return it as numpy array
probabilities=probabilities[0].cpu().numpy()
```
`probabilities` is a numpy array containing the two probabilities. Probability of belongin to class 0 and probability of belonging to class 1.
`n np.array([prob0, prob1])`
In this case we are interested in the probability of belonging to class 1, since class 1 is the positive label as the YES answer to the query string `"Is '{entity}' in the context of: '{sentence_mention}', referring to [SEP] {option}?"`
## About the DataSet used for this training:
The dataset consists of news articles obtained from a Mexican newspaper, processed using Named Entity Recognition (NER) to identify entities within each article. Queries were made to WikiData for each identified entity in order to gather all potential matches of an entity. The `StableBeluga-7B` Language Model (LLM) assisted in disambiguating selected entities from the dataset, with its outputs serving as labels for training.
This project approaches the task as a binary classification problem. The training data includes entities from the articles, relevant sentences (context) where the entity is being mentioned and all WikiData options. Each entity-context-option triplet was paired with a binary label (1/0) to form a single training observation. The dataset construction process aimed to fine-tune the model. To ensure compatibility with model limitations, inputs were truncated to fit within a 512-token maximum.
For example, with the above Donald Trump example, the Data Set would look like:
```python
[
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, president of the United States from 2017 to 2021?",
"label": 1,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, American physician?",
"label": 0,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, Wikimedia disambiguation page?",
"label": 0,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, song by Mac Miller?",
"label": 0,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, segment of an episode of Last Week Tonight?",
"label": 0,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, character Donald Trump in Anthony Davis's opera The Central Park Five?",
"label": 0,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] 2016 United States presidential election, 58th quadrennial U.S. presidential election?",
"label": 0,
}
]
```
Repo of the project:
https://github.com/Jordi-Ab/BERT_NED
| null |
Non_BioNLP
|
# Named Entity Dissambiguation with a BERT model
This is a very task specific model. A BERT model was trained to dissambiguate WikiData options for entities mentioned in news articles.
Given an entity mentioned in a news article and different WikiData options of possible entities, and given the context of which the entity is being mentioned, which of the entity options is the context referring to?
Suppose you've got a news article that mentions Donald Trump. Donald Trump is mentioned in a sentence:
```python
entity = "Donald Trump"
sentence_mention = "It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus, after participating in a meeting in Florida where the US president was..."
```
If one performs a query to WikiData by searching a specific entity, you may get several options for a single entity, in example for Donald Trump:
```python
options = [
'Donald Trump, president of the United States from 2017 to 2021',
'Donald Trump, American physician',
'Donald Trump, Wikimedia disambiguation page',
'Donald Trump, song by Mac Miller',
'Donald Trump, segment of an episode of Last Week Tonight',
"Donald Trump, character Donald Trump in Anthony Davis's opera The Central Park Five",
'2016 United States presidential election, 58th quadrennial U.S. presidential election'
]
```
This model is trained to give a score to the following query string:
`f"Is '{entity}' in the context of: '{sentence_mention}', referring to [SEP] {option}?"`
**Make sure that query string doesn't exceed 512 tokens,** if it does, it is advisable to reduce the context of the entity mention in order avoid truncation of the query string.
To dissambiguate, one must compute scores for all options of a single entity that is mentioned in a context, and then get the option with the maximum score.
```python
qry_strings = [
`f"Is '{entity}' in the context of: '{sentence_mention}', referring to [SEP] {option}?"` for option in options
]
```
options scores for the above example:
- Donald Trump, president of the United States from 2017 to 2021: 0.9990746974945068
- Donald Trump, American physician: 0.00032277879654429853
- Donald Trump, Wikimedia disambiguation page: 0.00044132230686955154
- Donald Trump, song by Mac Miller: 0.0003152454155497253
- Donald Trump, segment of an episode of Last Week Tonight: 0.00031540714553557336
- Donald Trump, character Donald Trump in Anthony Davis's opera The Central Park Five: 0.00030414783395826817
- 2016 United States presidential election, 58th quadrennial U.S. presidential election: 0.0005287989042699337
## Using the Model
To compute the score of a single query string:
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification
entity = "Donald Trump"
sentence_mention = "It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus, after participating in a meeting in Florida where the US president was..."
options = [
'Donald Trump, president of the United States from 2017 to 2021',
'Donald Trump, American physician',
'Donald Trump, Wikimedia disambiguation page',
'Donald Trump, song by Mac Miller',
'Donald Trump, segment of an episode of Last Week Tonight',
"Donald Trump, character Donald Trump in Anthony Davis's opera The Central Park Five",
'2016 United States presidential election, 58th quadrennial U.S. presidential election'
]
option = options[0]
# predictions will be made on the gpu if there is a gpu available
device = torch.device(
"cuda" if torch.cuda.is_available() else "cpu"
)
# load the BERT NED model
model = BertForSequenceClassification.from_pretrained(
'JordiAb/BERT_NED'
).eval().to(device)
# load the BERT NED tokenizer
tokenizer = BertTokenizer.from_pretrained(
'JordiAb/BERT_NED'
)
# build the query string required by our BERT model. Namely:
query = f"Is '{entity}' in the context of: '{sentence_mention}', referring to [SEP] {option}?"
# encode and tokenize the query string
encoded_dict = tokenizer.encode_plus(
query, # Sentence to encode.
add_special_tokens = True, # Add '[CLS]' and '[SEP]'
max_length = 512, # Pad & truncate all sentences.
padding='max_length', # Make sure this applies padding as needed
truncation=True,
return_attention_mask = True, # Construct attention masks.
return_tensors = 'pt', # Return pytorch tensors.
)
# move input ids to GPU (if available)
input_ids=encoded_dict['input_ids'].to(device)
# move attention mask to GPU (if available)
attention_mask=encoded_dict['attention_mask'].to(device)
with torch.no_grad(): # avoid gradient computation to save memory
# forward pass of the model
outputs = model(
input_ids=input_ids,
token_type_ids=None,
attention_mask=attention_mask
)
# get logits of prediction
logits = outputs.logits
# Use softmax to get probabilities
probabilities = torch.nn.functional.softmax(logits, dim=1)
# is meant for one observation so return probabilities[0], move the resulting tensor to cpu and return it as numpy array
probabilities=probabilities[0].cpu().numpy()
```
`probabilities` is a numpy array containing the two probabilities. Probability of belongin to class 0 and probability of belonging to class 1.
`n np.array([prob0, prob1])`
In this case we are interested in the probability of belonging to class 1, since class 1 is the positive label as the YES answer to the query string `"Is '{entity}' in the context of: '{sentence_mention}', referring to [SEP] {option}?"`
## About the DataSet used for this training:
The dataset consists of news articles obtained from a Mexican newspaper, processed using Named Entity Recognition (NER) to identify entities within each article. Queries were made to WikiData for each identified entity in order to gather all potential matches of an entity. The `StableBeluga-7B` Language Model (LLM) assisted in disambiguating selected entities from the dataset, with its outputs serving as labels for training.
This project approaches the task as a binary classification problem. The training data includes entities from the articles, relevant sentences (context) where the entity is being mentioned and all WikiData options. Each entity-context-option triplet was paired with a binary label (1/0) to form a single training observation. The dataset construction process aimed to fine-tune the model. To ensure compatibility with model limitations, inputs were truncated to fit within a 512-token maximum.
For example, with the above Donald Trump example, the Data Set would look like:
```python
[
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, president of the United States from 2017 to 2021?",
"label": 1,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, American physician?",
"label": 0,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, Wikimedia disambiguation page?",
"label": 0,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, song by Mac Miller?",
"label": 0,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, segment of an episode of Last Week Tonight?",
"label": 0,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] Donald Trump, character Donald Trump in Anthony Davis's opera The Central Park Five?",
"label": 0,
},
{
"bert_qry": "Is 'Donald Trump' in the context of: 'It's not the first scare for Donald Trump. In early March it was announced that Brazilian President Jair Bolsonaro, gave positive to coronavirus...', referring to [SEP] 2016 United States presidential election, 58th quadrennial U.S. presidential election?",
"label": 0,
}
]
```
Repo of the project:
https://github.com/Jordi-Ab/BERT_NED
|
{"language": ["en"], "pipeline_tag": "text-classification", "widget": [{"text": "Is 'AMLO' in the context of: 'Andres Manuel Lopez Obrador defends Mexico's military after report on missing students case', referring to [SEP] 'Andrés Manuel López Obrador, President of Mexico since 2018'?"}, {"text": "Is 'AMLO' in the context of: 'Andres Manuel Lopez Obrador defends Mexico's military after report on missing students case', referring to [SEP] 'Andrés Manuel López Obrador contenderá por la presidencia de México en 2012, Wikinews article'?"}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 45,327 |
asif00/bangla-llama-4bit
|
asif00
|
question-answering
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"question-answering",
"bn",
"dataset:iamshnoo/alpaca-cleaned-bengali",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 2024-05-25T02:36:58Z |
2024-05-25T16:18:05+00:00
| 29 | 0 |
---
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
- iamshnoo/alpaca-cleaned-bengali
language:
- bn
library_name: transformers
license: apache-2.0
pipeline_tag: question-answering
tags:
- transformers
- llama
- trl
- sft
---
Bangla LLaMA-4bit is a specialized model for context-based question answering and Bengali retrieval augment generation. It is derived from LLaMA 3 8B and trained on the iamshnoo/alpaca-cleaned-bengali dataset. This model is designed to provide accurate responses in Bengali with relevant contextual information. It is integrated with the transformers library, making it easy to use for context-based question answering and Bengali retrieval augment generation in projects.
# Model Details:
- Model Family: Llama 3 8B
- Language: Bengali
- Use Case: Context-Based Question Answering, Bengali Retrieval Augment Generation
- Dataset: iamshnoo/alpaca-cleaned-bengali (51,760 samples)
- Training Loss: 0.4038
- Global Steps: 647
- Batch Size: 80
- Epoch: 1
# How to Use:
You can use the model with a pipeline for a high-level helper or load the model directly. Here's how:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("question-answering", model="asif00/bangla-llama-4bit")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("asif00/bangla-llama-4bit")
model = AutoModelForCausalLM.from_pretrained("asif00/bangla-llama-4bit")
```
# General Prompt Structure:
```python
prompt = """Below is an instruction in Bengali language that describes a task, paired with an input also in Bengali language that provides further context. Write a response in Bengali language that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}
"""
```
# To get a cleaned up version of the response, you can use the `generate_response` function:
```python
def generate_response(question, context):
inputs = tokenizer([prompt.format(question, context, "")], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=1024, use_cache=True)
responses = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
response_start = responses.find("### Response:") + len("### Response:")
response = responses[response_start:].strip()
return response
```
# Example Usage:
```python
question = "ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর মৃত্যু কবে হয় ?"
context = "২০১৬ সালের ২৩ জুলাই হৃদরোগে আক্রান্ত হয়ে মহাশ্বেতা দেবী কলকাতার বেল ভিউ ক্লিনিকে ভর্তি হন। সেই বছরই ২৮ জুলাই একাধিক অঙ্গ বিকল হয়ে তাঁর মৃত্যু ঘটে। তিনি মধুমেহ, সেপ্টিসেমিয়া ও মূত্র সংক্রমণ রোগেও ভুগছিলেন।"
answer = generate_response(question, context)
print(answer)
```
# Disclaimer:
The Bangla LLaMA-4bit model has been trained on a limited dataset, and its responses may not always be perfect or accurate. The model's performance is dependent on the quality and quantity of the data it has been trained on. Given more resources, such as high-quality data and longer training time, the model's performance can be significantly improved.
# Resources:
Work in progress...
| null |
Non_BioNLP
|
Bangla LLaMA-4bit is a specialized model for context-based question answering and Bengali retrieval augment generation. It is derived from LLaMA 3 8B and trained on the iamshnoo/alpaca-cleaned-bengali dataset. This model is designed to provide accurate responses in Bengali with relevant contextual information. It is integrated with the transformers library, making it easy to use for context-based question answering and Bengali retrieval augment generation in projects.
# Model Details:
- Model Family: Llama 3 8B
- Language: Bengali
- Use Case: Context-Based Question Answering, Bengali Retrieval Augment Generation
- Dataset: iamshnoo/alpaca-cleaned-bengali (51,760 samples)
- Training Loss: 0.4038
- Global Steps: 647
- Batch Size: 80
- Epoch: 1
# How to Use:
You can use the model with a pipeline for a high-level helper or load the model directly. Here's how:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("question-answering", model="asif00/bangla-llama-4bit")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("asif00/bangla-llama-4bit")
model = AutoModelForCausalLM.from_pretrained("asif00/bangla-llama-4bit")
```
# General Prompt Structure:
```python
prompt = """Below is an instruction in Bengali language that describes a task, paired with an input also in Bengali language that provides further context. Write a response in Bengali language that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}
"""
```
# To get a cleaned up version of the response, you can use the `generate_response` function:
```python
def generate_response(question, context):
inputs = tokenizer([prompt.format(question, context, "")], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=1024, use_cache=True)
responses = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
response_start = responses.find("### Response:") + len("### Response:")
response = responses[response_start:].strip()
return response
```
# Example Usage:
```python
question = "ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর মৃত্যু কবে হয় ?"
context = "২০১৬ সালের ২৩ জুলাই হৃদরোগে আক্রান্ত হয়ে মহাশ্বেতা দেবী কলকাতার বেল ভিউ ক্লিনিকে ভর্তি হন। সেই বছরই ২৮ জুলাই একাধিক অঙ্গ বিকল হয়ে তাঁর মৃত্যু ঘটে। তিনি মধুমেহ, সেপ্টিসেমিয়া ও মূত্র সংক্রমণ রোগেও ভুগছিলেন।"
answer = generate_response(question, context)
print(answer)
```
# Disclaimer:
The Bangla LLaMA-4bit model has been trained on a limited dataset, and its responses may not always be perfect or accurate. The model's performance is dependent on the quality and quantity of the data it has been trained on. Given more resources, such as high-quality data and longer training time, the model's performance can be significantly improved.
# Resources:
Work in progress...
|
{"base_model": "unsloth/llama-3-8b-bnb-4bit", "datasets": ["iamshnoo/alpaca-cleaned-bengali"], "language": ["bn"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "question-answering", "tags": ["transformers", "llama", "trl", "sft"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,328 |
ydongs/distilbert-base-uncased-finetuned-cola
|
ydongs
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-09-04T14:18:30Z |
2022-09-04T14:27:12+00:00
| 12 | 0 |
---
datasets:
- glue
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- type: matthews_correlation
value: 0.5274949902750498
name: Matthews Correlation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8024
- Matthews Correlation: 0.5275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5261 | 1.0 | 535 | 0.5320 | 0.4152 |
| 0.3482 | 2.0 | 1070 | 0.4960 | 0.5049 |
| 0.2364 | 3.0 | 1605 | 0.6204 | 0.5123 |
| 0.186 | 4.0 | 2140 | 0.7605 | 0.5232 |
| 0.139 | 5.0 | 2675 | 0.8024 | 0.5275 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8024
- Matthews Correlation: 0.5275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5261 | 1.0 | 535 | 0.5320 | 0.4152 |
| 0.3482 | 2.0 | 1070 | 0.4960 | 0.5049 |
| 0.2364 | 3.0 | 1605 | 0.6204 | 0.5123 |
| 0.186 | 4.0 | 2140 | 0.7605 | 0.5232 |
| 0.139 | 5.0 | 2675 | 0.8024 | 0.5275 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "train", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5274949902750498, "name": "Matthews Correlation"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,329 |
sana-ngu/t5-small-finetuned-summarize-scientific-articles
|
sana-ngu
|
text2text-generation
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-05-12T19:14:02Z |
2023-05-12T20:02:49+00:00
| 26 | 2 |
---
{}
---
# How to use
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="sana-ngu/t5-small-finetuned-summarize-scientific-articles")
article = "The novel Coronavirus disease (COVID-19), caused by the severe acute respiratory syndrome coronavirus—2 (SARS-CoV-2), in Africa is characterised by a more substantial proportion of asymptomatic (or mildly symptomatic) individuals thought to be playing a role in the spread of the infection. The exact proportion and degree of infectiousness of asymptomatic individuals remains unclear. Studies however indicate that their management is crucial for control of SARS-CoV-2 transmission.
We developed a simplified deterministic susceptible-exposed-infectious-removed (SEIR) mathematical model to assess the effect of active isolation of SARS-CoV-2 infected but asymptomatic individuals through blanket testing for control of the outbreak in Lusaka Province of Zambia. Here we modelled two scenarios; (1) assuming asymptomatic individuals comprised 70% of all COVID-19 cases and (2) asymptomatic individuals comprised only 50% of the cases. For contrast, the model was assessed first under the assumption that asymptomatic individuals are equally as infectious as symptomatic individuals and then secondly, and more likely, assuming asymptomatic individuals are only half as infectious as symptomatic individuals.
For the model assuming 70% asymptomatic cases, a minimum sustained daily blanket testing rate of ≥ 7911 tests/100000 population was sufficient to control the outbreak if asymptomatic individuals are only half as infectious while if equal infectiousness was assumed then a testing rate of ≥ 10028 tests/ 100000 population would be required. For 50% asymptomatic, minimum blanket testing rates of ≥ 4540 tests/ 100000 population was sufficient to control the outbreak at both assumed levels of infectiousness for asymptomatic individuals relative to symptomatic individuals.
Discussion and conclusion
Our model predicts that active isolation of COVID-19 cases, including asymptomatic individuals, through blanket testing can be used as a possible measure for the control of the SARS-Cov-2 transmission in Lusaka, Zambia, but it would come at a high cost."
summarizer(conversation)
```
| null |
BioNLP
|
# How to use
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="sana-ngu/t5-small-finetuned-summarize-scientific-articles")
article = "The novel Coronavirus disease (COVID-19), caused by the severe acute respiratory syndrome coronavirus—2 (SARS-CoV-2), in Africa is characterised by a more substantial proportion of asymptomatic (or mildly symptomatic) individuals thought to be playing a role in the spread of the infection. The exact proportion and degree of infectiousness of asymptomatic individuals remains unclear. Studies however indicate that their management is crucial for control of SARS-CoV-2 transmission.
We developed a simplified deterministic susceptible-exposed-infectious-removed (SEIR) mathematical model to assess the effect of active isolation of SARS-CoV-2 infected but asymptomatic individuals through blanket testing for control of the outbreak in Lusaka Province of Zambia. Here we modelled two scenarios; (1) assuming asymptomatic individuals comprised 70% of all COVID-19 cases and (2) asymptomatic individuals comprised only 50% of the cases. For contrast, the model was assessed first under the assumption that asymptomatic individuals are equally as infectious as symptomatic individuals and then secondly, and more likely, assuming asymptomatic individuals are only half as infectious as symptomatic individuals.
For the model assuming 70% asymptomatic cases, a minimum sustained daily blanket testing rate of ≥ 7911 tests/100000 population was sufficient to control the outbreak if asymptomatic individuals are only half as infectious while if equal infectiousness was assumed then a testing rate of ≥ 10028 tests/ 100000 population would be required. For 50% asymptomatic, minimum blanket testing rates of ≥ 4540 tests/ 100000 population was sufficient to control the outbreak at both assumed levels of infectiousness for asymptomatic individuals relative to symptomatic individuals.
Discussion and conclusion
Our model predicts that active isolation of COVID-19 cases, including asymptomatic individuals, through blanket testing can be used as a possible measure for the control of the SARS-Cov-2 transmission in Lusaka, Zambia, but it would come at a high cost."
summarizer(conversation)
```
|
{}
|
task
|
[
"SUMMARIZATION"
] | 45,330 |
pkshatech/GLuCoSE-base-ja-v2
|
pkshatech
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"luke",
"sentence-similarity",
"feature-extraction",
"ja",
"dataset:hpprc/emb",
"dataset:hpprc/mqa-ja",
"dataset:google-research-datasets/paws-x",
"base_model:pkshatech/GLuCoSE-base-ja",
"base_model:finetune:pkshatech/GLuCoSE-base-ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-08-22T03:16:48Z |
2024-09-27T05:49:40+00:00
| 218,018 | 19 |
---
base_model: pkshatech/GLuCoSE-base-ja
datasets:
- hpprc/emb
- hpprc/mqa-ja
- google-research-datasets/paws-x
language:
- ja
library_name: sentence-transformers
license: apache-2.0
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
widget: []
---
# GLuCoSE v2
This model is a general Japanese text embedding model, excelling in retrieval tasks. It can run on CPU and is designed to measure semantic similarity between sentences, as well as to function as a retrieval system for searching passages based on queries.
Key features:
- Specialized for retrieval tasks, it demonstrates the highest performance among similar size models in MIRACL and other tasks .
- Optimized for Japanese text processing
- Can run on CPU
During inference, the prefix "query: " or "passage: " is required. Please check the Usage section for details.
## Model Description
The model is based on [GLuCoSE](https://huggingface.co/pkshatech/GLuCoSE-base-ja) and fine-tuned through distillation using several large-scale embedding models and multi-stage contrastive learning.
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
## Usage
### Direct Usage (Sentence Transformers)
You can perform inference using SentenceTransformer with the following code:
```python
from sentence_transformers import SentenceTransformer
import torch.nn.functional as F
# Download from the 🤗 Hub
model = SentenceTransformer("pkshatech/GLuCoSE-base-ja-v2")
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
sentences = [
'query: PKSHAはどんな会社ですか?',
'passage: 研究開発したアルゴリズムを、多くの企業のソフトウエア・オペレーションに導入しています。',
'query: 日本で一番高い山は?',
'passage: 富士山(ふじさん)は、標高3776.12 m、日本最高峰(剣ヶ峰)の独立峰で、その優美な風貌は日本国外でも日本の象徴として広く知られている。',
]
embeddings = model.encode(sentences,convert_to_tensor=True)
print(embeddings.shape)
# [4, 768]
# Get the similarity scores for the embeddings
similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
print(similarities)
# [[1.0000, 0.6050, 0.4341, 0.5537],
# [0.6050, 1.0000, 0.5018, 0.6815],
# [0.4341, 0.5018, 1.0000, 0.7534],
# [0.5537, 0.6815, 0.7534, 1.0000]]
```
### Direct Usage (Transformers)
You can perform inference using Transformers with the following code:
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def mean_pooling(last_hidden_states: Tensor,attention_mask: Tensor) -> Tensor:
emb = last_hidden_states * attention_mask.unsqueeze(-1)
emb = emb.sum(dim=1) / attention_mask.sum(dim=1).unsqueeze(-1)
return emb
# Download from the 🤗 Hub
tokenizer = AutoTokenizer.from_pretrained("pkshatech/GLuCoSE-base-ja-v2")
model = AutoModel.from_pretrained("pkshatech/GLuCoSE-base-ja-v2")
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
sentences = [
'query: PKSHAはどんな会社ですか?',
'passage: 研究開発したアルゴリズムを、多くの企業のソフトウエア・オペレーションに導入しています。',
'query: 日本で一番高い山は?',
'passage: 富士山(ふじさん)は、標高3776.12 m、日本最高峰(剣ヶ峰)の独立峰で、その優美な風貌は日本国外でも日本の象徴として広く知られている。',
]
# Tokenize the input texts
batch_dict = tokenizer(sentences, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = mean_pooling(outputs.last_hidden_state, batch_dict['attention_mask'])
print(embeddings.shape)
# [4, 768]
# Get the similarity scores for the embeddings
similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
print(similarities)
# [[1.0000, 0.6050, 0.4341, 0.5537],
# [0.6050, 1.0000, 0.5018, 0.6815],
# [0.4341, 0.5018, 1.0000, 0.7534],
# [0.5537, 0.6815, 0.7534, 1.0000]]
```
## Training Details
The fine-tuning of GLuCoSE v2 is carried out through the following steps:
**Step 1: Ensemble distillation**
- The embedded representation was distilled using [E5-mistral](https://huggingface.co/intfloat/e5-mistral-7b-instruct), [gte-Qwen2](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct), and [mE5-large](https://huggingface.co/intfloat/multilingual-e5-large) as teacher models.
**Step 2: Contrastive learning**
- Triplets were created from [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88), [MNLI](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7), [PAWS-X](https://huggingface.co/datasets/paws-x), [JSeM](https://github.com/DaisukeBekki/JSeM) and [Mr.TyDi](https://huggingface.co/datasets/castorini/mr-tydi) and used for training.
- This training aimed to improve the overall performance as a sentence embedding model.
**Step 3: Search-specific contrastive learning**
- In order to make the model more robust to the retrieval task, additional two-stage training with QA and retrieval task was conducted.
- In the first stage, the synthetic dataset [auto-wiki-qa](https://huggingface.co/datasets/cl-nagoya/auto-wiki-qa) was used for training,
while in the second stage, [JQaRA](https://huggingface.co/datasets/hotchpotch/JQaRA), [MQA](https://huggingface.co/datasets/hpprc/mqa-ja), [Japanese Wikipedia Human Retrieval, Mr.TyDi,MIRACL, Quiz Works and Quiz No Mor](https://huggingface.co/datasets/hpprc/emb)i were used.
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Benchmarks
### Retrieval
Evaluated with [MIRACL-ja](https://huggingface.co/datasets/miracl/miracl), [JQARA](https://huggingface.co/datasets/hotchpotch/JQaRA) , [JaCWIR](https://huggingface.co/datasets/hotchpotch/JaCWIR) and [MLDR-ja](https://huggingface.co/datasets/Shitao/MLDR).
| Model | Size | MIRACL<br>Recall@5 | JQaRA<br>nDCG@10 | JaCWIR<br>MAP@10 | MLDR<br>nDCG@10 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.6B | 89.2 | 55.4 | **87.6** | 29.8 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 0.3B | 78.7 | 62.4 | 85.0 | **37.5** |
| | | | | | |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 0.3B | 84.2 | 47.2 | **85.3** | 25.4 |
| [cl-nagoya/ruri-base](https://huggingface.co/cl-nagoya/ruri-base) | 0.1B | 74.3 | 58.1 | 84.6 | **35.3** |
| [pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja) | 0.1B | 53.3 | 30.8 | 68.6 | 25.2 |
| **GLuCoSE v2** | 0.1B | **85.5** | **60.6** | **85.3** | 33.8 |
Note: Results for OpenAI small embeddings in JQARA and JaCWIR are quoted from the [JQARA](https://huggingface.co/datasets/hotchpotch/JQaRA) and [JaCWIR](https://huggingface.co/datasets/hotchpotch/JaCWIR).
### JMTEB
Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
The average score is macro-average.
| Model | Size | Avg. | Retrieval | STS | Classification | Reranking | Clustering | PairClassification |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| OpenAI/text-embedding-3-small | - | 69.18 | 66.39 | 79.46 | 73.06 | 92.92 | 51.06 | 62.27 |
| OpenAI/text-embedding-3-large | - | 74.05 | 74.48 | 82.52 | 77.58 | 93.58 | 53.32 | 62.35 |
| | | | | | | | | |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.6B | 70.90 | 70.98 | 79.70 | 72.89 | 92.96 | 51.24 | 62.15 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 0.3B | 73.31 | 73.02 | 83.13 | 77.43 | 92.99 | 51.82 | 62.29 |
| | | | | | | | | |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 0.3B | 68.61 | 68.21 | 79.84 | 69.30 | **92.85** | 48.26 | 62.26 |
| [cl-nagoya/ruri-base](https://huggingface.co/cl-nagoya/ruri-base) | 0.1B | 71.91 | 69.82 | 82.87 | 75.58 | 92.91 | **54.16** | 62.38 |
| [pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja) | 0.1B | 67.29 | 59.02 | 78.71 | **76.82** | 91.90 | 49.78 | **66.39** |
| **GLuCoSE v2** | 0.1B | **72.23** | **73.36** | **82.96** | 74.21 | 93.01 | 48.65 | 62.37 |
Note: Results for OpenAI embeddings and multilingual-e5 models are quoted from the [JMTEB leaderboard](https://github.com/sbintuitions/JMTEB/blob/main/leaderboard.md). Results for ruri are quoted from the [cl-nagoya/ruri-base model card](https://huggingface.co/cl-nagoya/ruri-base/blob/main/README.md).
## Authors
Chihiro Yano, Mocho Go, Hideyuki Tachibana, Hiroto Takegawa, Yotaro Watanabe
## License
This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
| null |
Non_BioNLP
|
# GLuCoSE v2
This model is a general Japanese text embedding model, excelling in retrieval tasks. It can run on CPU and is designed to measure semantic similarity between sentences, as well as to function as a retrieval system for searching passages based on queries.
Key features:
- Specialized for retrieval tasks, it demonstrates the highest performance among similar size models in MIRACL and other tasks .
- Optimized for Japanese text processing
- Can run on CPU
During inference, the prefix "query: " or "passage: " is required. Please check the Usage section for details.
## Model Description
The model is based on [GLuCoSE](https://huggingface.co/pkshatech/GLuCoSE-base-ja) and fine-tuned through distillation using several large-scale embedding models and multi-stage contrastive learning.
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
## Usage
### Direct Usage (Sentence Transformers)
You can perform inference using SentenceTransformer with the following code:
```python
from sentence_transformers import SentenceTransformer
import torch.nn.functional as F
# Download from the 🤗 Hub
model = SentenceTransformer("pkshatech/GLuCoSE-base-ja-v2")
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
sentences = [
'query: PKSHAはどんな会社ですか?',
'passage: 研究開発したアルゴリズムを、多くの企業のソフトウエア・オペレーションに導入しています。',
'query: 日本で一番高い山は?',
'passage: 富士山(ふじさん)は、標高3776.12 m、日本最高峰(剣ヶ峰)の独立峰で、その優美な風貌は日本国外でも日本の象徴として広く知られている。',
]
embeddings = model.encode(sentences,convert_to_tensor=True)
print(embeddings.shape)
# [4, 768]
# Get the similarity scores for the embeddings
similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
print(similarities)
# [[1.0000, 0.6050, 0.4341, 0.5537],
# [0.6050, 1.0000, 0.5018, 0.6815],
# [0.4341, 0.5018, 1.0000, 0.7534],
# [0.5537, 0.6815, 0.7534, 1.0000]]
```
### Direct Usage (Transformers)
You can perform inference using Transformers with the following code:
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def mean_pooling(last_hidden_states: Tensor,attention_mask: Tensor) -> Tensor:
emb = last_hidden_states * attention_mask.unsqueeze(-1)
emb = emb.sum(dim=1) / attention_mask.sum(dim=1).unsqueeze(-1)
return emb
# Download from the 🤗 Hub
tokenizer = AutoTokenizer.from_pretrained("pkshatech/GLuCoSE-base-ja-v2")
model = AutoModel.from_pretrained("pkshatech/GLuCoSE-base-ja-v2")
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
sentences = [
'query: PKSHAはどんな会社ですか?',
'passage: 研究開発したアルゴリズムを、多くの企業のソフトウエア・オペレーションに導入しています。',
'query: 日本で一番高い山は?',
'passage: 富士山(ふじさん)は、標高3776.12 m、日本最高峰(剣ヶ峰)の独立峰で、その優美な風貌は日本国外でも日本の象徴として広く知られている。',
]
# Tokenize the input texts
batch_dict = tokenizer(sentences, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = mean_pooling(outputs.last_hidden_state, batch_dict['attention_mask'])
print(embeddings.shape)
# [4, 768]
# Get the similarity scores for the embeddings
similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
print(similarities)
# [[1.0000, 0.6050, 0.4341, 0.5537],
# [0.6050, 1.0000, 0.5018, 0.6815],
# [0.4341, 0.5018, 1.0000, 0.7534],
# [0.5537, 0.6815, 0.7534, 1.0000]]
```
## Training Details
The fine-tuning of GLuCoSE v2 is carried out through the following steps:
**Step 1: Ensemble distillation**
- The embedded representation was distilled using [E5-mistral](https://huggingface.co/intfloat/e5-mistral-7b-instruct), [gte-Qwen2](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct), and [mE5-large](https://huggingface.co/intfloat/multilingual-e5-large) as teacher models.
**Step 2: Contrastive learning**
- Triplets were created from [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88), [MNLI](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7), [PAWS-X](https://huggingface.co/datasets/paws-x), [JSeM](https://github.com/DaisukeBekki/JSeM) and [Mr.TyDi](https://huggingface.co/datasets/castorini/mr-tydi) and used for training.
- This training aimed to improve the overall performance as a sentence embedding model.
**Step 3: Search-specific contrastive learning**
- In order to make the model more robust to the retrieval task, additional two-stage training with QA and retrieval task was conducted.
- In the first stage, the synthetic dataset [auto-wiki-qa](https://huggingface.co/datasets/cl-nagoya/auto-wiki-qa) was used for training,
while in the second stage, [JQaRA](https://huggingface.co/datasets/hotchpotch/JQaRA), [MQA](https://huggingface.co/datasets/hpprc/mqa-ja), [Japanese Wikipedia Human Retrieval, Mr.TyDi,MIRACL, Quiz Works and Quiz No Mor](https://huggingface.co/datasets/hpprc/emb)i were used.
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Benchmarks
### Retrieval
Evaluated with [MIRACL-ja](https://huggingface.co/datasets/miracl/miracl), [JQARA](https://huggingface.co/datasets/hotchpotch/JQaRA) , [JaCWIR](https://huggingface.co/datasets/hotchpotch/JaCWIR) and [MLDR-ja](https://huggingface.co/datasets/Shitao/MLDR).
| Model | Size | MIRACL<br>Recall@5 | JQaRA<br>nDCG@10 | JaCWIR<br>MAP@10 | MLDR<br>nDCG@10 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.6B | 89.2 | 55.4 | **87.6** | 29.8 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 0.3B | 78.7 | 62.4 | 85.0 | **37.5** |
| | | | | | |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 0.3B | 84.2 | 47.2 | **85.3** | 25.4 |
| [cl-nagoya/ruri-base](https://huggingface.co/cl-nagoya/ruri-base) | 0.1B | 74.3 | 58.1 | 84.6 | **35.3** |
| [pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja) | 0.1B | 53.3 | 30.8 | 68.6 | 25.2 |
| **GLuCoSE v2** | 0.1B | **85.5** | **60.6** | **85.3** | 33.8 |
Note: Results for OpenAI small embeddings in JQARA and JaCWIR are quoted from the [JQARA](https://huggingface.co/datasets/hotchpotch/JQaRA) and [JaCWIR](https://huggingface.co/datasets/hotchpotch/JaCWIR).
### JMTEB
Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
The average score is macro-average.
| Model | Size | Avg. | Retrieval | STS | Classification | Reranking | Clustering | PairClassification |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| OpenAI/text-embedding-3-small | - | 69.18 | 66.39 | 79.46 | 73.06 | 92.92 | 51.06 | 62.27 |
| OpenAI/text-embedding-3-large | - | 74.05 | 74.48 | 82.52 | 77.58 | 93.58 | 53.32 | 62.35 |
| | | | | | | | | |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.6B | 70.90 | 70.98 | 79.70 | 72.89 | 92.96 | 51.24 | 62.15 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 0.3B | 73.31 | 73.02 | 83.13 | 77.43 | 92.99 | 51.82 | 62.29 |
| | | | | | | | | |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 0.3B | 68.61 | 68.21 | 79.84 | 69.30 | **92.85** | 48.26 | 62.26 |
| [cl-nagoya/ruri-base](https://huggingface.co/cl-nagoya/ruri-base) | 0.1B | 71.91 | 69.82 | 82.87 | 75.58 | 92.91 | **54.16** | 62.38 |
| [pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja) | 0.1B | 67.29 | 59.02 | 78.71 | **76.82** | 91.90 | 49.78 | **66.39** |
| **GLuCoSE v2** | 0.1B | **72.23** | **73.36** | **82.96** | 74.21 | 93.01 | 48.65 | 62.37 |
Note: Results for OpenAI embeddings and multilingual-e5 models are quoted from the [JMTEB leaderboard](https://github.com/sbintuitions/JMTEB/blob/main/leaderboard.md). Results for ruri are quoted from the [cl-nagoya/ruri-base model card](https://huggingface.co/cl-nagoya/ruri-base/blob/main/README.md).
## Authors
Chihiro Yano, Mocho Go, Hideyuki Tachibana, Hiroto Takegawa, Yotaro Watanabe
## License
This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
|
{"base_model": "pkshatech/GLuCoSE-base-ja", "datasets": ["hpprc/emb", "hpprc/mqa-ja", "google-research-datasets/paws-x"], "language": ["ja"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction"], "widget": []}
|
task
|
[
"SEMANTIC_SIMILARITY"
] | 45,331 |
jondurbin/airoboros-l2-7b-2.2.1
|
jondurbin
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-2.2.1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-09-20T17:26:33Z |
2023-09-21T18:39:31+00:00
| 1,898 | 3 |
---
datasets:
- jondurbin/airoboros-2.2.1
license: llama2
---
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
This is essentially a minor "fix" branch of [airoboros-l2-7b-2.2](https://hf.co/jondurbin/airoboros-l2-7b-2.2) with a updates, primarily:
- [re-generated writing responses](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#re-generated-writing-responses)
- [longer contextual blocks](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#longer-contextual-blocks)
- [removal of "rp" data](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#rp-category-removed)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format:
```
A chat.
USER: {prompt}
ASSISTANT:
```
The default system prompt ("A chat.") was used for most of the prompts, however it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc.
Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```
And chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf).
```
A chat between old friends: Timmy and Tommy.
{description of characters}
{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__
### Fine tuning info
https://wandb.ai/jondurbin/airoboros-l2-7b-2.2.1/runs/ka6jlcj7?workspace=user-jondurbin
### Helpful usage tips
*The prompts shown here are are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above, the system prompt and USER:/ASSISTANT: have been omited for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context)
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.2 models are built on top of llama-2/codellama.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
| null |
Non_BioNLP
|
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
This is essentially a minor "fix" branch of [airoboros-l2-7b-2.2](https://hf.co/jondurbin/airoboros-l2-7b-2.2) with a updates, primarily:
- [re-generated writing responses](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#re-generated-writing-responses)
- [longer contextual blocks](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#longer-contextual-blocks)
- [removal of "rp" data](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#rp-category-removed)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format:
```
A chat.
USER: {prompt}
ASSISTANT:
```
The default system prompt ("A chat.") was used for most of the prompts, however it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc.
Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```
And chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf).
```
A chat between old friends: Timmy and Tommy.
{description of characters}
{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__
### Fine tuning info
https://wandb.ai/jondurbin/airoboros-l2-7b-2.2.1/runs/ka6jlcj7?workspace=user-jondurbin
### Helpful usage tips
*The prompts shown here are are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above, the system prompt and USER:/ASSISTANT: have been omited for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context)
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.2 models are built on top of llama-2/codellama.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
{"datasets": ["jondurbin/airoboros-2.2.1"], "license": "llama2"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 45,332 |
naimur900/pegasus_xlsum
|
naimur900
|
text2text-generation
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"en",
"dataset:xlsum",
"base_model:google/pegasus-cnn_dailymail",
"base_model:finetune:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-26T07:51:12Z |
2023-09-19T13:44:32+00:00
| 64 | 3 |
---
base_model: google/pegasus-cnn_dailymail
datasets:
- xlsum
language:
- en
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: pegsasus_xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegsasus_xlsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the [**csebuetnlp/xlsum/english**](https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/english/train) dataset.
## Model description
The **pegasus_xlsum** is a state-of-the-art model fine-tuned on the **English** subset of the [**csebuetnlp/xlsum**](https://huggingface.co/datasets/csebuetnlp/xlsum) dataset. This dataset is one of the most comprehensive and diverse sets available, originally composed of **1.35 million** professional article-summary pairs sourced from the BBC across 45 languages. Despite its multilingual nature, we intentionally selected the **English** language subset, consisting of approximately **330k** records, as the focus for our fine-tuning process.
The goal was to adapt the model for the text summarization task, and we're thrilled to report that the fine-tuned **pegasus_xlsum** model exceeded our expectations. It outperformed the established [**csebuetnlp/mT5_multilingual_XLSum**](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) model in terms of [**ROUGE**](https://huggingface.co/spaces/evaluate-metric/rouge) scores, demonstrating superior summary generation capabilities. The **pegasus_xlsum** model leverages the powerful PEGASUS architecture, proving its efficiency and effectiveness in handling **English** text summarization tasks.
## Intended uses
The **pegasus_xlsum** is to provide a reliable, high-performance solution for **English** text summarization, making the most of the rich, professional, and diverse source dataset it was trained on. We hope you find this model as useful in your applications as we did in our experiments.
## Benchmark comparisons
**pegasus_xlsum**
| Metric | Score |
|--------|-------|
| **ROUGE-1** | 39.121 |
| **ROUGE-2** | 17.467 |
| **ROUGE-L** | 30.894 |
| **ROUGE-LSUM** | 30.892 |
**csebuetnlp/mT5_multilingual_XLSum**
| Metric | Score |
|--------|-------|
| **ROUGE-1** | 37.601 |
| **ROUGE-2** | 15.153 |
| **ROUGE-L** | 29.881 |
| **ROUGE-LSUM** | -- |
## Training and evaluation data
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
## Authors
- Naimur Rahman ([@NaimurRahman](https://github.com/naimur900))
- Sadiul Arefin Rafi ([@SadiulArefinRafi](https://github.com/SadiulArefin))
- Ha-mim Ahmad ([@HaMimAhmad](https://github.com/hamimz))
- Kazi Nazibul Islam ([@KaziNazibulIslam](https://github.com/nazibislam))
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegsasus_xlsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the [**csebuetnlp/xlsum/english**](https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/english/train) dataset.
## Model description
The **pegasus_xlsum** is a state-of-the-art model fine-tuned on the **English** subset of the [**csebuetnlp/xlsum**](https://huggingface.co/datasets/csebuetnlp/xlsum) dataset. This dataset is one of the most comprehensive and diverse sets available, originally composed of **1.35 million** professional article-summary pairs sourced from the BBC across 45 languages. Despite its multilingual nature, we intentionally selected the **English** language subset, consisting of approximately **330k** records, as the focus for our fine-tuning process.
The goal was to adapt the model for the text summarization task, and we're thrilled to report that the fine-tuned **pegasus_xlsum** model exceeded our expectations. It outperformed the established [**csebuetnlp/mT5_multilingual_XLSum**](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) model in terms of [**ROUGE**](https://huggingface.co/spaces/evaluate-metric/rouge) scores, demonstrating superior summary generation capabilities. The **pegasus_xlsum** model leverages the powerful PEGASUS architecture, proving its efficiency and effectiveness in handling **English** text summarization tasks.
## Intended uses
The **pegasus_xlsum** is to provide a reliable, high-performance solution for **English** text summarization, making the most of the rich, professional, and diverse source dataset it was trained on. We hope you find this model as useful in your applications as we did in our experiments.
## Benchmark comparisons
**pegasus_xlsum**
| Metric | Score |
|--------|-------|
| **ROUGE-1** | 39.121 |
| **ROUGE-2** | 17.467 |
| **ROUGE-L** | 30.894 |
| **ROUGE-LSUM** | 30.892 |
**csebuetnlp/mT5_multilingual_XLSum**
| Metric | Score |
|--------|-------|
| **ROUGE-1** | 37.601 |
| **ROUGE-2** | 15.153 |
| **ROUGE-L** | 29.881 |
| **ROUGE-LSUM** | -- |
## Training and evaluation data
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
## Authors
- Naimur Rahman ([@NaimurRahman](https://github.com/naimur900))
- Sadiul Arefin Rafi ([@SadiulArefinRafi](https://github.com/SadiulArefin))
- Ha-mim Ahmad ([@HaMimAhmad](https://github.com/hamimz))
- Kazi Nazibul Islam ([@KaziNazibulIslam](https://github.com/nazibislam))
|
{"base_model": "google/pegasus-cnn_dailymail", "datasets": ["xlsum"], "language": ["en"], "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "pegsasus_xlsum", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,333 |
Tevatron/dse-phi3-docmatix-v1
|
Tevatron
| null |
[
"Tevatron",
"pytorch",
"phi3_v",
"vidore",
"custom_code",
"en",
"dataset:Tevatron/docmatix-ir",
"dataset:HuggingFaceM4/Docmatix",
"dataset:Tevatron/msmarco-passage-aug",
"arxiv:2406.11251",
"license:mit",
"region:us"
] | 2024-07-24T03:37:51Z |
2024-08-12T07:58:30+00:00
| 93 | 9 |
---
datasets:
- Tevatron/docmatix-ir
- HuggingFaceM4/Docmatix
- Tevatron/msmarco-passage-aug
language:
- en
library_name: Tevatron
license: mit
tags:
- vidore
---
# DSE-Phi3-Docmatix-V1
DSE-Phi3-Docmatix-V1 is a bi-encoder model designed to encode document screenshots into dense vectors for document retrieval. The Document Screenshot Embedding ([DSE](https://arxiv.org/abs/2406.11251)) approach captures documents in their original visual format, preserving all information such as text, images, and layout, thus avoiding tedious parsing and potential information loss.
The model, `Tevatron/dse-phi3-docmatix-v1`, is trained using 1/10 of the `Tevatron/docmatix-ir` dataset, a variant of `HuggingFaceM4/Docmatix` specifically adapted for training PDF retrievers with Vision Language Models in open-domain question answering scenarios. For more information on dataset filtering and hard negative mining, refer to the [docmatix-ir](https://huggingface.co/datasets/Tevatron/docmatix-ir/blob/main/README.md) dataset page.
DSE has strong zero-shot effectiveness for document retrieval both with visual input and text input.
For example, DSE-Phi3-Docmatix-V1 achieves 74.1 nDCG@5 on [ViDoRE](https://huggingface.co/spaces/vidore/vidore-leaderboard) leaderboard in **zero-shot setting** (without finetuning with ViDoRe training data).
## How to train the model from scratch
Please see https://github.com/texttron/tevatron/tree/main/examples/dse
## How to Use the Model
### Load the Model and Processor
```python
import torch
from transformers import AutoProcessor, AutoModelForCausalLM
processor = AutoProcessor.from_pretrained('Tevatron/dse-phi3-docmatix-v1', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('Tevatron/dse-phi3-docmatix-v1', trust_remote_code=True, attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, use_cache=False).to('cuda:0')
def get_embedding(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
sequence_lengths = attention_mask.sum(dim=1) - 1
bs = last_hidden_state.shape[0]
reps = last_hidden_state[torch.arange(bs, device=last_hidden_state.device), sequence_lengths]
reps = torch.nn.functional.normalize(reps, p=2, dim=-1)
return reps
```
### Encode Text Query
```python
queries = ["query: Where can we see Llama?</s>", "query: What is LLaMA model?</s>"]
query_inputs = processor(queries, return_tensors="pt", padding="longest", max_length=128, truncation=True).to('cuda:0')
with torch.no_grad():
output = model(**query_inputs, return_dict=True, output_hidden_states=True)
query_embeddings = get_embedding(output.hidden_states[-1], query_inputs["attention_mask"])
```
### Encode Document Screenshot
```python
from PIL import Image
import requests
from io import BytesIO
# URLs of the images
url1 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v1/resolve/main/animal-llama.png"
url2 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v1/resolve/main/meta-llama.png"
# Download and open images
response1 = requests.get(url1)
response2 = requests.get(url2)
passage_image1 = Image.open(BytesIO(response1.content))
passage_image2 = Image.open(BytesIO(response2.content))
passage_images = [passage_image1, passage_image2]
passage_prompts = ["<|image_1|>\nWhat is shown in this image?</s>", "<|image_2|>\nWhat is shown in this image?</s>"]
# Process inputs and get embeddings
passage_inputs = processor(passage_prompts, images=passage_images, return_tensors="pt", padding="longest", max_length=4096, truncation=True).to('cuda:0')
passage_inputs['input_ids'] = passage_inputs['input_ids'].squeeze(0)
passage_inputs['attention_mask'] = passage_inputs['attention_mask'].squeeze(0)
passage_inputs['image_sizes'] = passage_inputs['image_sizes'].squeeze(0)
with torch.no_grad():
output = model(**passage_inputs, return_dict=True, output_hidden_states=True)
doc_embeddings = get_embedding(output.hidden_states[-1], passage_inputs["attention_mask"])
```
### Compute Similarity
```python
from torch.nn.functional import cosine_similarity
num_queries = query_embeddings.size(0)
num_passages = doc_embeddings.size(0)
for i in range(num_queries):
query_embedding = query_embeddings[i].unsqueeze(0)
similarities = cosine_similarity(query_embedding, doc_embeddings)
print(f"Similarities for Query {i+1}: {similarities.cpu().float().numpy()}")
```
### Encode Document Text
This DSE checkpoint is warm-up with `Tevatron/msmarco-passage-aug`, thus the model can also effectively encode document as text input.
```python
passage_prompts = [
"The llama (/ˈlɑːmə/; Spanish pronunciation: [ˈʎama] or [ˈʝama]) (Lama glama) is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.</s>",
"Llama (acronym for Large Language Model Meta AI, and formerly stylized as LLaMA) is a family of autoregressive large language models (LLMs) released by Meta AI starting in February 2023.[2][3] The latest version is Llama 3.1, released in July 2024.[4]</s>"
]
passage_inputs = processor(passage_prompts, images=None, return_tensors="pt", padding="longest", max_length=4096, truncation=True).to('cuda:0')
with torch.no_grad():
output = model(**passage_inputs, return_dict=True, output_hidden_states=True)
doc_embeddings = get_embedding(output.hidden_states[-1], passage_inputs["attention_mask"])
for i in range(num_queries):
query_embedding = query_embeddings[i].unsqueeze(0)
similarities = cosine_similarity(query_embedding, doc_embeddings)
print(f"Similarities for Query {i+1}: {similarities.cpu().float().numpy()}")
```
### Citation
If you find this checkpoint is helpful, please consider cite Phi3, Docmatix and our DSE work.
| null |
Non_BioNLP
|
# DSE-Phi3-Docmatix-V1
DSE-Phi3-Docmatix-V1 is a bi-encoder model designed to encode document screenshots into dense vectors for document retrieval. The Document Screenshot Embedding ([DSE](https://arxiv.org/abs/2406.11251)) approach captures documents in their original visual format, preserving all information such as text, images, and layout, thus avoiding tedious parsing and potential information loss.
The model, `Tevatron/dse-phi3-docmatix-v1`, is trained using 1/10 of the `Tevatron/docmatix-ir` dataset, a variant of `HuggingFaceM4/Docmatix` specifically adapted for training PDF retrievers with Vision Language Models in open-domain question answering scenarios. For more information on dataset filtering and hard negative mining, refer to the [docmatix-ir](https://huggingface.co/datasets/Tevatron/docmatix-ir/blob/main/README.md) dataset page.
DSE has strong zero-shot effectiveness for document retrieval both with visual input and text input.
For example, DSE-Phi3-Docmatix-V1 achieves 74.1 nDCG@5 on [ViDoRE](https://huggingface.co/spaces/vidore/vidore-leaderboard) leaderboard in **zero-shot setting** (without finetuning with ViDoRe training data).
## How to train the model from scratch
Please see https://github.com/texttron/tevatron/tree/main/examples/dse
## How to Use the Model
### Load the Model and Processor
```python
import torch
from transformers import AutoProcessor, AutoModelForCausalLM
processor = AutoProcessor.from_pretrained('Tevatron/dse-phi3-docmatix-v1', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('Tevatron/dse-phi3-docmatix-v1', trust_remote_code=True, attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, use_cache=False).to('cuda:0')
def get_embedding(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
sequence_lengths = attention_mask.sum(dim=1) - 1
bs = last_hidden_state.shape[0]
reps = last_hidden_state[torch.arange(bs, device=last_hidden_state.device), sequence_lengths]
reps = torch.nn.functional.normalize(reps, p=2, dim=-1)
return reps
```
### Encode Text Query
```python
queries = ["query: Where can we see Llama?</s>", "query: What is LLaMA model?</s>"]
query_inputs = processor(queries, return_tensors="pt", padding="longest", max_length=128, truncation=True).to('cuda:0')
with torch.no_grad():
output = model(**query_inputs, return_dict=True, output_hidden_states=True)
query_embeddings = get_embedding(output.hidden_states[-1], query_inputs["attention_mask"])
```
### Encode Document Screenshot
```python
from PIL import Image
import requests
from io import BytesIO
# URLs of the images
url1 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v1/resolve/main/animal-llama.png"
url2 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v1/resolve/main/meta-llama.png"
# Download and open images
response1 = requests.get(url1)
response2 = requests.get(url2)
passage_image1 = Image.open(BytesIO(response1.content))
passage_image2 = Image.open(BytesIO(response2.content))
passage_images = [passage_image1, passage_image2]
passage_prompts = ["<|image_1|>\nWhat is shown in this image?</s>", "<|image_2|>\nWhat is shown in this image?</s>"]
# Process inputs and get embeddings
passage_inputs = processor(passage_prompts, images=passage_images, return_tensors="pt", padding="longest", max_length=4096, truncation=True).to('cuda:0')
passage_inputs['input_ids'] = passage_inputs['input_ids'].squeeze(0)
passage_inputs['attention_mask'] = passage_inputs['attention_mask'].squeeze(0)
passage_inputs['image_sizes'] = passage_inputs['image_sizes'].squeeze(0)
with torch.no_grad():
output = model(**passage_inputs, return_dict=True, output_hidden_states=True)
doc_embeddings = get_embedding(output.hidden_states[-1], passage_inputs["attention_mask"])
```
### Compute Similarity
```python
from torch.nn.functional import cosine_similarity
num_queries = query_embeddings.size(0)
num_passages = doc_embeddings.size(0)
for i in range(num_queries):
query_embedding = query_embeddings[i].unsqueeze(0)
similarities = cosine_similarity(query_embedding, doc_embeddings)
print(f"Similarities for Query {i+1}: {similarities.cpu().float().numpy()}")
```
### Encode Document Text
This DSE checkpoint is warm-up with `Tevatron/msmarco-passage-aug`, thus the model can also effectively encode document as text input.
```python
passage_prompts = [
"The llama (/ˈlɑːmə/; Spanish pronunciation: [ˈʎama] or [ˈʝama]) (Lama glama) is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.</s>",
"Llama (acronym for Large Language Model Meta AI, and formerly stylized as LLaMA) is a family of autoregressive large language models (LLMs) released by Meta AI starting in February 2023.[2][3] The latest version is Llama 3.1, released in July 2024.[4]</s>"
]
passage_inputs = processor(passage_prompts, images=None, return_tensors="pt", padding="longest", max_length=4096, truncation=True).to('cuda:0')
with torch.no_grad():
output = model(**passage_inputs, return_dict=True, output_hidden_states=True)
doc_embeddings = get_embedding(output.hidden_states[-1], passage_inputs["attention_mask"])
for i in range(num_queries):
query_embedding = query_embeddings[i].unsqueeze(0)
similarities = cosine_similarity(query_embedding, doc_embeddings)
print(f"Similarities for Query {i+1}: {similarities.cpu().float().numpy()}")
```
### Citation
If you find this checkpoint is helpful, please consider cite Phi3, Docmatix and our DSE work.
|
{"datasets": ["Tevatron/docmatix-ir", "HuggingFaceM4/Docmatix", "Tevatron/msmarco-passage-aug"], "language": ["en"], "library_name": "Tevatron", "license": "mit", "tags": ["vidore"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,334 |
onnx-community/tiny-random-MarianMTModel
|
onnx-community
|
translation
|
[
"transformers.js",
"onnx",
"marian",
"text2text-generation",
"translation",
"base_model:hf-internal-testing/tiny-random-MarianMTModel",
"base_model:quantized:hf-internal-testing/tiny-random-MarianMTModel",
"region:us"
] | 2024-08-27T16:34:10Z |
2024-10-08T13:54:00+00:00
| 231 | 0 |
---
base_model: hf-internal-testing/tiny-random-MarianMTModel
library_name: transformers.js
pipeline_tag: translation
---
https://huggingface.co/hf-internal-testing/tiny-random-MarianMTModel with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
| null |
Non_BioNLP
| ERROR: type should be string, got "\nhttps://huggingface.co/hf-internal-testing/tiny-random-MarianMTModel with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)." |
{"base_model": "hf-internal-testing/tiny-random-MarianMTModel", "library_name": "transformers.js", "pipeline_tag": "translation"}
|
task
|
[
"TRANSLATION"
] | 45,335 |
anirudh21/bert-base-uncased-finetuned-mrpc
|
anirudh21
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2022-01-27T05:26:21+00:00
| 114 | 1 |
---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-mrpc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- type: accuracy
value: 0.7916666666666666
name: Accuracy
- type: f1
value: 0.8590381426202321
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- Accuracy: 0.7917
- F1: 0.8590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.5387 | 0.7402 | 0.8349 |
| No log | 2.0 | 126 | 0.5770 | 0.7696 | 0.8513 |
| No log | 3.0 | 189 | 0.5357 | 0.7574 | 0.8223 |
| No log | 4.0 | 252 | 0.6645 | 0.7917 | 0.8590 |
| No log | 5.0 | 315 | 0.6977 | 0.7721 | 0.8426 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- Accuracy: 0.7917
- F1: 0.8590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.5387 | 0.7402 | 0.8349 |
| No log | 2.0 | 126 | 0.5770 | 0.7696 | 0.8513 |
| No log | 3.0 | 189 | 0.5357 | 0.7574 | 0.8223 |
| No log | 4.0 | 252 | 0.6645 | 0.7917 | 0.8590 |
| No log | 5.0 | 315 | 0.6977 | 0.7721 | 0.8426 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.7916666666666666, "name": "Accuracy"}, {"type": "f1", "value": 0.8590381426202321, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,336 |
Helsinki-NLP/opus-mt-rnd-en
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rnd",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T12:03:07+00:00
| 46 | 0 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-rnd-en
* source languages: rnd
* target languages: en
* OPUS readme: [rnd-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rnd.en | 37.8 | 0.531 |
| null |
Non_BioNLP
|
### opus-mt-rnd-en
* source languages: rnd
* target languages: en
* OPUS readme: [rnd-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rnd.en | 37.8 | 0.531 |
|
{"license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 45,337 |
learn2pro/distilbert-base-uncased_emotion_ft_learn2pro
|
learn2pro
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-24T04:38:08Z |
2023-06-24T05:17:15+00:00
| 12 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
- precision
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased_emotion_ft_learn2pro
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.937
name: Accuracy
- type: f1
value: 0.9372926688327409
name: F1
- type: precision
value: 0.9097477369572983
name: Precision
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft_learn2pro
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1427
- Accuracy: 0.937
- F1: 0.9373
- Precision: 0.9097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| 0.7939 | 1.0 | 250 | 0.2551 | 0.9115 | 0.9095 | 0.8923 |
| 0.2063 | 2.0 | 500 | 0.1629 | 0.931 | 0.9310 | 0.9116 |
| 0.1384 | 3.0 | 750 | 0.1491 | 0.9375 | 0.9380 | 0.9073 |
| 0.1099 | 4.0 | 1000 | 0.1427 | 0.937 | 0.9373 | 0.9097 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft_learn2pro
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1427
- Accuracy: 0.937
- F1: 0.9373
- Precision: 0.9097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| 0.7939 | 1.0 | 250 | 0.2551 | 0.9115 | 0.9095 | 0.8923 |
| 0.2063 | 2.0 | 500 | 0.1629 | 0.931 | 0.9310 | 0.9116 |
| 0.1384 | 3.0 | 750 | 0.1491 | 0.9375 | 0.9380 | 0.9073 |
| 0.1099 | 4.0 | 1000 | 0.1427 | 0.937 | 0.9373 | 0.9097 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1", "precision"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased_emotion_ft_learn2pro", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.937, "name": "Accuracy"}, {"type": "f1", "value": 0.9372926688327409, "name": "F1"}, {"type": "precision", "value": 0.9097477369572983, "name": "Precision"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,338 |
nazhan/bge-small-en-v1.5-brahmaputra-iter-10-2nd
|
nazhan
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"model-index",
"region:us"
] | 2024-09-11T06:02:26Z |
2024-09-11T06:02:30+00:00
| 14 | 0 |
---
base_model: BAAI/bge-small-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Show me data_asset_kpi_cf group by quarter.
- text: i want to get trend analysis and group by product
- text: Can I have data_asset_kpi_bs details.
- text: I don't want to produce that specific data.
- text: What are the details of the orders placed before December 31st, 2023?
inference: true
model-index:
- name: SetFit with BAAI/bge-small-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9915254237288136
name: Accuracy
---
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Lookup | <ul><li>"Show me the products with 'Tablet' in the name and filter by price above 200."</li><li>'Can you get me the products with a price above 100?'</li><li>'Filter by employees with a salary above 60,000 and show me their first names.'</li></ul> |
| Aggregation | <ul><li>'What’s the total revenue generated by each employee in 2023?'</li><li>'Get me data_asset_001_pcc group by category.'</li><li>'Show me max revenue'</li></ul> |
| Tablejoin | <ul><li>'Show me a merge of key performance metrics and cash flow.'</li><li>'How can I integrate the Customers and Orders tables to identify customers with multiple recent orders?'</li><li>'Can you integrate data from the Products and Orders tables to determine the revenue generated by each product?'</li></ul> |
| Viewtables | <ul><li>'How can I view all of the tables stored within the starhub_data_asset database?'</li><li>'What are the tables that I can access in the starhub_data_asset database?'</li><li>'What are the available tables that are relevant to pricing strategies within starhub_data_asset database?'</li></ul> |
| Lookup_1 | <ul><li>'Display data_asset_kpi_cf.'</li><li>'Get me data_asset_001_ta trend history.'</li><li>'Show me data_asset_kpi_cf details.'</li></ul> |
| Rejection | <ul><li>"I don't want to apply any filters now."</li><li>"I don't want to apply any filters to this."</li><li>"I'd prefer not to apply any filters."</li></ul> |
| Generalreply | <ul><li>"What's your favorite TV show of all time?"</li><li>"i'll start dinner at 6:00."</li><li>"Oh, that's a tough one! There are so many good memories to choose from. But if I had to pick just one, I think it would be spending summers at my grandparent's house. We would play board games, make homemade ice cream, and have big family dinners. It was always so much fun!"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9915 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("nazhan/bge-small-en-v1.5-brahmaputra-iter-10-2nd")
# Run inference
preds = model("Can I have data_asset_kpi_bs details.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 8.8375 | 62 |
| Label | Training Sample Count |
|:-------------|:----------------------|
| Tablejoin | 122 |
| Rejection | 69 |
| Aggregation | 287 |
| Lookup | 59 |
| Generalreply | 71 |
| Viewtables | 79 |
| Lookup_1 | 156 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.2355 | - |
| 0.0014 | 50 | 0.2202 | - |
| 0.0028 | 100 | 0.1664 | - |
| 0.0042 | 150 | 0.216 | - |
| 0.0056 | 200 | 0.2341 | - |
| 0.0070 | 250 | 0.2279 | - |
| 0.0084 | 300 | 0.1786 | - |
| 0.0098 | 350 | 0.1603 | - |
| 0.0112 | 400 | 0.0821 | - |
| 0.0126 | 450 | 0.1498 | - |
| 0.0140 | 500 | 0.0942 | - |
| 0.0155 | 550 | 0.0999 | - |
| 0.0169 | 600 | 0.0895 | - |
| 0.0183 | 650 | 0.0841 | - |
| 0.0197 | 700 | 0.1433 | - |
| 0.0211 | 750 | 0.0808 | - |
| 0.0225 | 800 | 0.0346 | - |
| 0.0239 | 850 | 0.0556 | - |
| 0.0253 | 900 | 0.0755 | - |
| 0.0267 | 950 | 0.0346 | - |
| 0.0281 | 1000 | 0.0486 | - |
| 0.0295 | 1050 | 0.0207 | - |
| 0.0309 | 1100 | 0.0126 | - |
| 0.0323 | 1150 | 0.0113 | - |
| 0.0337 | 1200 | 0.0076 | - |
| 0.0351 | 1250 | 0.0082 | - |
| 0.0365 | 1300 | 0.0142 | - |
| 0.0379 | 1350 | 0.011 | - |
| 0.0393 | 1400 | 0.0034 | - |
| 0.0407 | 1450 | 0.0123 | - |
| 0.0421 | 1500 | 0.0062 | - |
| 0.0435 | 1550 | 0.0021 | - |
| 0.0449 | 1600 | 0.005 | - |
| 0.0464 | 1650 | 0.0124 | - |
| 0.0478 | 1700 | 0.0026 | - |
| 0.0492 | 1750 | 0.0029 | - |
| 0.0506 | 1800 | 0.0023 | - |
| 0.0520 | 1850 | 0.0017 | - |
| 0.0534 | 1900 | 0.0027 | - |
| 0.0548 | 1950 | 0.0017 | - |
| 0.0562 | 2000 | 0.0043 | - |
| 0.0576 | 2050 | 0.0018 | - |
| 0.0590 | 2100 | 0.0032 | - |
| 0.0604 | 2150 | 0.0022 | - |
| 0.0618 | 2200 | 0.0052 | - |
| 0.0632 | 2250 | 0.0025 | - |
| 0.0646 | 2300 | 0.0018 | - |
| 0.0660 | 2350 | 0.0016 | - |
| 0.0674 | 2400 | 0.0016 | - |
| 0.0688 | 2450 | 0.001 | - |
| 0.0702 | 2500 | 0.0015 | - |
| 0.0716 | 2550 | 0.0013 | - |
| 0.0730 | 2600 | 0.0012 | - |
| 0.0744 | 2650 | 0.0012 | - |
| 0.0759 | 2700 | 0.0017 | - |
| 0.0773 | 2750 | 0.0016 | - |
| 0.0787 | 2800 | 0.0018 | - |
| 0.0801 | 2850 | 0.0007 | - |
| 0.0815 | 2900 | 0.0008 | - |
| 0.0829 | 2950 | 0.0016 | - |
| 0.0843 | 3000 | 0.0008 | - |
| 0.0857 | 3050 | 0.0011 | - |
| 0.0871 | 3100 | 0.0013 | - |
| 0.0885 | 3150 | 0.0012 | - |
| 0.0899 | 3200 | 0.0006 | - |
| 0.0913 | 3250 | 0.0012 | - |
| 0.0927 | 3300 | 0.0009 | - |
| 0.0941 | 3350 | 0.0007 | - |
| 0.0955 | 3400 | 0.0006 | - |
| 0.0969 | 3450 | 0.0011 | - |
| 0.0983 | 3500 | 0.0012 | - |
| 0.0997 | 3550 | 0.0008 | - |
| 0.1011 | 3600 | 0.0009 | - |
| 0.1025 | 3650 | 0.0007 | - |
| 0.1039 | 3700 | 0.001 | - |
| 0.1053 | 3750 | 0.0006 | - |
| 0.1068 | 3800 | 0.0008 | - |
| 0.1082 | 3850 | 0.0007 | - |
| 0.1096 | 3900 | 0.0008 | - |
| 0.1110 | 3950 | 0.0006 | - |
| 0.1124 | 4000 | 0.0004 | - |
| 0.1138 | 4050 | 0.001 | - |
| 0.1152 | 4100 | 0.001 | - |
| 0.1166 | 4150 | 0.0007 | - |
| 0.1180 | 4200 | 0.0006 | - |
| 0.1194 | 4250 | 0.0006 | - |
| 0.1208 | 4300 | 0.0004 | - |
| 0.1222 | 4350 | 0.0008 | - |
| 0.1236 | 4400 | 0.0005 | - |
| 0.1250 | 4450 | 0.0007 | - |
| 0.1264 | 4500 | 0.0007 | - |
| 0.1278 | 4550 | 0.001 | - |
| 0.1292 | 4600 | 0.0007 | - |
| 0.1306 | 4650 | 0.0005 | - |
| 0.1320 | 4700 | 0.0006 | - |
| 0.1334 | 4750 | 0.0007 | - |
| 0.1348 | 4800 | 0.0003 | - |
| 0.1363 | 4850 | 0.0009 | - |
| 0.1377 | 4900 | 0.0008 | - |
| 0.1391 | 4950 | 0.0005 | - |
| 0.1405 | 5000 | 0.0005 | - |
| 0.1419 | 5050 | 0.0005 | - |
| 0.1433 | 5100 | 0.0005 | - |
| 0.1447 | 5150 | 0.0004 | - |
| 0.1461 | 5200 | 0.0005 | - |
| 0.1475 | 5250 | 0.0006 | - |
| 0.1489 | 5300 | 0.0007 | - |
| 0.1503 | 5350 | 0.0004 | - |
| 0.1517 | 5400 | 0.0007 | - |
| 0.1531 | 5450 | 0.0006 | - |
| 0.1545 | 5500 | 0.0006 | - |
| 0.1559 | 5550 | 0.0005 | - |
| 0.1573 | 5600 | 0.0005 | - |
| 0.1587 | 5650 | 0.0005 | - |
| 0.1601 | 5700 | 0.0007 | - |
| 0.1615 | 5750 | 0.0007 | - |
| 0.1629 | 5800 | 0.0004 | - |
| 0.1643 | 5850 | 0.0007 | - |
| 0.1657 | 5900 | 0.0006 | - |
| 0.1672 | 5950 | 0.0005 | - |
| 0.1686 | 6000 | 0.0005 | - |
| 0.1700 | 6050 | 0.0004 | - |
| 0.1714 | 6100 | 0.0005 | - |
| 0.1728 | 6150 | 0.0005 | - |
| 0.1742 | 6200 | 0.0004 | - |
| 0.1756 | 6250 | 0.0006 | - |
| 0.1770 | 6300 | 0.0004 | - |
| 0.1784 | 6350 | 0.0004 | - |
| 0.1798 | 6400 | 0.0004 | - |
| 0.1812 | 6450 | 0.0005 | - |
| 0.1826 | 6500 | 0.0005 | - |
| 0.1840 | 6550 | 0.0004 | - |
| 0.1854 | 6600 | 0.0003 | - |
| 0.1868 | 6650 | 0.0004 | - |
| 0.1882 | 6700 | 0.0004 | - |
| 0.1896 | 6750 | 0.0004 | - |
| 0.1910 | 6800 | 0.0006 | - |
| 0.1924 | 6850 | 0.0004 | - |
| 0.1938 | 6900 | 0.0004 | - |
| 0.1952 | 6950 | 0.0003 | - |
| 0.1967 | 7000 | 0.0004 | - |
| 0.1981 | 7050 | 0.0004 | - |
| 0.1995 | 7100 | 0.0003 | - |
| 0.2009 | 7150 | 0.0006 | - |
| 0.2023 | 7200 | 0.0005 | - |
| 0.2037 | 7250 | 0.0005 | - |
| 0.2051 | 7300 | 0.0003 | - |
| 0.2065 | 7350 | 0.0003 | - |
| 0.2079 | 7400 | 0.0004 | - |
| 0.2093 | 7450 | 0.0006 | - |
| 0.2107 | 7500 | 0.0004 | - |
| 0.2121 | 7550 | 0.0003 | - |
| 0.2135 | 7600 | 0.0005 | - |
| 0.2149 | 7650 | 0.0005 | - |
| 0.2163 | 7700 | 0.0005 | - |
| 0.2177 | 7750 | 0.0003 | - |
| 0.2191 | 7800 | 0.0004 | - |
| 0.2205 | 7850 | 0.0003 | - |
| 0.2219 | 7900 | 0.0004 | - |
| 0.2233 | 7950 | 0.0003 | - |
| 0.2247 | 8000 | 0.0003 | - |
| 0.2261 | 8050 | 0.0008 | - |
| 0.2276 | 8100 | 0.0003 | - |
| 0.2290 | 8150 | 0.0003 | - |
| 0.2304 | 8200 | 0.0003 | - |
| 0.2318 | 8250 | 0.0003 | - |
| 0.2332 | 8300 | 0.0004 | - |
| 0.2346 | 8350 | 0.0003 | - |
| 0.2360 | 8400 | 0.0002 | - |
| 0.2374 | 8450 | 0.0005 | - |
| 0.2388 | 8500 | 0.0003 | - |
| 0.2402 | 8550 | 0.0002 | - |
| 0.2416 | 8600 | 0.0005 | - |
| 0.2430 | 8650 | 0.0005 | - |
| 0.2444 | 8700 | 0.0005 | - |
| 0.2458 | 8750 | 0.0002 | - |
| 0.2472 | 8800 | 0.0004 | - |
| 0.2486 | 8850 | 0.0003 | - |
| 0.2500 | 8900 | 0.0002 | - |
| 0.2514 | 8950 | 0.0003 | - |
| 0.2528 | 9000 | 0.0003 | - |
| 0.2542 | 9050 | 0.0002 | - |
| 0.2556 | 9100 | 0.0003 | - |
| 0.2571 | 9150 | 0.0003 | - |
| 0.2585 | 9200 | 0.0005 | - |
| 0.2599 | 9250 | 0.0004 | - |
| 0.2613 | 9300 | 0.0002 | - |
| 0.2627 | 9350 | 0.0002 | - |
| 0.2641 | 9400 | 0.0003 | - |
| 0.2655 | 9450 | 0.0003 | - |
| 0.2669 | 9500 | 0.0003 | - |
| 0.2683 | 9550 | 0.0002 | - |
| 0.2697 | 9600 | 0.0003 | - |
| 0.2711 | 9650 | 0.0003 | - |
| 0.2725 | 9700 | 0.0003 | - |
| 0.2739 | 9750 | 0.0006 | - |
| 0.2753 | 9800 | 0.0003 | - |
| 0.2767 | 9850 | 0.0002 | - |
| 0.2781 | 9900 | 0.0003 | - |
| 0.2795 | 9950 | 0.0004 | - |
| 0.2809 | 10000 | 0.0005 | - |
| 0.2823 | 10050 | 0.0003 | - |
| 0.2837 | 10100 | 0.0003 | - |
| 0.2851 | 10150 | 0.0003 | - |
| 0.2865 | 10200 | 0.0004 | - |
| 0.2880 | 10250 | 0.0004 | - |
| 0.2894 | 10300 | 0.0003 | - |
| 0.2908 | 10350 | 0.0003 | - |
| 0.2922 | 10400 | 0.0003 | - |
| 0.2936 | 10450 | 0.0002 | - |
| 0.2950 | 10500 | 0.0003 | - |
| 0.2964 | 10550 | 0.0002 | - |
| 0.2978 | 10600 | 0.0003 | - |
| 0.2992 | 10650 | 0.0003 | - |
| 0.3006 | 10700 | 0.0003 | - |
| 0.3020 | 10750 | 0.0003 | - |
| 0.3034 | 10800 | 0.0003 | - |
| 0.3048 | 10850 | 0.0004 | - |
| 0.3062 | 10900 | 0.0003 | - |
| 0.3076 | 10950 | 0.0002 | - |
| 0.3090 | 11000 | 0.0003 | - |
| 0.3104 | 11050 | 0.0002 | - |
| 0.3118 | 11100 | 0.0003 | - |
| 0.3132 | 11150 | 0.0002 | - |
| 0.3146 | 11200 | 0.0003 | - |
| 0.3160 | 11250 | 0.0004 | - |
| 0.3175 | 11300 | 0.0003 | - |
| 0.3189 | 11350 | 0.0003 | - |
| 0.3203 | 11400 | 0.0003 | - |
| 0.3217 | 11450 | 0.0001 | - |
| 0.3231 | 11500 | 0.0002 | - |
| 0.3245 | 11550 | 0.0003 | - |
| 0.3259 | 11600 | 0.0003 | - |
| 0.3273 | 11650 | 0.0002 | - |
| 0.3287 | 11700 | 0.0004 | - |
| 0.3301 | 11750 | 0.0003 | - |
| 0.3315 | 11800 | 0.0002 | - |
| 0.3329 | 11850 | 0.0003 | - |
| 0.3343 | 11900 | 0.0003 | - |
| 0.3357 | 11950 | 0.0003 | - |
| 0.3371 | 12000 | 0.0003 | - |
| 0.3385 | 12050 | 0.0002 | - |
| 0.3399 | 12100 | 0.0002 | - |
| 0.3413 | 12150 | 0.0002 | - |
| 0.3427 | 12200 | 0.0002 | - |
| 0.3441 | 12250 | 0.0003 | - |
| 0.3455 | 12300 | 0.0003 | - |
| 0.3469 | 12350 | 0.0003 | - |
| 0.3484 | 12400 | 0.0003 | - |
| 0.3498 | 12450 | 0.0002 | - |
| 0.3512 | 12500 | 0.0003 | - |
| 0.3526 | 12550 | 0.0002 | - |
| 0.3540 | 12600 | 0.0004 | - |
| 0.3554 | 12650 | 0.0003 | - |
| 0.3568 | 12700 | 0.0003 | - |
| 0.3582 | 12750 | 0.0003 | - |
| 0.3596 | 12800 | 0.0002 | - |
| 0.3610 | 12850 | 0.0002 | - |
| 0.3624 | 12900 | 0.0003 | - |
| 0.3638 | 12950 | 0.0002 | - |
| 0.3652 | 13000 | 0.0003 | - |
| 0.3666 | 13050 | 0.0002 | - |
| 0.3680 | 13100 | 0.0003 | - |
| 0.3694 | 13150 | 0.0003 | - |
| 0.3708 | 13200 | 0.0003 | - |
| 0.3722 | 13250 | 0.0002 | - |
| 0.3736 | 13300 | 0.0002 | - |
| 0.3750 | 13350 | 0.0003 | - |
| 0.3764 | 13400 | 0.0002 | - |
| 0.3779 | 13450 | 0.0004 | - |
| 0.3793 | 13500 | 0.0003 | - |
| 0.3807 | 13550 | 0.0002 | - |
| 0.3821 | 13600 | 0.0003 | - |
| 0.3835 | 13650 | 0.0002 | - |
| 0.3849 | 13700 | 0.0003 | - |
| 0.3863 | 13750 | 0.0003 | - |
| 0.3877 | 13800 | 0.0003 | - |
| 0.3891 | 13850 | 0.0002 | - |
| 0.3905 | 13900 | 0.0003 | - |
| 0.3919 | 13950 | 0.0002 | - |
| 0.3933 | 14000 | 0.0003 | - |
| 0.3947 | 14050 | 0.0004 | - |
| 0.3961 | 14100 | 0.0003 | - |
| 0.3975 | 14150 | 0.0003 | - |
| 0.3989 | 14200 | 0.0003 | - |
| 0.4003 | 14250 | 0.0002 | - |
| 0.4017 | 14300 | 0.0003 | - |
| 0.4031 | 14350 | 0.0002 | - |
| 0.4045 | 14400 | 0.0003 | - |
| 0.4059 | 14450 | 0.0002 | - |
| 0.4073 | 14500 | 0.0002 | - |
| 0.4088 | 14550 | 0.0002 | - |
| 0.4102 | 14600 | 0.0002 | - |
| 0.4116 | 14650 | 0.0002 | - |
| 0.4130 | 14700 | 0.0002 | - |
| 0.4144 | 14750 | 0.0004 | - |
| 0.4158 | 14800 | 0.0002 | - |
| 0.4172 | 14850 | 0.0002 | - |
| 0.4186 | 14900 | 0.0002 | - |
| 0.4200 | 14950 | 0.0002 | - |
| 0.4214 | 15000 | 0.0003 | - |
| 0.4228 | 15050 | 0.0002 | - |
| 0.4242 | 15100 | 0.0003 | - |
| 0.4256 | 15150 | 0.0002 | - |
| 0.4270 | 15200 | 0.0003 | - |
| 0.4284 | 15250 | 0.0003 | - |
| 0.4298 | 15300 | 0.0003 | - |
| 0.4312 | 15350 | 0.0013 | - |
| 0.4326 | 15400 | 0.0002 | - |
| 0.4340 | 15450 | 0.0002 | - |
| 0.4354 | 15500 | 0.0003 | - |
| 0.4368 | 15550 | 0.0003 | - |
| 0.4383 | 15600 | 0.0002 | - |
| 0.4397 | 15650 | 0.0002 | - |
| 0.4411 | 15700 | 0.0002 | - |
| 0.4425 | 15750 | 0.0002 | - |
| 0.4439 | 15800 | 0.0003 | - |
| 0.4453 | 15850 | 0.0001 | - |
| 0.4467 | 15900 | 0.0003 | - |
| 0.4481 | 15950 | 0.0002 | - |
| 0.4495 | 16000 | 0.0001 | - |
| 0.4509 | 16050 | 0.0003 | - |
| 0.4523 | 16100 | 0.0003 | - |
| 0.4537 | 16150 | 0.0003 | - |
| 0.4551 | 16200 | 0.0002 | - |
| 0.4565 | 16250 | 0.0001 | - |
| 0.4579 | 16300 | 0.0001 | - |
| 0.4593 | 16350 | 0.0001 | - |
| 0.4607 | 16400 | 0.0003 | - |
| 0.4621 | 16450 | 0.0002 | - |
| 0.4635 | 16500 | 0.0002 | - |
| 0.4649 | 16550 | 0.0002 | - |
| 0.4663 | 16600 | 0.0003 | - |
| 0.4677 | 16650 | 0.0002 | - |
| 0.4692 | 16700 | 0.0003 | - |
| 0.4706 | 16750 | 0.0002 | - |
| 0.4720 | 16800 | 0.0002 | - |
| 0.4734 | 16850 | 0.0002 | - |
| 0.4748 | 16900 | 0.0002 | - |
| 0.4762 | 16950 | 0.0003 | - |
| 0.4776 | 17000 | 0.0002 | - |
| 0.4790 | 17050 | 0.0002 | - |
| 0.4804 | 17100 | 0.0003 | - |
| 0.4818 | 17150 | 0.0001 | - |
| 0.4832 | 17200 | 0.0002 | - |
| 0.4846 | 17250 | 0.0002 | - |
| 0.4860 | 17300 | 0.0002 | - |
| 0.4874 | 17350 | 0.0001 | - |
| 0.4888 | 17400 | 0.0002 | - |
| 0.4902 | 17450 | 0.0002 | - |
| 0.4916 | 17500 | 0.0002 | - |
| 0.4930 | 17550 | 0.0002 | - |
| 0.4944 | 17600 | 0.0002 | - |
| 0.4958 | 17650 | 0.0003 | - |
| 0.4972 | 17700 | 0.0003 | - |
| 0.4987 | 17750 | 0.0002 | - |
| 0.5001 | 17800 | 0.0001 | - |
| 0.5015 | 17850 | 0.0002 | - |
| 0.5029 | 17900 | 0.0003 | - |
| 0.5043 | 17950 | 0.0002 | - |
| 0.5057 | 18000 | 0.0001 | - |
| 0.5071 | 18050 | 0.0003 | - |
| 0.5085 | 18100 | 0.0004 | - |
| 0.5099 | 18150 | 0.0002 | - |
| 0.5113 | 18200 | 0.0002 | - |
| 0.5127 | 18250 | 0.0002 | - |
| 0.5141 | 18300 | 0.0002 | - |
| 0.5155 | 18350 | 0.0002 | - |
| 0.5169 | 18400 | 0.0001 | - |
| 0.5183 | 18450 | 0.0001 | - |
| 0.5197 | 18500 | 0.0002 | - |
| 0.5211 | 18550 | 0.0002 | - |
| 0.5225 | 18600 | 0.0618 | - |
| 0.5239 | 18650 | 0.0003 | - |
| 0.5253 | 18700 | 0.0003 | - |
| 0.5267 | 18750 | 0.0002 | - |
| 0.5281 | 18800 | 0.0002 | - |
| 0.5296 | 18850 | 0.0002 | - |
| 0.5310 | 18900 | 0.0001 | - |
| 0.5324 | 18950 | 0.0002 | - |
| 0.5338 | 19000 | 0.0002 | - |
| 0.5352 | 19050 | 0.0003 | - |
| 0.5366 | 19100 | 0.0002 | - |
| 0.5380 | 19150 | 0.0002 | - |
| 0.5394 | 19200 | 0.0001 | - |
| 0.5408 | 19250 | 0.0003 | - |
| 0.5422 | 19300 | 0.0003 | - |
| 0.5436 | 19350 | 0.0002 | - |
| 0.5450 | 19400 | 0.0002 | - |
| 0.5464 | 19450 | 0.0002 | - |
| 0.5478 | 19500 | 0.0002 | - |
| 0.5492 | 19550 | 0.0002 | - |
| 0.5506 | 19600 | 0.0001 | - |
| 0.5520 | 19650 | 0.0002 | - |
| 0.5534 | 19700 | 0.0003 | - |
| 0.5548 | 19750 | 0.0002 | - |
| 0.5562 | 19800 | 0.0003 | - |
| 0.5576 | 19850 | 0.0002 | - |
| 0.5591 | 19900 | 0.0001 | - |
| 0.5605 | 19950 | 0.0001 | - |
| 0.5619 | 20000 | 0.0001 | - |
| 0.5633 | 20050 | 0.0002 | - |
| 0.5647 | 20100 | 0.0002 | - |
| 0.5661 | 20150 | 0.0002 | - |
| 0.5675 | 20200 | 0.0002 | - |
| 0.5689 | 20250 | 0.0002 | - |
| 0.5703 | 20300 | 0.0002 | - |
| 0.5717 | 20350 | 0.0001 | - |
| 0.5731 | 20400 | 0.0001 | - |
| 0.5745 | 20450 | 0.0002 | - |
| 0.5759 | 20500 | 0.0002 | - |
| 0.5773 | 20550 | 0.0001 | - |
| 0.5787 | 20600 | 0.0001 | - |
| 0.5801 | 20650 | 0.0002 | - |
| 0.5815 | 20700 | 0.0001 | - |
| 0.5829 | 20750 | 0.0002 | - |
| 0.5843 | 20800 | 0.0001 | - |
| 0.5857 | 20850 | 0.0002 | - |
| 0.5871 | 20900 | 0.0002 | - |
| 0.5885 | 20950 | 0.0001 | - |
| 0.5900 | 21000 | 0.0001 | - |
| 0.5914 | 21050 | 0.0001 | - |
| 0.5928 | 21100 | 0.0002 | - |
| 0.5942 | 21150 | 0.0002 | - |
| 0.5956 | 21200 | 0.0001 | - |
| 0.5970 | 21250 | 0.0002 | - |
| 0.5984 | 21300 | 0.0001 | - |
| 0.5998 | 21350 | 0.0002 | - |
| 0.6012 | 21400 | 0.0002 | - |
| 0.6026 | 21450 | 0.0002 | - |
| 0.6040 | 21500 | 0.0003 | - |
| 0.6054 | 21550 | 0.0002 | - |
| 0.6068 | 21600 | 0.0002 | - |
| 0.6082 | 21650 | 0.0003 | - |
| 0.6096 | 21700 | 0.0002 | - |
| 0.6110 | 21750 | 0.0001 | - |
| 0.6124 | 21800 | 0.0003 | - |
| 0.6138 | 21850 | 0.0001 | - |
| 0.6152 | 21900 | 0.0002 | - |
| 0.6166 | 21950 | 0.0001 | - |
| 0.6180 | 22000 | 0.0002 | - |
| 0.6195 | 22050 | 0.0002 | - |
| 0.6209 | 22100 | 0.0001 | - |
| 0.6223 | 22150 | 0.0002 | - |
| 0.6237 | 22200 | 0.0001 | - |
| 0.6251 | 22250 | 0.0002 | - |
| 0.6265 | 22300 | 0.0002 | - |
| 0.6279 | 22350 | 0.0001 | - |
| 0.6293 | 22400 | 0.0002 | - |
| 0.6307 | 22450 | 0.0003 | - |
| 0.6321 | 22500 | 0.0001 | - |
| 0.6335 | 22550 | 0.0002 | - |
| 0.6349 | 22600 | 0.0001 | - |
| 0.6363 | 22650 | 0.0002 | - |
| 0.6377 | 22700 | 0.0002 | - |
| 0.6391 | 22750 | 0.0001 | - |
| 0.6405 | 22800 | 0.0002 | - |
| 0.6419 | 22850 | 0.0002 | - |
| 0.6433 | 22900 | 0.0002 | - |
| 0.6447 | 22950 | 0.0002 | - |
| 0.6461 | 23000 | 0.0003 | - |
| 0.6475 | 23050 | 0.0002 | - |
| 0.6489 | 23100 | 0.0001 | - |
| 0.6504 | 23150 | 0.0002 | - |
| 0.6518 | 23200 | 0.0001 | - |
| 0.6532 | 23250 | 0.0002 | - |
| 0.6546 | 23300 | 0.0001 | - |
| 0.6560 | 23350 | 0.0002 | - |
| 0.6574 | 23400 | 0.0003 | - |
| 0.6588 | 23450 | 0.0002 | - |
| 0.6602 | 23500 | 0.0002 | - |
| 0.6616 | 23550 | 0.0001 | - |
| 0.6630 | 23600 | 0.0003 | - |
| 0.6644 | 23650 | 0.0002 | - |
| 0.6658 | 23700 | 0.0001 | - |
| 0.6672 | 23750 | 0.0002 | - |
| 0.6686 | 23800 | 0.0001 | - |
| 0.6700 | 23850 | 0.0001 | - |
| 0.6714 | 23900 | 0.0002 | - |
| 0.6728 | 23950 | 0.0002 | - |
| 0.6742 | 24000 | 0.0002 | - |
| 0.6756 | 24050 | 0.0002 | - |
| 0.6770 | 24100 | 0.0001 | - |
| 0.6784 | 24150 | 0.0002 | - |
| 0.6799 | 24200 | 0.0002 | - |
| 0.6813 | 24250 | 0.0002 | - |
| 0.6827 | 24300 | 0.0001 | - |
| 0.6841 | 24350 | 0.0002 | - |
| 0.6855 | 24400 | 0.0002 | - |
| 0.6869 | 24450 | 0.0001 | - |
| 0.6883 | 24500 | 0.0001 | - |
| 0.6897 | 24550 | 0.0002 | - |
| 0.6911 | 24600 | 0.0001 | - |
| 0.6925 | 24650 | 0.0002 | - |
| 0.6939 | 24700 | 0.0001 | - |
| 0.6953 | 24750 | 0.0003 | - |
| 0.6967 | 24800 | 0.0001 | - |
| 0.6981 | 24850 | 0.0002 | - |
| 0.6995 | 24900 | 0.0001 | - |
| 0.7009 | 24950 | 0.0001 | - |
| 0.7023 | 25000 | 0.0002 | - |
| 0.7037 | 25050 | 0.0001 | - |
| 0.7051 | 25100 | 0.0002 | - |
| 0.7065 | 25150 | 0.0001 | - |
| 0.7079 | 25200 | 0.0002 | - |
| 0.7093 | 25250 | 0.0002 | - |
| 0.7108 | 25300 | 0.0001 | - |
| 0.7122 | 25350 | 0.0002 | - |
| 0.7136 | 25400 | 0.0001 | - |
| 0.7150 | 25450 | 0.0001 | - |
| 0.7164 | 25500 | 0.0001 | - |
| 0.7178 | 25550 | 0.0001 | - |
| 0.7192 | 25600 | 0.0002 | - |
| 0.7206 | 25650 | 0.0002 | - |
| 0.7220 | 25700 | 0.0001 | - |
| 0.7234 | 25750 | 0.0001 | - |
| 0.7248 | 25800 | 0.0001 | - |
| 0.7262 | 25850 | 0.0002 | - |
| 0.7276 | 25900 | 0.0002 | - |
| 0.7290 | 25950 | 0.0001 | - |
| 0.7304 | 26000 | 0.0001 | - |
| 0.7318 | 26050 | 0.0002 | - |
| 0.7332 | 26100 | 0.0001 | - |
| 0.7346 | 26150 | 0.0001 | - |
| 0.7360 | 26200 | 0.0001 | - |
| 0.7374 | 26250 | 0.0001 | - |
| 0.7388 | 26300 | 0.0001 | - |
| 0.7403 | 26350 | 0.0002 | - |
| 0.7417 | 26400 | 0.0002 | - |
| 0.7431 | 26450 | 0.0001 | - |
| 0.7445 | 26500 | 0.0002 | - |
| 0.7459 | 26550 | 0.0001 | - |
| 0.7473 | 26600 | 0.0001 | - |
| 0.7487 | 26650 | 0.0002 | - |
| 0.7501 | 26700 | 0.0001 | - |
| 0.7515 | 26750 | 0.0001 | - |
| 0.7529 | 26800 | 0.0001 | - |
| 0.7543 | 26850 | 0.0001 | - |
| 0.7557 | 26900 | 0.0001 | - |
| 0.7571 | 26950 | 0.0001 | - |
| 0.7585 | 27000 | 0.0002 | - |
| 0.7599 | 27050 | 0.0001 | - |
| 0.7613 | 27100 | 0.0002 | - |
| 0.7627 | 27150 | 0.0002 | - |
| 0.7641 | 27200 | 0.0001 | - |
| 0.7655 | 27250 | 0.0002 | - |
| 0.7669 | 27300 | 0.0001 | - |
| 0.7683 | 27350 | 0.0002 | - |
| 0.7697 | 27400 | 0.0001 | - |
| 0.7712 | 27450 | 0.0002 | - |
| 0.7726 | 27500 | 0.0001 | - |
| 0.7740 | 27550 | 0.0001 | - |
| 0.7754 | 27600 | 0.0001 | - |
| 0.7768 | 27650 | 0.0001 | - |
| 0.7782 | 27700 | 0.0001 | - |
| 0.7796 | 27750 | 0.0001 | - |
| 0.7810 | 27800 | 0.0001 | - |
| 0.7824 | 27850 | 0.0001 | - |
| 0.7838 | 27900 | 0.0001 | - |
| 0.7852 | 27950 | 0.0001 | - |
| 0.7866 | 28000 | 0.0001 | - |
| 0.7880 | 28050 | 0.0001 | - |
| 0.7894 | 28100 | 0.0001 | - |
| 0.7908 | 28150 | 0.0001 | - |
| 0.7922 | 28200 | 0.0001 | - |
| 0.7936 | 28250 | 0.0002 | - |
| 0.7950 | 28300 | 0.0002 | - |
| 0.7964 | 28350 | 0.0001 | - |
| 0.7978 | 28400 | 0.0002 | - |
| 0.7992 | 28450 | 0.0001 | - |
| 0.8007 | 28500 | 0.0001 | - |
| 0.8021 | 28550 | 0.0001 | - |
| 0.8035 | 28600 | 0.0001 | - |
| 0.8049 | 28650 | 0.0002 | - |
| 0.8063 | 28700 | 0.0001 | - |
| 0.8077 | 28750 | 0.0002 | - |
| 0.8091 | 28800 | 0.0001 | - |
| 0.8105 | 28850 | 0.0001 | - |
| 0.8119 | 28900 | 0.0001 | - |
| 0.8133 | 28950 | 0.0002 | - |
| 0.8147 | 29000 | 0.0001 | - |
| 0.8161 | 29050 | 0.0002 | - |
| 0.8175 | 29100 | 0.0002 | - |
| 0.8189 | 29150 | 0.0002 | - |
| 0.8203 | 29200 | 0.0001 | - |
| 0.8217 | 29250 | 0.0002 | - |
| 0.8231 | 29300 | 0.0001 | - |
| 0.8245 | 29350 | 0.0001 | - |
| 0.8259 | 29400 | 0.0001 | - |
| 0.8273 | 29450 | 0.0002 | - |
| 0.8287 | 29500 | 0.0001 | - |
| 0.8301 | 29550 | 0.0002 | - |
| 0.8316 | 29600 | 0.0001 | - |
| 0.8330 | 29650 | 0.0001 | - |
| 0.8344 | 29700 | 0.0001 | - |
| 0.8358 | 29750 | 0.0001 | - |
| 0.8372 | 29800 | 0.0001 | - |
| 0.8386 | 29850 | 0.0001 | - |
| 0.8400 | 29900 | 0.0001 | - |
| 0.8414 | 29950 | 0.0002 | - |
| 0.8428 | 30000 | 0.0002 | - |
| 0.8442 | 30050 | 0.0001 | - |
| 0.8456 | 30100 | 0.0001 | - |
| 0.8470 | 30150 | 0.0001 | - |
| 0.8484 | 30200 | 0.0001 | - |
| 0.8498 | 30250 | 0.0001 | - |
| 0.8512 | 30300 | 0.0001 | - |
| 0.8526 | 30350 | 0.0001 | - |
| 0.8540 | 30400 | 0.0001 | - |
| 0.8554 | 30450 | 0.0002 | - |
| 0.8568 | 30500 | 0.0001 | - |
| 0.8582 | 30550 | 0.0001 | - |
| 0.8596 | 30600 | 0.0 | - |
| 0.8611 | 30650 | 0.0001 | - |
| 0.8625 | 30700 | 0.0002 | - |
| 0.8639 | 30750 | 0.0002 | - |
| 0.8653 | 30800 | 0.0002 | - |
| 0.8667 | 30850 | 0.0001 | - |
| 0.8681 | 30900 | 0.0002 | - |
| 0.8695 | 30950 | 0.0001 | - |
| 0.8709 | 31000 | 0.0001 | - |
| 0.8723 | 31050 | 0.0001 | - |
| 0.8737 | 31100 | 0.0002 | - |
| 0.8751 | 31150 | 0.0002 | - |
| 0.8765 | 31200 | 0.0001 | - |
| 0.8779 | 31250 | 0.0001 | - |
| 0.8793 | 31300 | 0.0001 | - |
| 0.8807 | 31350 | 0.0001 | - |
| 0.8821 | 31400 | 0.0001 | - |
| 0.8835 | 31450 | 0.0001 | - |
| 0.8849 | 31500 | 0.0001 | - |
| 0.8863 | 31550 | 0.0002 | - |
| 0.8877 | 31600 | 0.0001 | - |
| 0.8891 | 31650 | 0.0001 | - |
| 0.8905 | 31700 | 0.0002 | - |
| 0.8920 | 31750 | 0.0001 | - |
| 0.8934 | 31800 | 0.0001 | - |
| 0.8948 | 31850 | 0.0001 | - |
| 0.8962 | 31900 | 0.0003 | - |
| 0.8976 | 31950 | 0.0002 | - |
| 0.8990 | 32000 | 0.0002 | - |
| 0.9004 | 32050 | 0.0001 | - |
| 0.9018 | 32100 | 0.0001 | - |
| 0.9032 | 32150 | 0.0002 | - |
| 0.9046 | 32200 | 0.0003 | - |
| 0.9060 | 32250 | 0.0001 | - |
| 0.9074 | 32300 | 0.0002 | - |
| 0.9088 | 32350 | 0.0001 | - |
| 0.9102 | 32400 | 0.0002 | - |
| 0.9116 | 32450 | 0.0002 | - |
| 0.9130 | 32500 | 0.0001 | - |
| 0.9144 | 32550 | 0.0001 | - |
| 0.9158 | 32600 | 0.0001 | - |
| 0.9172 | 32650 | 0.0001 | - |
| 0.9186 | 32700 | 0.0001 | - |
| 0.9200 | 32750 | 0.0001 | - |
| 0.9215 | 32800 | 0.0001 | - |
| 0.9229 | 32850 | 0.0001 | - |
| 0.9243 | 32900 | 0.0001 | - |
| 0.9257 | 32950 | 0.0001 | - |
| 0.9271 | 33000 | 0.0001 | - |
| 0.9285 | 33050 | 0.0002 | - |
| 0.9299 | 33100 | 0.0001 | - |
| 0.9313 | 33150 | 0.0002 | - |
| 0.9327 | 33200 | 0.0001 | - |
| 0.9341 | 33250 | 0.0001 | - |
| 0.9355 | 33300 | 0.0002 | - |
| 0.9369 | 33350 | 0.0001 | - |
| 0.9383 | 33400 | 0.0001 | - |
| 0.9397 | 33450 | 0.0001 | - |
| 0.9411 | 33500 | 0.0001 | - |
| 0.9425 | 33550 | 0.0001 | - |
| 0.9439 | 33600 | 0.0001 | - |
| 0.9453 | 33650 | 0.0001 | - |
| 0.9467 | 33700 | 0.0002 | - |
| 0.9481 | 33750 | 0.0001 | - |
| 0.9495 | 33800 | 0.0001 | - |
| 0.9509 | 33850 | 0.0002 | - |
| 0.9524 | 33900 | 0.0001 | - |
| 0.9538 | 33950 | 0.0001 | - |
| 0.9552 | 34000 | 0.0002 | - |
| 0.9566 | 34050 | 0.0001 | - |
| 0.9580 | 34100 | 0.0001 | - |
| 0.9594 | 34150 | 0.0001 | - |
| 0.9608 | 34200 | 0.0002 | - |
| 0.9622 | 34250 | 0.0001 | - |
| 0.9636 | 34300 | 0.0001 | - |
| 0.9650 | 34350 | 0.0001 | - |
| 0.9664 | 34400 | 0.0001 | - |
| 0.9678 | 34450 | 0.0003 | - |
| 0.9692 | 34500 | 0.0001 | - |
| 0.9706 | 34550 | 0.0001 | - |
| 0.9720 | 34600 | 0.0001 | - |
| 0.9734 | 34650 | 0.0001 | - |
| 0.9748 | 34700 | 0.0001 | - |
| 0.9762 | 34750 | 0.0001 | - |
| 0.9776 | 34800 | 0.0002 | - |
| 0.9790 | 34850 | 0.0001 | - |
| 0.9804 | 34900 | 0.0002 | - |
| 0.9819 | 34950 | 0.0001 | - |
| 0.9833 | 35000 | 0.0002 | - |
| 0.9847 | 35050 | 0.0001 | - |
| 0.9861 | 35100 | 0.0001 | - |
| 0.9875 | 35150 | 0.0001 | - |
| 0.9889 | 35200 | 0.0001 | - |
| 0.9903 | 35250 | 0.0001 | - |
| 0.9917 | 35300 | 0.0001 | - |
| 0.9931 | 35350 | 0.0001 | - |
| 0.9945 | 35400 | 0.0001 | - |
| 0.9959 | 35450 | 0.0001 | - |
| 0.9973 | 35500 | 0.0001 | - |
| 0.9987 | 35550 | 0.0001 | - |
| **1.0** | **35596** | **-** | **0.0121** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.9
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.42.4
- PyTorch: 2.4.0+cu121
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Lookup | <ul><li>"Show me the products with 'Tablet' in the name and filter by price above 200."</li><li>'Can you get me the products with a price above 100?'</li><li>'Filter by employees with a salary above 60,000 and show me their first names.'</li></ul> |
| Aggregation | <ul><li>'What’s the total revenue generated by each employee in 2023?'</li><li>'Get me data_asset_001_pcc group by category.'</li><li>'Show me max revenue'</li></ul> |
| Tablejoin | <ul><li>'Show me a merge of key performance metrics and cash flow.'</li><li>'How can I integrate the Customers and Orders tables to identify customers with multiple recent orders?'</li><li>'Can you integrate data from the Products and Orders tables to determine the revenue generated by each product?'</li></ul> |
| Viewtables | <ul><li>'How can I view all of the tables stored within the starhub_data_asset database?'</li><li>'What are the tables that I can access in the starhub_data_asset database?'</li><li>'What are the available tables that are relevant to pricing strategies within starhub_data_asset database?'</li></ul> |
| Lookup_1 | <ul><li>'Display data_asset_kpi_cf.'</li><li>'Get me data_asset_001_ta trend history.'</li><li>'Show me data_asset_kpi_cf details.'</li></ul> |
| Rejection | <ul><li>"I don't want to apply any filters now."</li><li>"I don't want to apply any filters to this."</li><li>"I'd prefer not to apply any filters."</li></ul> |
| Generalreply | <ul><li>"What's your favorite TV show of all time?"</li><li>"i'll start dinner at 6:00."</li><li>"Oh, that's a tough one! There are so many good memories to choose from. But if I had to pick just one, I think it would be spending summers at my grandparent's house. We would play board games, make homemade ice cream, and have big family dinners. It was always so much fun!"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9915 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("nazhan/bge-small-en-v1.5-brahmaputra-iter-10-2nd")
# Run inference
preds = model("Can I have data_asset_kpi_bs details.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 8.8375 | 62 |
| Label | Training Sample Count |
|:-------------|:----------------------|
| Tablejoin | 122 |
| Rejection | 69 |
| Aggregation | 287 |
| Lookup | 59 |
| Generalreply | 71 |
| Viewtables | 79 |
| Lookup_1 | 156 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.2355 | - |
| 0.0014 | 50 | 0.2202 | - |
| 0.0028 | 100 | 0.1664 | - |
| 0.0042 | 150 | 0.216 | - |
| 0.0056 | 200 | 0.2341 | - |
| 0.0070 | 250 | 0.2279 | - |
| 0.0084 | 300 | 0.1786 | - |
| 0.0098 | 350 | 0.1603 | - |
| 0.0112 | 400 | 0.0821 | - |
| 0.0126 | 450 | 0.1498 | - |
| 0.0140 | 500 | 0.0942 | - |
| 0.0155 | 550 | 0.0999 | - |
| 0.0169 | 600 | 0.0895 | - |
| 0.0183 | 650 | 0.0841 | - |
| 0.0197 | 700 | 0.1433 | - |
| 0.0211 | 750 | 0.0808 | - |
| 0.0225 | 800 | 0.0346 | - |
| 0.0239 | 850 | 0.0556 | - |
| 0.0253 | 900 | 0.0755 | - |
| 0.0267 | 950 | 0.0346 | - |
| 0.0281 | 1000 | 0.0486 | - |
| 0.0295 | 1050 | 0.0207 | - |
| 0.0309 | 1100 | 0.0126 | - |
| 0.0323 | 1150 | 0.0113 | - |
| 0.0337 | 1200 | 0.0076 | - |
| 0.0351 | 1250 | 0.0082 | - |
| 0.0365 | 1300 | 0.0142 | - |
| 0.0379 | 1350 | 0.011 | - |
| 0.0393 | 1400 | 0.0034 | - |
| 0.0407 | 1450 | 0.0123 | - |
| 0.0421 | 1500 | 0.0062 | - |
| 0.0435 | 1550 | 0.0021 | - |
| 0.0449 | 1600 | 0.005 | - |
| 0.0464 | 1650 | 0.0124 | - |
| 0.0478 | 1700 | 0.0026 | - |
| 0.0492 | 1750 | 0.0029 | - |
| 0.0506 | 1800 | 0.0023 | - |
| 0.0520 | 1850 | 0.0017 | - |
| 0.0534 | 1900 | 0.0027 | - |
| 0.0548 | 1950 | 0.0017 | - |
| 0.0562 | 2000 | 0.0043 | - |
| 0.0576 | 2050 | 0.0018 | - |
| 0.0590 | 2100 | 0.0032 | - |
| 0.0604 | 2150 | 0.0022 | - |
| 0.0618 | 2200 | 0.0052 | - |
| 0.0632 | 2250 | 0.0025 | - |
| 0.0646 | 2300 | 0.0018 | - |
| 0.0660 | 2350 | 0.0016 | - |
| 0.0674 | 2400 | 0.0016 | - |
| 0.0688 | 2450 | 0.001 | - |
| 0.0702 | 2500 | 0.0015 | - |
| 0.0716 | 2550 | 0.0013 | - |
| 0.0730 | 2600 | 0.0012 | - |
| 0.0744 | 2650 | 0.0012 | - |
| 0.0759 | 2700 | 0.0017 | - |
| 0.0773 | 2750 | 0.0016 | - |
| 0.0787 | 2800 | 0.0018 | - |
| 0.0801 | 2850 | 0.0007 | - |
| 0.0815 | 2900 | 0.0008 | - |
| 0.0829 | 2950 | 0.0016 | - |
| 0.0843 | 3000 | 0.0008 | - |
| 0.0857 | 3050 | 0.0011 | - |
| 0.0871 | 3100 | 0.0013 | - |
| 0.0885 | 3150 | 0.0012 | - |
| 0.0899 | 3200 | 0.0006 | - |
| 0.0913 | 3250 | 0.0012 | - |
| 0.0927 | 3300 | 0.0009 | - |
| 0.0941 | 3350 | 0.0007 | - |
| 0.0955 | 3400 | 0.0006 | - |
| 0.0969 | 3450 | 0.0011 | - |
| 0.0983 | 3500 | 0.0012 | - |
| 0.0997 | 3550 | 0.0008 | - |
| 0.1011 | 3600 | 0.0009 | - |
| 0.1025 | 3650 | 0.0007 | - |
| 0.1039 | 3700 | 0.001 | - |
| 0.1053 | 3750 | 0.0006 | - |
| 0.1068 | 3800 | 0.0008 | - |
| 0.1082 | 3850 | 0.0007 | - |
| 0.1096 | 3900 | 0.0008 | - |
| 0.1110 | 3950 | 0.0006 | - |
| 0.1124 | 4000 | 0.0004 | - |
| 0.1138 | 4050 | 0.001 | - |
| 0.1152 | 4100 | 0.001 | - |
| 0.1166 | 4150 | 0.0007 | - |
| 0.1180 | 4200 | 0.0006 | - |
| 0.1194 | 4250 | 0.0006 | - |
| 0.1208 | 4300 | 0.0004 | - |
| 0.1222 | 4350 | 0.0008 | - |
| 0.1236 | 4400 | 0.0005 | - |
| 0.1250 | 4450 | 0.0007 | - |
| 0.1264 | 4500 | 0.0007 | - |
| 0.1278 | 4550 | 0.001 | - |
| 0.1292 | 4600 | 0.0007 | - |
| 0.1306 | 4650 | 0.0005 | - |
| 0.1320 | 4700 | 0.0006 | - |
| 0.1334 | 4750 | 0.0007 | - |
| 0.1348 | 4800 | 0.0003 | - |
| 0.1363 | 4850 | 0.0009 | - |
| 0.1377 | 4900 | 0.0008 | - |
| 0.1391 | 4950 | 0.0005 | - |
| 0.1405 | 5000 | 0.0005 | - |
| 0.1419 | 5050 | 0.0005 | - |
| 0.1433 | 5100 | 0.0005 | - |
| 0.1447 | 5150 | 0.0004 | - |
| 0.1461 | 5200 | 0.0005 | - |
| 0.1475 | 5250 | 0.0006 | - |
| 0.1489 | 5300 | 0.0007 | - |
| 0.1503 | 5350 | 0.0004 | - |
| 0.1517 | 5400 | 0.0007 | - |
| 0.1531 | 5450 | 0.0006 | - |
| 0.1545 | 5500 | 0.0006 | - |
| 0.1559 | 5550 | 0.0005 | - |
| 0.1573 | 5600 | 0.0005 | - |
| 0.1587 | 5650 | 0.0005 | - |
| 0.1601 | 5700 | 0.0007 | - |
| 0.1615 | 5750 | 0.0007 | - |
| 0.1629 | 5800 | 0.0004 | - |
| 0.1643 | 5850 | 0.0007 | - |
| 0.1657 | 5900 | 0.0006 | - |
| 0.1672 | 5950 | 0.0005 | - |
| 0.1686 | 6000 | 0.0005 | - |
| 0.1700 | 6050 | 0.0004 | - |
| 0.1714 | 6100 | 0.0005 | - |
| 0.1728 | 6150 | 0.0005 | - |
| 0.1742 | 6200 | 0.0004 | - |
| 0.1756 | 6250 | 0.0006 | - |
| 0.1770 | 6300 | 0.0004 | - |
| 0.1784 | 6350 | 0.0004 | - |
| 0.1798 | 6400 | 0.0004 | - |
| 0.1812 | 6450 | 0.0005 | - |
| 0.1826 | 6500 | 0.0005 | - |
| 0.1840 | 6550 | 0.0004 | - |
| 0.1854 | 6600 | 0.0003 | - |
| 0.1868 | 6650 | 0.0004 | - |
| 0.1882 | 6700 | 0.0004 | - |
| 0.1896 | 6750 | 0.0004 | - |
| 0.1910 | 6800 | 0.0006 | - |
| 0.1924 | 6850 | 0.0004 | - |
| 0.1938 | 6900 | 0.0004 | - |
| 0.1952 | 6950 | 0.0003 | - |
| 0.1967 | 7000 | 0.0004 | - |
| 0.1981 | 7050 | 0.0004 | - |
| 0.1995 | 7100 | 0.0003 | - |
| 0.2009 | 7150 | 0.0006 | - |
| 0.2023 | 7200 | 0.0005 | - |
| 0.2037 | 7250 | 0.0005 | - |
| 0.2051 | 7300 | 0.0003 | - |
| 0.2065 | 7350 | 0.0003 | - |
| 0.2079 | 7400 | 0.0004 | - |
| 0.2093 | 7450 | 0.0006 | - |
| 0.2107 | 7500 | 0.0004 | - |
| 0.2121 | 7550 | 0.0003 | - |
| 0.2135 | 7600 | 0.0005 | - |
| 0.2149 | 7650 | 0.0005 | - |
| 0.2163 | 7700 | 0.0005 | - |
| 0.2177 | 7750 | 0.0003 | - |
| 0.2191 | 7800 | 0.0004 | - |
| 0.2205 | 7850 | 0.0003 | - |
| 0.2219 | 7900 | 0.0004 | - |
| 0.2233 | 7950 | 0.0003 | - |
| 0.2247 | 8000 | 0.0003 | - |
| 0.2261 | 8050 | 0.0008 | - |
| 0.2276 | 8100 | 0.0003 | - |
| 0.2290 | 8150 | 0.0003 | - |
| 0.2304 | 8200 | 0.0003 | - |
| 0.2318 | 8250 | 0.0003 | - |
| 0.2332 | 8300 | 0.0004 | - |
| 0.2346 | 8350 | 0.0003 | - |
| 0.2360 | 8400 | 0.0002 | - |
| 0.2374 | 8450 | 0.0005 | - |
| 0.2388 | 8500 | 0.0003 | - |
| 0.2402 | 8550 | 0.0002 | - |
| 0.2416 | 8600 | 0.0005 | - |
| 0.2430 | 8650 | 0.0005 | - |
| 0.2444 | 8700 | 0.0005 | - |
| 0.2458 | 8750 | 0.0002 | - |
| 0.2472 | 8800 | 0.0004 | - |
| 0.2486 | 8850 | 0.0003 | - |
| 0.2500 | 8900 | 0.0002 | - |
| 0.2514 | 8950 | 0.0003 | - |
| 0.2528 | 9000 | 0.0003 | - |
| 0.2542 | 9050 | 0.0002 | - |
| 0.2556 | 9100 | 0.0003 | - |
| 0.2571 | 9150 | 0.0003 | - |
| 0.2585 | 9200 | 0.0005 | - |
| 0.2599 | 9250 | 0.0004 | - |
| 0.2613 | 9300 | 0.0002 | - |
| 0.2627 | 9350 | 0.0002 | - |
| 0.2641 | 9400 | 0.0003 | - |
| 0.2655 | 9450 | 0.0003 | - |
| 0.2669 | 9500 | 0.0003 | - |
| 0.2683 | 9550 | 0.0002 | - |
| 0.2697 | 9600 | 0.0003 | - |
| 0.2711 | 9650 | 0.0003 | - |
| 0.2725 | 9700 | 0.0003 | - |
| 0.2739 | 9750 | 0.0006 | - |
| 0.2753 | 9800 | 0.0003 | - |
| 0.2767 | 9850 | 0.0002 | - |
| 0.2781 | 9900 | 0.0003 | - |
| 0.2795 | 9950 | 0.0004 | - |
| 0.2809 | 10000 | 0.0005 | - |
| 0.2823 | 10050 | 0.0003 | - |
| 0.2837 | 10100 | 0.0003 | - |
| 0.2851 | 10150 | 0.0003 | - |
| 0.2865 | 10200 | 0.0004 | - |
| 0.2880 | 10250 | 0.0004 | - |
| 0.2894 | 10300 | 0.0003 | - |
| 0.2908 | 10350 | 0.0003 | - |
| 0.2922 | 10400 | 0.0003 | - |
| 0.2936 | 10450 | 0.0002 | - |
| 0.2950 | 10500 | 0.0003 | - |
| 0.2964 | 10550 | 0.0002 | - |
| 0.2978 | 10600 | 0.0003 | - |
| 0.2992 | 10650 | 0.0003 | - |
| 0.3006 | 10700 | 0.0003 | - |
| 0.3020 | 10750 | 0.0003 | - |
| 0.3034 | 10800 | 0.0003 | - |
| 0.3048 | 10850 | 0.0004 | - |
| 0.3062 | 10900 | 0.0003 | - |
| 0.3076 | 10950 | 0.0002 | - |
| 0.3090 | 11000 | 0.0003 | - |
| 0.3104 | 11050 | 0.0002 | - |
| 0.3118 | 11100 | 0.0003 | - |
| 0.3132 | 11150 | 0.0002 | - |
| 0.3146 | 11200 | 0.0003 | - |
| 0.3160 | 11250 | 0.0004 | - |
| 0.3175 | 11300 | 0.0003 | - |
| 0.3189 | 11350 | 0.0003 | - |
| 0.3203 | 11400 | 0.0003 | - |
| 0.3217 | 11450 | 0.0001 | - |
| 0.3231 | 11500 | 0.0002 | - |
| 0.3245 | 11550 | 0.0003 | - |
| 0.3259 | 11600 | 0.0003 | - |
| 0.3273 | 11650 | 0.0002 | - |
| 0.3287 | 11700 | 0.0004 | - |
| 0.3301 | 11750 | 0.0003 | - |
| 0.3315 | 11800 | 0.0002 | - |
| 0.3329 | 11850 | 0.0003 | - |
| 0.3343 | 11900 | 0.0003 | - |
| 0.3357 | 11950 | 0.0003 | - |
| 0.3371 | 12000 | 0.0003 | - |
| 0.3385 | 12050 | 0.0002 | - |
| 0.3399 | 12100 | 0.0002 | - |
| 0.3413 | 12150 | 0.0002 | - |
| 0.3427 | 12200 | 0.0002 | - |
| 0.3441 | 12250 | 0.0003 | - |
| 0.3455 | 12300 | 0.0003 | - |
| 0.3469 | 12350 | 0.0003 | - |
| 0.3484 | 12400 | 0.0003 | - |
| 0.3498 | 12450 | 0.0002 | - |
| 0.3512 | 12500 | 0.0003 | - |
| 0.3526 | 12550 | 0.0002 | - |
| 0.3540 | 12600 | 0.0004 | - |
| 0.3554 | 12650 | 0.0003 | - |
| 0.3568 | 12700 | 0.0003 | - |
| 0.3582 | 12750 | 0.0003 | - |
| 0.3596 | 12800 | 0.0002 | - |
| 0.3610 | 12850 | 0.0002 | - |
| 0.3624 | 12900 | 0.0003 | - |
| 0.3638 | 12950 | 0.0002 | - |
| 0.3652 | 13000 | 0.0003 | - |
| 0.3666 | 13050 | 0.0002 | - |
| 0.3680 | 13100 | 0.0003 | - |
| 0.3694 | 13150 | 0.0003 | - |
| 0.3708 | 13200 | 0.0003 | - |
| 0.3722 | 13250 | 0.0002 | - |
| 0.3736 | 13300 | 0.0002 | - |
| 0.3750 | 13350 | 0.0003 | - |
| 0.3764 | 13400 | 0.0002 | - |
| 0.3779 | 13450 | 0.0004 | - |
| 0.3793 | 13500 | 0.0003 | - |
| 0.3807 | 13550 | 0.0002 | - |
| 0.3821 | 13600 | 0.0003 | - |
| 0.3835 | 13650 | 0.0002 | - |
| 0.3849 | 13700 | 0.0003 | - |
| 0.3863 | 13750 | 0.0003 | - |
| 0.3877 | 13800 | 0.0003 | - |
| 0.3891 | 13850 | 0.0002 | - |
| 0.3905 | 13900 | 0.0003 | - |
| 0.3919 | 13950 | 0.0002 | - |
| 0.3933 | 14000 | 0.0003 | - |
| 0.3947 | 14050 | 0.0004 | - |
| 0.3961 | 14100 | 0.0003 | - |
| 0.3975 | 14150 | 0.0003 | - |
| 0.3989 | 14200 | 0.0003 | - |
| 0.4003 | 14250 | 0.0002 | - |
| 0.4017 | 14300 | 0.0003 | - |
| 0.4031 | 14350 | 0.0002 | - |
| 0.4045 | 14400 | 0.0003 | - |
| 0.4059 | 14450 | 0.0002 | - |
| 0.4073 | 14500 | 0.0002 | - |
| 0.4088 | 14550 | 0.0002 | - |
| 0.4102 | 14600 | 0.0002 | - |
| 0.4116 | 14650 | 0.0002 | - |
| 0.4130 | 14700 | 0.0002 | - |
| 0.4144 | 14750 | 0.0004 | - |
| 0.4158 | 14800 | 0.0002 | - |
| 0.4172 | 14850 | 0.0002 | - |
| 0.4186 | 14900 | 0.0002 | - |
| 0.4200 | 14950 | 0.0002 | - |
| 0.4214 | 15000 | 0.0003 | - |
| 0.4228 | 15050 | 0.0002 | - |
| 0.4242 | 15100 | 0.0003 | - |
| 0.4256 | 15150 | 0.0002 | - |
| 0.4270 | 15200 | 0.0003 | - |
| 0.4284 | 15250 | 0.0003 | - |
| 0.4298 | 15300 | 0.0003 | - |
| 0.4312 | 15350 | 0.0013 | - |
| 0.4326 | 15400 | 0.0002 | - |
| 0.4340 | 15450 | 0.0002 | - |
| 0.4354 | 15500 | 0.0003 | - |
| 0.4368 | 15550 | 0.0003 | - |
| 0.4383 | 15600 | 0.0002 | - |
| 0.4397 | 15650 | 0.0002 | - |
| 0.4411 | 15700 | 0.0002 | - |
| 0.4425 | 15750 | 0.0002 | - |
| 0.4439 | 15800 | 0.0003 | - |
| 0.4453 | 15850 | 0.0001 | - |
| 0.4467 | 15900 | 0.0003 | - |
| 0.4481 | 15950 | 0.0002 | - |
| 0.4495 | 16000 | 0.0001 | - |
| 0.4509 | 16050 | 0.0003 | - |
| 0.4523 | 16100 | 0.0003 | - |
| 0.4537 | 16150 | 0.0003 | - |
| 0.4551 | 16200 | 0.0002 | - |
| 0.4565 | 16250 | 0.0001 | - |
| 0.4579 | 16300 | 0.0001 | - |
| 0.4593 | 16350 | 0.0001 | - |
| 0.4607 | 16400 | 0.0003 | - |
| 0.4621 | 16450 | 0.0002 | - |
| 0.4635 | 16500 | 0.0002 | - |
| 0.4649 | 16550 | 0.0002 | - |
| 0.4663 | 16600 | 0.0003 | - |
| 0.4677 | 16650 | 0.0002 | - |
| 0.4692 | 16700 | 0.0003 | - |
| 0.4706 | 16750 | 0.0002 | - |
| 0.4720 | 16800 | 0.0002 | - |
| 0.4734 | 16850 | 0.0002 | - |
| 0.4748 | 16900 | 0.0002 | - |
| 0.4762 | 16950 | 0.0003 | - |
| 0.4776 | 17000 | 0.0002 | - |
| 0.4790 | 17050 | 0.0002 | - |
| 0.4804 | 17100 | 0.0003 | - |
| 0.4818 | 17150 | 0.0001 | - |
| 0.4832 | 17200 | 0.0002 | - |
| 0.4846 | 17250 | 0.0002 | - |
| 0.4860 | 17300 | 0.0002 | - |
| 0.4874 | 17350 | 0.0001 | - |
| 0.4888 | 17400 | 0.0002 | - |
| 0.4902 | 17450 | 0.0002 | - |
| 0.4916 | 17500 | 0.0002 | - |
| 0.4930 | 17550 | 0.0002 | - |
| 0.4944 | 17600 | 0.0002 | - |
| 0.4958 | 17650 | 0.0003 | - |
| 0.4972 | 17700 | 0.0003 | - |
| 0.4987 | 17750 | 0.0002 | - |
| 0.5001 | 17800 | 0.0001 | - |
| 0.5015 | 17850 | 0.0002 | - |
| 0.5029 | 17900 | 0.0003 | - |
| 0.5043 | 17950 | 0.0002 | - |
| 0.5057 | 18000 | 0.0001 | - |
| 0.5071 | 18050 | 0.0003 | - |
| 0.5085 | 18100 | 0.0004 | - |
| 0.5099 | 18150 | 0.0002 | - |
| 0.5113 | 18200 | 0.0002 | - |
| 0.5127 | 18250 | 0.0002 | - |
| 0.5141 | 18300 | 0.0002 | - |
| 0.5155 | 18350 | 0.0002 | - |
| 0.5169 | 18400 | 0.0001 | - |
| 0.5183 | 18450 | 0.0001 | - |
| 0.5197 | 18500 | 0.0002 | - |
| 0.5211 | 18550 | 0.0002 | - |
| 0.5225 | 18600 | 0.0618 | - |
| 0.5239 | 18650 | 0.0003 | - |
| 0.5253 | 18700 | 0.0003 | - |
| 0.5267 | 18750 | 0.0002 | - |
| 0.5281 | 18800 | 0.0002 | - |
| 0.5296 | 18850 | 0.0002 | - |
| 0.5310 | 18900 | 0.0001 | - |
| 0.5324 | 18950 | 0.0002 | - |
| 0.5338 | 19000 | 0.0002 | - |
| 0.5352 | 19050 | 0.0003 | - |
| 0.5366 | 19100 | 0.0002 | - |
| 0.5380 | 19150 | 0.0002 | - |
| 0.5394 | 19200 | 0.0001 | - |
| 0.5408 | 19250 | 0.0003 | - |
| 0.5422 | 19300 | 0.0003 | - |
| 0.5436 | 19350 | 0.0002 | - |
| 0.5450 | 19400 | 0.0002 | - |
| 0.5464 | 19450 | 0.0002 | - |
| 0.5478 | 19500 | 0.0002 | - |
| 0.5492 | 19550 | 0.0002 | - |
| 0.5506 | 19600 | 0.0001 | - |
| 0.5520 | 19650 | 0.0002 | - |
| 0.5534 | 19700 | 0.0003 | - |
| 0.5548 | 19750 | 0.0002 | - |
| 0.5562 | 19800 | 0.0003 | - |
| 0.5576 | 19850 | 0.0002 | - |
| 0.5591 | 19900 | 0.0001 | - |
| 0.5605 | 19950 | 0.0001 | - |
| 0.5619 | 20000 | 0.0001 | - |
| 0.5633 | 20050 | 0.0002 | - |
| 0.5647 | 20100 | 0.0002 | - |
| 0.5661 | 20150 | 0.0002 | - |
| 0.5675 | 20200 | 0.0002 | - |
| 0.5689 | 20250 | 0.0002 | - |
| 0.5703 | 20300 | 0.0002 | - |
| 0.5717 | 20350 | 0.0001 | - |
| 0.5731 | 20400 | 0.0001 | - |
| 0.5745 | 20450 | 0.0002 | - |
| 0.5759 | 20500 | 0.0002 | - |
| 0.5773 | 20550 | 0.0001 | - |
| 0.5787 | 20600 | 0.0001 | - |
| 0.5801 | 20650 | 0.0002 | - |
| 0.5815 | 20700 | 0.0001 | - |
| 0.5829 | 20750 | 0.0002 | - |
| 0.5843 | 20800 | 0.0001 | - |
| 0.5857 | 20850 | 0.0002 | - |
| 0.5871 | 20900 | 0.0002 | - |
| 0.5885 | 20950 | 0.0001 | - |
| 0.5900 | 21000 | 0.0001 | - |
| 0.5914 | 21050 | 0.0001 | - |
| 0.5928 | 21100 | 0.0002 | - |
| 0.5942 | 21150 | 0.0002 | - |
| 0.5956 | 21200 | 0.0001 | - |
| 0.5970 | 21250 | 0.0002 | - |
| 0.5984 | 21300 | 0.0001 | - |
| 0.5998 | 21350 | 0.0002 | - |
| 0.6012 | 21400 | 0.0002 | - |
| 0.6026 | 21450 | 0.0002 | - |
| 0.6040 | 21500 | 0.0003 | - |
| 0.6054 | 21550 | 0.0002 | - |
| 0.6068 | 21600 | 0.0002 | - |
| 0.6082 | 21650 | 0.0003 | - |
| 0.6096 | 21700 | 0.0002 | - |
| 0.6110 | 21750 | 0.0001 | - |
| 0.6124 | 21800 | 0.0003 | - |
| 0.6138 | 21850 | 0.0001 | - |
| 0.6152 | 21900 | 0.0002 | - |
| 0.6166 | 21950 | 0.0001 | - |
| 0.6180 | 22000 | 0.0002 | - |
| 0.6195 | 22050 | 0.0002 | - |
| 0.6209 | 22100 | 0.0001 | - |
| 0.6223 | 22150 | 0.0002 | - |
| 0.6237 | 22200 | 0.0001 | - |
| 0.6251 | 22250 | 0.0002 | - |
| 0.6265 | 22300 | 0.0002 | - |
| 0.6279 | 22350 | 0.0001 | - |
| 0.6293 | 22400 | 0.0002 | - |
| 0.6307 | 22450 | 0.0003 | - |
| 0.6321 | 22500 | 0.0001 | - |
| 0.6335 | 22550 | 0.0002 | - |
| 0.6349 | 22600 | 0.0001 | - |
| 0.6363 | 22650 | 0.0002 | - |
| 0.6377 | 22700 | 0.0002 | - |
| 0.6391 | 22750 | 0.0001 | - |
| 0.6405 | 22800 | 0.0002 | - |
| 0.6419 | 22850 | 0.0002 | - |
| 0.6433 | 22900 | 0.0002 | - |
| 0.6447 | 22950 | 0.0002 | - |
| 0.6461 | 23000 | 0.0003 | - |
| 0.6475 | 23050 | 0.0002 | - |
| 0.6489 | 23100 | 0.0001 | - |
| 0.6504 | 23150 | 0.0002 | - |
| 0.6518 | 23200 | 0.0001 | - |
| 0.6532 | 23250 | 0.0002 | - |
| 0.6546 | 23300 | 0.0001 | - |
| 0.6560 | 23350 | 0.0002 | - |
| 0.6574 | 23400 | 0.0003 | - |
| 0.6588 | 23450 | 0.0002 | - |
| 0.6602 | 23500 | 0.0002 | - |
| 0.6616 | 23550 | 0.0001 | - |
| 0.6630 | 23600 | 0.0003 | - |
| 0.6644 | 23650 | 0.0002 | - |
| 0.6658 | 23700 | 0.0001 | - |
| 0.6672 | 23750 | 0.0002 | - |
| 0.6686 | 23800 | 0.0001 | - |
| 0.6700 | 23850 | 0.0001 | - |
| 0.6714 | 23900 | 0.0002 | - |
| 0.6728 | 23950 | 0.0002 | - |
| 0.6742 | 24000 | 0.0002 | - |
| 0.6756 | 24050 | 0.0002 | - |
| 0.6770 | 24100 | 0.0001 | - |
| 0.6784 | 24150 | 0.0002 | - |
| 0.6799 | 24200 | 0.0002 | - |
| 0.6813 | 24250 | 0.0002 | - |
| 0.6827 | 24300 | 0.0001 | - |
| 0.6841 | 24350 | 0.0002 | - |
| 0.6855 | 24400 | 0.0002 | - |
| 0.6869 | 24450 | 0.0001 | - |
| 0.6883 | 24500 | 0.0001 | - |
| 0.6897 | 24550 | 0.0002 | - |
| 0.6911 | 24600 | 0.0001 | - |
| 0.6925 | 24650 | 0.0002 | - |
| 0.6939 | 24700 | 0.0001 | - |
| 0.6953 | 24750 | 0.0003 | - |
| 0.6967 | 24800 | 0.0001 | - |
| 0.6981 | 24850 | 0.0002 | - |
| 0.6995 | 24900 | 0.0001 | - |
| 0.7009 | 24950 | 0.0001 | - |
| 0.7023 | 25000 | 0.0002 | - |
| 0.7037 | 25050 | 0.0001 | - |
| 0.7051 | 25100 | 0.0002 | - |
| 0.7065 | 25150 | 0.0001 | - |
| 0.7079 | 25200 | 0.0002 | - |
| 0.7093 | 25250 | 0.0002 | - |
| 0.7108 | 25300 | 0.0001 | - |
| 0.7122 | 25350 | 0.0002 | - |
| 0.7136 | 25400 | 0.0001 | - |
| 0.7150 | 25450 | 0.0001 | - |
| 0.7164 | 25500 | 0.0001 | - |
| 0.7178 | 25550 | 0.0001 | - |
| 0.7192 | 25600 | 0.0002 | - |
| 0.7206 | 25650 | 0.0002 | - |
| 0.7220 | 25700 | 0.0001 | - |
| 0.7234 | 25750 | 0.0001 | - |
| 0.7248 | 25800 | 0.0001 | - |
| 0.7262 | 25850 | 0.0002 | - |
| 0.7276 | 25900 | 0.0002 | - |
| 0.7290 | 25950 | 0.0001 | - |
| 0.7304 | 26000 | 0.0001 | - |
| 0.7318 | 26050 | 0.0002 | - |
| 0.7332 | 26100 | 0.0001 | - |
| 0.7346 | 26150 | 0.0001 | - |
| 0.7360 | 26200 | 0.0001 | - |
| 0.7374 | 26250 | 0.0001 | - |
| 0.7388 | 26300 | 0.0001 | - |
| 0.7403 | 26350 | 0.0002 | - |
| 0.7417 | 26400 | 0.0002 | - |
| 0.7431 | 26450 | 0.0001 | - |
| 0.7445 | 26500 | 0.0002 | - |
| 0.7459 | 26550 | 0.0001 | - |
| 0.7473 | 26600 | 0.0001 | - |
| 0.7487 | 26650 | 0.0002 | - |
| 0.7501 | 26700 | 0.0001 | - |
| 0.7515 | 26750 | 0.0001 | - |
| 0.7529 | 26800 | 0.0001 | - |
| 0.7543 | 26850 | 0.0001 | - |
| 0.7557 | 26900 | 0.0001 | - |
| 0.7571 | 26950 | 0.0001 | - |
| 0.7585 | 27000 | 0.0002 | - |
| 0.7599 | 27050 | 0.0001 | - |
| 0.7613 | 27100 | 0.0002 | - |
| 0.7627 | 27150 | 0.0002 | - |
| 0.7641 | 27200 | 0.0001 | - |
| 0.7655 | 27250 | 0.0002 | - |
| 0.7669 | 27300 | 0.0001 | - |
| 0.7683 | 27350 | 0.0002 | - |
| 0.7697 | 27400 | 0.0001 | - |
| 0.7712 | 27450 | 0.0002 | - |
| 0.7726 | 27500 | 0.0001 | - |
| 0.7740 | 27550 | 0.0001 | - |
| 0.7754 | 27600 | 0.0001 | - |
| 0.7768 | 27650 | 0.0001 | - |
| 0.7782 | 27700 | 0.0001 | - |
| 0.7796 | 27750 | 0.0001 | - |
| 0.7810 | 27800 | 0.0001 | - |
| 0.7824 | 27850 | 0.0001 | - |
| 0.7838 | 27900 | 0.0001 | - |
| 0.7852 | 27950 | 0.0001 | - |
| 0.7866 | 28000 | 0.0001 | - |
| 0.7880 | 28050 | 0.0001 | - |
| 0.7894 | 28100 | 0.0001 | - |
| 0.7908 | 28150 | 0.0001 | - |
| 0.7922 | 28200 | 0.0001 | - |
| 0.7936 | 28250 | 0.0002 | - |
| 0.7950 | 28300 | 0.0002 | - |
| 0.7964 | 28350 | 0.0001 | - |
| 0.7978 | 28400 | 0.0002 | - |
| 0.7992 | 28450 | 0.0001 | - |
| 0.8007 | 28500 | 0.0001 | - |
| 0.8021 | 28550 | 0.0001 | - |
| 0.8035 | 28600 | 0.0001 | - |
| 0.8049 | 28650 | 0.0002 | - |
| 0.8063 | 28700 | 0.0001 | - |
| 0.8077 | 28750 | 0.0002 | - |
| 0.8091 | 28800 | 0.0001 | - |
| 0.8105 | 28850 | 0.0001 | - |
| 0.8119 | 28900 | 0.0001 | - |
| 0.8133 | 28950 | 0.0002 | - |
| 0.8147 | 29000 | 0.0001 | - |
| 0.8161 | 29050 | 0.0002 | - |
| 0.8175 | 29100 | 0.0002 | - |
| 0.8189 | 29150 | 0.0002 | - |
| 0.8203 | 29200 | 0.0001 | - |
| 0.8217 | 29250 | 0.0002 | - |
| 0.8231 | 29300 | 0.0001 | - |
| 0.8245 | 29350 | 0.0001 | - |
| 0.8259 | 29400 | 0.0001 | - |
| 0.8273 | 29450 | 0.0002 | - |
| 0.8287 | 29500 | 0.0001 | - |
| 0.8301 | 29550 | 0.0002 | - |
| 0.8316 | 29600 | 0.0001 | - |
| 0.8330 | 29650 | 0.0001 | - |
| 0.8344 | 29700 | 0.0001 | - |
| 0.8358 | 29750 | 0.0001 | - |
| 0.8372 | 29800 | 0.0001 | - |
| 0.8386 | 29850 | 0.0001 | - |
| 0.8400 | 29900 | 0.0001 | - |
| 0.8414 | 29950 | 0.0002 | - |
| 0.8428 | 30000 | 0.0002 | - |
| 0.8442 | 30050 | 0.0001 | - |
| 0.8456 | 30100 | 0.0001 | - |
| 0.8470 | 30150 | 0.0001 | - |
| 0.8484 | 30200 | 0.0001 | - |
| 0.8498 | 30250 | 0.0001 | - |
| 0.8512 | 30300 | 0.0001 | - |
| 0.8526 | 30350 | 0.0001 | - |
| 0.8540 | 30400 | 0.0001 | - |
| 0.8554 | 30450 | 0.0002 | - |
| 0.8568 | 30500 | 0.0001 | - |
| 0.8582 | 30550 | 0.0001 | - |
| 0.8596 | 30600 | 0.0 | - |
| 0.8611 | 30650 | 0.0001 | - |
| 0.8625 | 30700 | 0.0002 | - |
| 0.8639 | 30750 | 0.0002 | - |
| 0.8653 | 30800 | 0.0002 | - |
| 0.8667 | 30850 | 0.0001 | - |
| 0.8681 | 30900 | 0.0002 | - |
| 0.8695 | 30950 | 0.0001 | - |
| 0.8709 | 31000 | 0.0001 | - |
| 0.8723 | 31050 | 0.0001 | - |
| 0.8737 | 31100 | 0.0002 | - |
| 0.8751 | 31150 | 0.0002 | - |
| 0.8765 | 31200 | 0.0001 | - |
| 0.8779 | 31250 | 0.0001 | - |
| 0.8793 | 31300 | 0.0001 | - |
| 0.8807 | 31350 | 0.0001 | - |
| 0.8821 | 31400 | 0.0001 | - |
| 0.8835 | 31450 | 0.0001 | - |
| 0.8849 | 31500 | 0.0001 | - |
| 0.8863 | 31550 | 0.0002 | - |
| 0.8877 | 31600 | 0.0001 | - |
| 0.8891 | 31650 | 0.0001 | - |
| 0.8905 | 31700 | 0.0002 | - |
| 0.8920 | 31750 | 0.0001 | - |
| 0.8934 | 31800 | 0.0001 | - |
| 0.8948 | 31850 | 0.0001 | - |
| 0.8962 | 31900 | 0.0003 | - |
| 0.8976 | 31950 | 0.0002 | - |
| 0.8990 | 32000 | 0.0002 | - |
| 0.9004 | 32050 | 0.0001 | - |
| 0.9018 | 32100 | 0.0001 | - |
| 0.9032 | 32150 | 0.0002 | - |
| 0.9046 | 32200 | 0.0003 | - |
| 0.9060 | 32250 | 0.0001 | - |
| 0.9074 | 32300 | 0.0002 | - |
| 0.9088 | 32350 | 0.0001 | - |
| 0.9102 | 32400 | 0.0002 | - |
| 0.9116 | 32450 | 0.0002 | - |
| 0.9130 | 32500 | 0.0001 | - |
| 0.9144 | 32550 | 0.0001 | - |
| 0.9158 | 32600 | 0.0001 | - |
| 0.9172 | 32650 | 0.0001 | - |
| 0.9186 | 32700 | 0.0001 | - |
| 0.9200 | 32750 | 0.0001 | - |
| 0.9215 | 32800 | 0.0001 | - |
| 0.9229 | 32850 | 0.0001 | - |
| 0.9243 | 32900 | 0.0001 | - |
| 0.9257 | 32950 | 0.0001 | - |
| 0.9271 | 33000 | 0.0001 | - |
| 0.9285 | 33050 | 0.0002 | - |
| 0.9299 | 33100 | 0.0001 | - |
| 0.9313 | 33150 | 0.0002 | - |
| 0.9327 | 33200 | 0.0001 | - |
| 0.9341 | 33250 | 0.0001 | - |
| 0.9355 | 33300 | 0.0002 | - |
| 0.9369 | 33350 | 0.0001 | - |
| 0.9383 | 33400 | 0.0001 | - |
| 0.9397 | 33450 | 0.0001 | - |
| 0.9411 | 33500 | 0.0001 | - |
| 0.9425 | 33550 | 0.0001 | - |
| 0.9439 | 33600 | 0.0001 | - |
| 0.9453 | 33650 | 0.0001 | - |
| 0.9467 | 33700 | 0.0002 | - |
| 0.9481 | 33750 | 0.0001 | - |
| 0.9495 | 33800 | 0.0001 | - |
| 0.9509 | 33850 | 0.0002 | - |
| 0.9524 | 33900 | 0.0001 | - |
| 0.9538 | 33950 | 0.0001 | - |
| 0.9552 | 34000 | 0.0002 | - |
| 0.9566 | 34050 | 0.0001 | - |
| 0.9580 | 34100 | 0.0001 | - |
| 0.9594 | 34150 | 0.0001 | - |
| 0.9608 | 34200 | 0.0002 | - |
| 0.9622 | 34250 | 0.0001 | - |
| 0.9636 | 34300 | 0.0001 | - |
| 0.9650 | 34350 | 0.0001 | - |
| 0.9664 | 34400 | 0.0001 | - |
| 0.9678 | 34450 | 0.0003 | - |
| 0.9692 | 34500 | 0.0001 | - |
| 0.9706 | 34550 | 0.0001 | - |
| 0.9720 | 34600 | 0.0001 | - |
| 0.9734 | 34650 | 0.0001 | - |
| 0.9748 | 34700 | 0.0001 | - |
| 0.9762 | 34750 | 0.0001 | - |
| 0.9776 | 34800 | 0.0002 | - |
| 0.9790 | 34850 | 0.0001 | - |
| 0.9804 | 34900 | 0.0002 | - |
| 0.9819 | 34950 | 0.0001 | - |
| 0.9833 | 35000 | 0.0002 | - |
| 0.9847 | 35050 | 0.0001 | - |
| 0.9861 | 35100 | 0.0001 | - |
| 0.9875 | 35150 | 0.0001 | - |
| 0.9889 | 35200 | 0.0001 | - |
| 0.9903 | 35250 | 0.0001 | - |
| 0.9917 | 35300 | 0.0001 | - |
| 0.9931 | 35350 | 0.0001 | - |
| 0.9945 | 35400 | 0.0001 | - |
| 0.9959 | 35450 | 0.0001 | - |
| 0.9973 | 35500 | 0.0001 | - |
| 0.9987 | 35550 | 0.0001 | - |
| **1.0** | **35596** | **-** | **0.0121** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.9
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.42.4
- PyTorch: 2.4.0+cu121
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-small-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Show me data_asset_kpi_cf group by quarter."}, {"text": "i want to get trend analysis and group by product"}, {"text": "Can I have data_asset_kpi_bs details."}, {"text": "I don't want to produce that specific data."}, {"text": "What are the details of the orders placed before December 31st, 2023?"}], "inference": true, "model-index": [{"name": "SetFit with BAAI/bge-small-en-v1.5", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9915254237288136, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,339 |
gokulsrinivasagan/distilbert_base_train_cola
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/distilbert_base_train",
"base_model:finetune:gokulsrinivasagan/distilbert_base_train",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-01-27T20:23:34Z |
2025-01-27T20:25:02+00:00
| 5 | 0 |
---
base_model: gokulsrinivasagan/distilbert_base_train
datasets:
- glue
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- matthews_correlation
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert_base_train_cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.06558874629318973
name: Matthews Correlation
- type: accuracy
value: 0.693192720413208
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_base_train_cola
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_base_train](https://huggingface.co/gokulsrinivasagan/distilbert_base_train) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6206
- Matthews Correlation: 0.0656
- Accuracy: 0.6932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6147 | 1.0 | 34 | 0.6206 | 0.0656 | 0.6932 |
| 0.5787 | 2.0 | 68 | 0.6306 | 0.0791 | 0.6826 |
| 0.5293 | 3.0 | 102 | 0.6425 | 0.0809 | 0.6798 |
| 0.477 | 4.0 | 136 | 0.6990 | 0.1211 | 0.6788 |
| 0.4259 | 5.0 | 170 | 0.7281 | 0.0835 | 0.6577 |
| 0.3818 | 6.0 | 204 | 0.8436 | 0.0718 | 0.6481 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_base_train_cola
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_base_train](https://huggingface.co/gokulsrinivasagan/distilbert_base_train) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6206
- Matthews Correlation: 0.0656
- Accuracy: 0.6932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6147 | 1.0 | 34 | 0.6206 | 0.0656 | 0.6932 |
| 0.5787 | 2.0 | 68 | 0.6306 | 0.0791 | 0.6826 |
| 0.5293 | 3.0 | 102 | 0.6425 | 0.0809 | 0.6798 |
| 0.477 | 4.0 | 136 | 0.6990 | 0.1211 | 0.6788 |
| 0.4259 | 5.0 | 170 | 0.7281 | 0.0835 | 0.6577 |
| 0.3818 | 6.0 | 204 | 0.8436 | 0.0718 | 0.6481 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "gokulsrinivasagan/distilbert_base_train", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["matthews_correlation", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_base_train_cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.06558874629318973, "name": "Matthews Correlation"}, {"type": "accuracy", "value": 0.693192720413208, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,340 |
ibm-research/re2g-qry-encoder-triviaqa
|
ibm-research
|
feature-extraction
|
[
"transformers",
"pytorch",
"dpr",
"feature-extraction",
"information retrieval",
"reranking",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-07-29T18:24:45Z |
2023-05-16T14:43:56+00:00
| 12 | 0 |
---
license: apache-2.0
tags:
- information retrieval
- reranking
---
# Model Card for TriviaQA Question Encoder in Re2G
# Model Details
> The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output.
<img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%">
## Training, Evaluation and Inference
The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g).
## Usage
The best way to use the model is by adapting the [dpr_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/dpr/dpr_apply.py)
## Citation
```
@inproceedings{glass-etal-2022-re2g,
title = "{R}e2{G}: Retrieve, Rerank, Generate",
author = "Glass, Michael and
Rossiello, Gaetano and
Chowdhury, Md Faisal Mahbub and
Naik, Ankita and
Cai, Pengshan and
Gliozzo, Alfio",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.194",
doi = "10.18653/v1/2022.naacl-main.194",
pages = "2701--2715",
abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.",
}
```
## Model Description
The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf):
> As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.
- **Developed by:** IBM
- **Shared by [Optional]:** IBM
- **Model type:** Query/Passage Reranker
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Parent Model:** [dpr-question_encoder-multiset-base](https://huggingface.co/facebook/dpr-question_encoder-multiset-base)
- **Resources for more information:**
- [GitHub Repo](https://github.com/IBM/kgi-slot-filling)
- [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf)
# Uses
## Direct Use
This model can be used for the task of encoding a question to a vector to be used as a query into an Approximate Nearest Neighbors index. It must be used in combination with a context encoder that encodes passages to a vector and indexes them.
# Citation
**BibTeX:**
```bibtex
@inproceedings{glass-etal-2022-re2g,
title = "{R}e2{G}: Retrieve, Rerank, Generate",
author = "Glass, Michael and
Rossiello, Gaetano and
Chowdhury, Md Faisal Mahbub and
Naik, Ankita and
Cai, Pengshan and
Gliozzo, Alfio",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.194",
doi = "10.18653/v1/2022.naacl-main.194",
pages = "2701--2715",
abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.",
}
```
| null |
Non_BioNLP
|
# Model Card for TriviaQA Question Encoder in Re2G
# Model Details
> The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output.
<img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%">
## Training, Evaluation and Inference
The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g).
## Usage
The best way to use the model is by adapting the [dpr_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/dpr/dpr_apply.py)
## Citation
```
@inproceedings{glass-etal-2022-re2g,
title = "{R}e2{G}: Retrieve, Rerank, Generate",
author = "Glass, Michael and
Rossiello, Gaetano and
Chowdhury, Md Faisal Mahbub and
Naik, Ankita and
Cai, Pengshan and
Gliozzo, Alfio",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.194",
doi = "10.18653/v1/2022.naacl-main.194",
pages = "2701--2715",
abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.",
}
```
## Model Description
The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf):
> As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.
- **Developed by:** IBM
- **Shared by [Optional]:** IBM
- **Model type:** Query/Passage Reranker
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Parent Model:** [dpr-question_encoder-multiset-base](https://huggingface.co/facebook/dpr-question_encoder-multiset-base)
- **Resources for more information:**
- [GitHub Repo](https://github.com/IBM/kgi-slot-filling)
- [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf)
# Uses
## Direct Use
This model can be used for the task of encoding a question to a vector to be used as a query into an Approximate Nearest Neighbors index. It must be used in combination with a context encoder that encodes passages to a vector and indexes them.
# Citation
**BibTeX:**
```bibtex
@inproceedings{glass-etal-2022-re2g,
title = "{R}e2{G}: Retrieve, Rerank, Generate",
author = "Glass, Michael and
Rossiello, Gaetano and
Chowdhury, Md Faisal Mahbub and
Naik, Ankita and
Cai, Pengshan and
Gliozzo, Alfio",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.194",
doi = "10.18653/v1/2022.naacl-main.194",
pages = "2701--2715",
abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.",
}
```
|
{"license": "apache-2.0", "tags": ["information retrieval", "reranking"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,341 |
parvinroshni/remote-therapist
|
parvinroshni
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-3",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"arxiv:2405.16406",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-20T09:26:16Z |
2025-02-20T11:45:58+00:00
| 299 | 0 |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## How to use
This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "meta-llama/Llama-3.2-1B-Instruct"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
### Use with `llama`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct
```
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |
\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
**Data Freshness:** The pretraining data has a cutoff of December 2023\.
## Quantization
### Quantization Scheme
We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:
- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation.
- Similar to classification layer, an 8-bit per channel quantization is used for embedding layer.
### Quantization-Aware Training and LoRA
The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).
### SpinQuant
[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length.
## Benchmarks \- English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |
\*\*for comparison purposes only. Model not released.
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |
\*\*for comparison purposes only. Model not released.
## Inference time
In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device.
| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |
(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64
*Footnote:*
- *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.*
- *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better*
- *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch*
- *RSS size \- Memory usage in resident set size (RSS)*
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
| null |
Non_BioNLP
|
## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## How to use
This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "meta-llama/Llama-3.2-1B-Instruct"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
### Use with `llama`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct
```
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |
\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
**Data Freshness:** The pretraining data has a cutoff of December 2023\.
## Quantization
### Quantization Scheme
We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:
- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation.
- Similar to classification layer, an 8-bit per channel quantization is used for embedding layer.
### Quantization-Aware Training and LoRA
The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).
### SpinQuant
[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length.
## Benchmarks \- English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |
\*\*for comparison purposes only. Model not released.
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |
\*\*for comparison purposes only. Model not released.
## Inference time
In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device.
| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |
(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64
*Footnote:*
- *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.*
- *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better*
- *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch*
- *RSS size \- Memory usage in resident set size (RSS)*
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
{"language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "library_name": "transformers", "license": "llama3.2", "pipeline_tag": "text-generation", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "extra_gated_prompt": "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\n“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\n“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. \nb. Redistribution and Use. \ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. \niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate the law or others’ rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law\n 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta \n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:\n 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled substances\n 11. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:\n 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 16. Generating, promoting, or further distributing spam\n 17. Impersonating another individual without consent, authorization, or legal right\n 18. Representing that the use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement \n4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.\n\nPlease report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "Job title": {"type": "select", "options": ["Student", "Research Graduate", "AI researcher", "AI developer/engineer", "Reporter", "Other"]}, "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
|
task
|
[
"SUMMARIZATION"
] | 45,342 |
TransferGraph/connectivity_bert_ft_qqp-25-finetuned-lora-tweet_eval_emotion
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:connectivity/bert_ft_qqp-25",
"base_model:adapter:connectivity/bert_ft_qqp-25",
"model-index",
"region:us"
] | 2024-02-29T12:50:20Z |
2024-02-29T12:50:22+00:00
| 0 | 0 |
---
base_model: connectivity/bert_ft_qqp-25
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: connectivity_bert_ft_qqp-25-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6550802139037433
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# connectivity_bert_ft_qqp-25-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [connectivity/bert_ft_qqp-25](https://huggingface.co/connectivity/bert_ft_qqp-25) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4118 | None | 0 |
| 0.4599 | 1.2572 | 0 |
| 0.6043 | 1.0925 | 1 |
| 0.6524 | 0.9469 | 2 |
| 0.6551 | 0.8938 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# connectivity_bert_ft_qqp-25-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [connectivity/bert_ft_qqp-25](https://huggingface.co/connectivity/bert_ft_qqp-25) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4118 | None | 0 |
| 0.4599 | 1.2572 | 0 |
| 0.6043 | 1.0925 | 1 |
| 0.6524 | 0.9469 | 2 |
| 0.6551 | 0.8938 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "connectivity/bert_ft_qqp-25", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "connectivity_bert_ft_qqp-25-finetuned-lora-tweet_eval_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "validation", "args": "emotion"}, "metrics": [{"type": "accuracy", "value": 0.6550802139037433, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,343 |
Helsinki-NLP/opus-mt-fi-wls
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fi",
"wls",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:35:51+00:00
| 275 | 0 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-fi-wls
* source languages: fi
* target languages: wls
* OPUS readme: [fi-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-wls/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.wls | 24.7 | 0.466 |
| null |
Non_BioNLP
|
### opus-mt-fi-wls
* source languages: fi
* target languages: wls
* OPUS readme: [fi-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-wls/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.wls | 24.7 | 0.466 |
|
{"license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 45,345 |
Decius/botpress_Vaganet_new_model
|
Decius
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"model-index",
"region:us"
] | 2024-10-03T14:03:31Z |
2024-10-03T14:03:52+00:00
| 0 | 0 |
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Quelles sont les étapes pour transférer le générateur?
- text: Pouvez-vous fournir l'historique d'utilisation du générateur?
- text: L'ordinateur portable est-il libre à l'instant?
- text: Pouvez-vous fournir une mise à jour sur le statut de sécurité du chariot élévateur?
- text: What upgrades have been applied to the printer?
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8897058823529411
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 34 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| request_asset_utilization_report | <ul><li>'How can I request a utilization report for the projector?'</li><li>"Comment puis-je demander un rapport d'utilisation pour le projecteur?"</li><li>'How do I request a usage report for the printer?'</li></ul> |
| request_asset_transfer | <ul><li>'What’s the procedure for transferring the forklift?'</li><li>'Comment initier une demande de transfert pour la voiture?'</li><li>'What are the steps to transfer the generator?'</li></ul> |
| check_asset_functionality | <ul><li>'Can you check if the laptop is operating correctly?'</li><li>'Pouvez-vous vérifier la fonctionnalité du chariot élévateur?'</li><li>"Quelle est la performance de l'imprimante?"</li></ul> |
| request_asset_status | <ul><li>"Pouvez-vous fournir le rapport de statut de l'imprimante?"</li><li>'Can you provide the status report for the printer?'</li><li>'Can you check the current status of the server?'</li></ul> |
| request_asset_repair | <ul><li>'What’s the procedure for getting the server repaired?'</li><li>'How can I initiate a repair for the laptop?'</li><li>"Pouvez-vous m'aider avec la demande de réparation pour le générateur?"</li></ul> |
| check_asset_certification | <ul><li>"Quel est le statut actuel de la certification de l'imprimante?"</li><li>'Can you verify the certification status of the forklift?'</li><li>'Pouvez-vous vérifier le statut de certification du chariot élévateur?'</li></ul> |
| check_asset_security_status | <ul><li>"Pouvez-vous vérifier le statut de sécurité de l'imprimante?"</li><li>"Comment la sécurité de l'ordinateur portable est-elle gérée?"</li><li>'Can you check the security status of the printer?'</li></ul> |
| check_asset_upgrades | <ul><li>'Can you check if the laptop has been upgraded recently?'</li><li>"Quelles mises à jour ont été appliquées à l'imprimante?"</li><li>'À quel point le projecteur est-il à jour avec ses mises à jour?'</li></ul> |
| request_asset_inventory_update | <ul><li>'Has the inventory for the air conditioner been updated recently?'</li><li>'What’s the latest inventory status for the generator?'</li><li>"Comment puis-je demander une mise à jour pour l'inventaire du serveur?"</li></ul> |
| check_asset_location_update | <ul><li>'Pouvez-vous fournir les derniers détails de localisation pour le générateur?'</li><li>"Quelle est la localisation actuelle de l'avion?"</li><li>'Pouvez-vous vérifier si la localisation du serveur a changé?'</li></ul> |
| report_asset_issue | <ul><li>'How can I report a problem with the generator?'</li><li>"Pouvez-vous m'aider à signaler un problème avec l'avion?"</li><li>'Comment puis-je signaler un problème avec le générateur?'</li></ul> |
| request_work_order | <ul><li>'Comment puis-je initier un ordre de travail pour le climatiseur?'</li><li>'What’s the process for creating a work order for the forklift?'</li><li>'How do I request a work order for the generator?'</li></ul> |
| request_maintenance_schedule | <ul><li>'Quand est prévu le prochain entretien pour la voiture?'</li><li>'When is the next maintenance for the car scheduled?'</li><li>"Pouvez-vous me donner le calendrier de maintenance de l'avion?"</li></ul> |
| greet-hi | <ul><li>'Bonsoir'</li><li>'Salut'</li><li>'Hey'</li></ul> |
| check_asset_maintenance_history | <ul><li>"Pouvez-vous me donner un résumé de l'historique de maintenance du projecteur?"</li><li>'What’s the maintenance history like for the forklift?'</li><li>'Quel est le dossier de maintenance du serveur?'</li></ul> |
| request_asset_depreciation_info | <ul><li>'Can I get a report on the projector’s depreciation?'</li><li>"Quel est le statut de dépréciation de l'ordinateur?"</li><li>"Quelle est la dépréciation actuelle de l'avion?"</li></ul> |
| request_asset_funding_status | <ul><li>"Comment le financement de l'ordinateur portable est-il géré?"</li><li>'Le chariot élévateur est-il financé?'</li><li>'Can you give me details on the funding status for the server?'</li></ul> |
| check_asset_usage_history | <ul><li>"Pouvez-vous me donner un rapport sur l'historique d'utilisation du serveur?"</li><li>'What’s the historical usage data for the forklift?'</li><li>"Quelles sont les données historiques d'utilisation du chariot élévateur?"</li></ul> |
| request_asset_inventory | <ul><li>'What’s the current inventory status of the car?'</li><li>'What’s the inventory record for the airplane?'</li><li>"Pouvez-vous me donner le rapport d'inventaire pour le générateur?"</li></ul> |
| request_asset_location | <ul><li>"Où puis-je trouver l'avion?"</li><li>'Pouvez-vous me dire où se trouve la voiture?'</li><li>'Where is the projector situated now?'</li></ul> |
| request_asset_upgrades | <ul><li>'How do I apply for an upgrade for the forklift?'</li><li>"Quelle est la procédure pour mettre à niveau l'ordinateur portable?"</li><li>'Quelles étapes sont nécessaires pour demander une mise à niveau pour le générateur?'</li></ul> |
| request_asset_maintenance_history | <ul><li>'What’s the historical maintenance data for the printer?'</li><li>"Comment puis-je accéder à l'historique de maintenance du serveur?"</li><li>'Pouvez-vous me donner un résumé de la maintenance du chariot élévateur?'</li></ul> |
| check_asset_warranty | <ul><li>'How long is the generator covered by warranty?'</li><li>"Pouvez-vous vérifier le statut de garantie de l'ordinateur portable?"</li><li>'Quelle est la période de garantie actuelle du chariot élévateur?'</li></ul> |
| greet-who_are_you | <ul><li>"Aidez-moi, s'il vous plaît"</li><li>"comment pouvez vous m'aider"</li><li>'how can you help me'</li></ul> |
| request_asset_replacement_info | <ul><li>'What are the steps involved in replacing the forklift?'</li><li>"Comment puis-je obtenir des détails sur le remplacement de l'imprimante?"</li><li>'Pouvez-vous me guider à travers le processus de remplacement du climatiseur?'</li></ul> |
| request_asset_replacement | <ul><li>"Quelle est la procédure pour remplacer l'ordinateur portable?"</li><li>'Can you help me with replacing the printer?'</li><li>'Je dois remplacer le chariot élévateur; quelles étapes dois-je suivre?'</li></ul> |
| check_asset_availability | <ul><li>'Puis-je utiliser la voiture en ce moment?'</li><li>'Pouvez-vous vérifier la disponibilité du chariot élévateur?'</li><li>'Is the laptop free for use at the moment?'</li></ul> |
| check_asset_history | <ul><li>'What’s the historical usage data for the projector?'</li><li>"Quelles sont les données d'utilisation historique du projecteur?"</li><li>"Comment puis-je accéder à l'historique du serveur?"</li></ul> |
| check_asset_compatibility | <ul><li>"Pouvez-vous vérifier si le serveur s'intègre bien avec le système?"</li><li>'Pouvez-vous fournir des détails sur la compatibilité du générateur avec le système?'</li><li>"Quel est le statut de compatibilité de l'imprimante?"</li></ul> |
| request_asset_efficiency | <ul><li>"Quel est le niveau d'efficacité actuel du serveur?"</li><li>'What’s the current efficiency level of the server?'</li><li>"Quelle est l'efficacité du générateur dans ses opérations?"</li></ul> |
| request_asset_information | <ul><li>'Pouvez-vous me donner un résumé des détails du générateur?'</li><li>"J'ai besoin d'informations sur l'ordinateur; pouvez-vous m'aider?"</li><li>'Can you tell me more about the tablet?'</li></ul> |
| request_asset_cost_analysis | <ul><li>"Combien coûte l'entretien de la voiture?"</li><li>'What are the expenses associated with the laptop?'</li><li>"Pouvez-vous me donner un rapport détaillé des coûts pour l'imprimante?"</li></ul> |
| check_asset_support_contract | <ul><li>"Pouvez-vous vérifier le statut du contrat de support pour l'imprimante?"</li><li>'Can you verify the support contract status for the printer?'</li><li>"Quel est le statut du contrat de support de l'ordinateur portable?"</li></ul> |
| greet-good_bye | <ul><li>'stop'</li><li>'exit'</li><li>'A plus tard'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8897 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Decius/botpress_Vaganet_new_model")
# Run inference
preds = model("L'ordinateur portable est-il libre à l'instant?")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 8.3494 | 13 |
| Label | Training Sample Count |
|:----------------------------------|:----------------------|
| greet-hi | 3 |
| greet-who_are_you | 5 |
| greet-good_bye | 3 |
| request_work_order | 5 |
| request_asset_utilization_report | 5 |
| request_asset_information | 5 |
| request_asset_location | 5 |
| check_asset_availability | 5 |
| report_asset_issue | 5 |
| request_maintenance_schedule | 5 |
| request_asset_depreciation_info | 5 |
| request_asset_status | 5 |
| request_asset_replacement | 5 |
| request_asset_inventory | 5 |
| request_asset_upgrades | 5 |
| check_asset_warranty | 5 |
| request_asset_transfer | 5 |
| check_asset_history | 5 |
| request_asset_repair | 5 |
| check_asset_functionality | 5 |
| request_asset_maintenance_history | 5 |
| request_asset_replacement_info | 5 |
| request_asset_cost_analysis | 5 |
| check_asset_compatibility | 5 |
| request_asset_efficiency | 5 |
| check_asset_certification | 5 |
| check_asset_location_update | 5 |
| request_asset_inventory_update | 5 |
| check_asset_usage_history | 5 |
| check_asset_upgrades | 5 |
| check_asset_security_status | 5 |
| check_asset_support_contract | 5 |
| request_asset_funding_status | 5 |
| check_asset_maintenance_history | 5 |
### Training Hyperparameters
- batch_size: (3, 3)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:--------:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.0936 | - |
| 0.0011 | 10 | 0.274 | - |
| 0.0022 | 20 | 0.1268 | - |
| 0.0034 | 30 | 0.1438 | - |
| 0.0045 | 40 | 0.0725 | - |
| 0.0056 | 50 | 0.0259 | - |
| 0.0067 | 60 | 0.0466 | - |
| 0.0079 | 70 | 0.094 | - |
| 0.0090 | 80 | 0.1856 | - |
| 0.0101 | 90 | 0.0431 | - |
| 0.0112 | 100 | 0.1027 | - |
| 0.0123 | 110 | 0.182 | - |
| 0.0135 | 120 | 0.1333 | - |
| 0.0146 | 130 | 0.0891 | - |
| 0.0157 | 140 | 0.1653 | - |
| 0.0168 | 150 | 0.1792 | - |
| 0.0180 | 160 | 0.0491 | - |
| 0.0191 | 170 | 0.1128 | - |
| 0.0202 | 180 | 0.2192 | - |
| 0.0213 | 190 | 0.1811 | - |
| 0.0224 | 200 | 0.0586 | - |
| 0.0236 | 210 | 0.0236 | - |
| 0.0247 | 220 | 0.0593 | - |
| 0.0258 | 230 | 0.1262 | - |
| 0.0269 | 240 | 0.1643 | - |
| 0.0280 | 250 | 0.0971 | - |
| 0.0292 | 260 | 0.03 | - |
| 0.0303 | 270 | 0.2052 | - |
| 0.0314 | 280 | 0.1401 | - |
| 0.0325 | 290 | 0.0235 | - |
| 0.0337 | 300 | 0.1132 | - |
| 0.0348 | 310 | 0.2753 | - |
| 0.0359 | 320 | 0.0491 | - |
| 0.0370 | 330 | 0.1902 | - |
| 0.0381 | 340 | 0.1143 | - |
| 0.0393 | 350 | 0.0437 | - |
| 0.0404 | 360 | 0.1819 | - |
| 0.0415 | 370 | 0.0539 | - |
| 0.0426 | 380 | 0.0256 | - |
| 0.0438 | 390 | 0.1176 | - |
| 0.0449 | 400 | 0.0329 | - |
| 0.0460 | 410 | 0.0766 | - |
| 0.0471 | 420 | 0.0577 | - |
| 0.0482 | 430 | 0.1098 | - |
| 0.0494 | 440 | 0.0153 | - |
| 0.0505 | 450 | 0.0418 | - |
| 0.0516 | 460 | 0.0127 | - |
| 0.0527 | 470 | 0.0593 | - |
| 0.0539 | 480 | 0.0538 | - |
| 0.0550 | 490 | 0.0405 | - |
| 0.0561 | 500 | 0.0062 | - |
| 0.0572 | 510 | 0.0994 | - |
| 0.0583 | 520 | 0.0926 | - |
| 0.0595 | 530 | 0.0923 | - |
| 0.0606 | 540 | 0.0715 | - |
| 0.0617 | 550 | 0.0653 | - |
| 0.0628 | 560 | 0.1877 | - |
| 0.0640 | 570 | 0.0069 | - |
| 0.0651 | 580 | 0.0085 | - |
| 0.0662 | 590 | 0.0714 | - |
| 0.0673 | 600 | 0.0845 | - |
| 0.0684 | 610 | 0.019 | - |
| 0.0696 | 620 | 0.0186 | - |
| 0.0707 | 630 | 0.1454 | - |
| 0.0718 | 640 | 0.0121 | - |
| 0.0729 | 650 | 0.1109 | - |
| 0.0740 | 660 | 0.0108 | - |
| 0.0752 | 670 | 0.0224 | - |
| 0.0763 | 680 | 0.0024 | - |
| 0.0774 | 690 | 0.0313 | - |
| 0.0785 | 700 | 0.0335 | - |
| 0.0797 | 710 | 0.0097 | - |
| 0.0808 | 720 | 0.1469 | - |
| 0.0819 | 730 | 0.0245 | - |
| 0.0830 | 740 | 0.0264 | - |
| 0.0841 | 750 | 0.0121 | - |
| 0.0853 | 760 | 0.1861 | - |
| 0.0864 | 770 | 0.0648 | - |
| 0.0875 | 780 | 0.005 | - |
| 0.0886 | 790 | 0.0033 | - |
| 0.0898 | 800 | 0.0543 | - |
| 0.0909 | 810 | 0.0129 | - |
| 0.0920 | 820 | 0.0049 | - |
| 0.0931 | 830 | 0.0389 | - |
| 0.0942 | 840 | 0.0271 | - |
| 0.0954 | 850 | 0.0298 | - |
| 0.0965 | 860 | 0.0434 | - |
| 0.0976 | 870 | 0.0359 | - |
| 0.0987 | 880 | 0.0019 | - |
| 0.0999 | 890 | 0.034 | - |
| 0.1010 | 900 | 0.0171 | - |
| 0.1021 | 910 | 0.0033 | - |
| 0.1032 | 920 | 0.021 | - |
| 0.1043 | 930 | 0.0479 | - |
| 0.1055 | 940 | 0.0147 | - |
| 0.1066 | 950 | 0.0101 | - |
| 0.1077 | 960 | 0.0315 | - |
| 0.1088 | 970 | 0.0064 | - |
| 0.1100 | 980 | 0.0146 | - |
| 0.1111 | 990 | 0.0062 | - |
| 0.1122 | 1000 | 0.0063 | - |
| 0.1133 | 1010 | 0.0171 | - |
| 0.1144 | 1020 | 0.0474 | - |
| 0.1156 | 1030 | 0.003 | - |
| 0.1167 | 1040 | 0.0059 | - |
| 0.1178 | 1050 | 0.0063 | - |
| 0.1189 | 1060 | 0.0107 | - |
| 0.1200 | 1070 | 0.0344 | - |
| 0.1212 | 1080 | 0.0181 | - |
| 0.1223 | 1090 | 0.0354 | - |
| 0.1234 | 1100 | 0.0164 | - |
| 0.1245 | 1110 | 0.0181 | - |
| 0.1257 | 1120 | 0.0025 | - |
| 0.1268 | 1130 | 0.0087 | - |
| 0.1279 | 1140 | 0.0132 | - |
| 0.1290 | 1150 | 0.0047 | - |
| 0.1301 | 1160 | 0.0058 | - |
| 0.1313 | 1170 | 0.0069 | - |
| 0.1324 | 1180 | 0.0004 | - |
| 0.1335 | 1190 | 0.002 | - |
| 0.1346 | 1200 | 0.001 | - |
| 0.1358 | 1210 | 0.0017 | - |
| 0.1369 | 1220 | 0.0021 | - |
| 0.1380 | 1230 | 0.0283 | - |
| 0.1391 | 1240 | 0.014 | - |
| 0.1402 | 1250 | 0.0029 | - |
| 0.1414 | 1260 | 0.0044 | - |
| 0.1425 | 1270 | 0.0084 | - |
| 0.1436 | 1280 | 0.0186 | - |
| 0.1447 | 1290 | 0.0021 | - |
| 0.1459 | 1300 | 0.0022 | - |
| 0.1470 | 1310 | 0.0403 | - |
| 0.1481 | 1320 | 0.0385 | - |
| 0.1492 | 1330 | 0.0007 | - |
| 0.1503 | 1340 | 0.004 | - |
| 0.1515 | 1350 | 0.0058 | - |
| 0.1526 | 1360 | 0.0027 | - |
| 0.1537 | 1370 | 0.0037 | - |
| 0.1548 | 1380 | 0.0024 | - |
| 0.1560 | 1390 | 0.0008 | - |
| 0.1571 | 1400 | 0.0654 | - |
| 0.1582 | 1410 | 0.022 | - |
| 0.1593 | 1420 | 0.1093 | - |
| 0.1604 | 1430 | 0.0074 | - |
| 0.1616 | 1440 | 0.0482 | - |
| 0.1627 | 1450 | 0.0065 | - |
| 0.1638 | 1460 | 0.0061 | - |
| 0.1649 | 1470 | 0.0085 | - |
| 0.1660 | 1480 | 0.2072 | - |
| 0.1672 | 1490 | 0.0018 | - |
| 0.1683 | 1500 | 0.0039 | - |
| 0.1694 | 1510 | 0.0044 | - |
| 0.1705 | 1520 | 0.0008 | - |
| 0.1717 | 1530 | 0.0033 | - |
| 0.1728 | 1540 | 0.0025 | - |
| 0.1739 | 1550 | 0.0015 | - |
| 0.1750 | 1560 | 0.0014 | - |
| 0.1761 | 1570 | 0.0009 | - |
| 0.1773 | 1580 | 0.0181 | - |
| 0.1784 | 1590 | 0.0161 | - |
| 0.1795 | 1600 | 0.0689 | - |
| 0.1806 | 1610 | 0.0035 | - |
| 0.1818 | 1620 | 0.008 | - |
| 0.1829 | 1630 | 0.0043 | - |
| 0.1840 | 1640 | 0.0114 | - |
| 0.1851 | 1650 | 0.0804 | - |
| 0.1862 | 1660 | 0.0073 | - |
| 0.1874 | 1670 | 0.0043 | - |
| 0.1885 | 1680 | 0.0038 | - |
| 0.1896 | 1690 | 0.0041 | - |
| 0.1907 | 1700 | 0.0027 | - |
| 0.1919 | 1710 | 0.0014 | - |
| 0.1930 | 1720 | 0.001 | - |
| 0.1941 | 1730 | 0.0013 | - |
| 0.1952 | 1740 | 0.0137 | - |
| 0.1963 | 1750 | 0.0006 | - |
| 0.1975 | 1760 | 0.0025 | - |
| 0.1986 | 1770 | 0.0098 | - |
| 0.1997 | 1780 | 0.0079 | - |
| 0.2008 | 1790 | 0.0031 | - |
| 0.2020 | 1800 | 0.0029 | - |
| 0.2031 | 1810 | 0.0117 | - |
| 0.2042 | 1820 | 0.0012 | - |
| 0.2053 | 1830 | 0.0108 | - |
| 0.2064 | 1840 | 0.0882 | - |
| 0.2076 | 1850 | 0.0073 | - |
| 0.2087 | 1860 | 0.0116 | - |
| 0.2098 | 1870 | 0.1268 | - |
| 0.2109 | 1880 | 0.0107 | - |
| 0.2120 | 1890 | 0.0047 | - |
| 0.2132 | 1900 | 0.0353 | - |
| 0.2143 | 1910 | 0.0018 | - |
| 0.2154 | 1920 | 0.2147 | - |
| 0.2165 | 1930 | 0.0041 | - |
| 0.2177 | 1940 | 0.0202 | - |
| 0.2188 | 1950 | 0.006 | - |
| 0.2199 | 1960 | 0.0007 | - |
| 0.2210 | 1970 | 0.0013 | - |
| 0.2221 | 1980 | 0.0026 | - |
| 0.2233 | 1990 | 0.0522 | - |
| 0.2244 | 2000 | 0.0586 | - |
| 0.2255 | 2010 | 0.0045 | - |
| 0.2266 | 2020 | 0.003 | - |
| 0.2278 | 2030 | 0.0008 | - |
| 0.2289 | 2040 | 0.0029 | - |
| 0.2300 | 2050 | 0.0024 | - |
| 0.2311 | 2060 | 0.0019 | - |
| 0.2322 | 2070 | 0.0015 | - |
| 0.2334 | 2080 | 0.0012 | - |
| 0.2345 | 2090 | 0.0012 | - |
| 0.2356 | 2100 | 0.0002 | - |
| 0.2367 | 2110 | 0.0033 | - |
| 0.2379 | 2120 | 0.001 | - |
| 0.2390 | 2130 | 0.0015 | - |
| 0.2401 | 2140 | 0.0094 | - |
| 0.2412 | 2150 | 0.0025 | - |
| 0.2423 | 2160 | 0.0021 | - |
| 0.2435 | 2170 | 0.0015 | - |
| 0.2446 | 2180 | 0.0063 | - |
| 0.2457 | 2190 | 0.0051 | - |
| 0.2468 | 2200 | 0.0038 | - |
| 0.2480 | 2210 | 0.0036 | - |
| 0.2491 | 2220 | 0.0007 | - |
| 0.2502 | 2230 | 0.0038 | - |
| 0.2513 | 2240 | 0.0545 | - |
| 0.2524 | 2250 | 0.0027 | - |
| 0.2536 | 2260 | 0.0032 | - |
| 0.2547 | 2270 | 0.0017 | - |
| 0.2558 | 2280 | 0.0073 | - |
| 0.2569 | 2290 | 0.0036 | - |
| 0.2581 | 2300 | 0.001 | - |
| 0.2592 | 2310 | 0.0005 | - |
| 0.2603 | 2320 | 0.0024 | - |
| 0.2614 | 2330 | 0.0019 | - |
| 0.2625 | 2340 | 0.0113 | - |
| 0.2637 | 2350 | 0.1202 | - |
| 0.2648 | 2360 | 0.2482 | - |
| 0.2659 | 2370 | 0.0047 | - |
| 0.2670 | 2380 | 0.0056 | - |
| 0.2681 | 2390 | 0.0037 | - |
| 0.2693 | 2400 | 0.0031 | - |
| 0.2704 | 2410 | 0.3122 | - |
| 0.2715 | 2420 | 0.0371 | - |
| 0.2726 | 2430 | 0.0017 | - |
| 0.2738 | 2440 | 0.0109 | - |
| 0.2749 | 2450 | 0.0011 | - |
| 0.2760 | 2460 | 0.0017 | - |
| 0.2771 | 2470 | 0.0033 | - |
| 0.2782 | 2480 | 0.007 | - |
| 0.2794 | 2490 | 0.0031 | - |
| 0.2805 | 2500 | 0.0076 | - |
| 0.2816 | 2510 | 0.0008 | - |
| 0.2827 | 2520 | 0.0091 | - |
| 0.2839 | 2530 | 0.0023 | - |
| 0.2850 | 2540 | 0.0057 | - |
| 0.2861 | 2550 | 0.0049 | - |
| 0.2872 | 2560 | 0.002 | - |
| 0.2883 | 2570 | 0.2556 | - |
| 0.2895 | 2580 | 0.0065 | - |
| 0.2906 | 2590 | 0.0073 | - |
| 0.2917 | 2600 | 0.0031 | - |
| 0.2928 | 2610 | 0.0047 | - |
| 0.2940 | 2620 | 0.0009 | - |
| 0.2951 | 2630 | 0.005 | - |
| 0.2962 | 2640 | 0.2062 | - |
| 0.2973 | 2650 | 0.0102 | - |
| 0.2984 | 2660 | 0.0006 | - |
| 0.2996 | 2670 | 0.0028 | - |
| 0.3007 | 2680 | 0.0015 | - |
| 0.3018 | 2690 | 0.0054 | - |
| 0.3029 | 2700 | 0.0038 | - |
| 0.3041 | 2710 | 0.0014 | - |
| 0.3052 | 2720 | 0.0021 | - |
| 0.3063 | 2730 | 0.0041 | - |
| 0.3074 | 2740 | 0.0058 | - |
| 0.3085 | 2750 | 0.0021 | - |
| 0.3097 | 2760 | 0.3074 | - |
| 0.3108 | 2770 | 0.2003 | - |
| 0.3119 | 2780 | 0.0015 | - |
| 0.3130 | 2790 | 0.001 | - |
| 0.3141 | 2800 | 0.0026 | - |
| 0.3153 | 2810 | 0.0014 | - |
| 0.3164 | 2820 | 0.0071 | - |
| 0.3175 | 2830 | 0.0016 | - |
| 0.3186 | 2840 | 0.004 | - |
| 0.3198 | 2850 | 0.0083 | - |
| 0.3209 | 2860 | 0.0003 | - |
| 0.3220 | 2870 | 0.0023 | - |
| 0.3231 | 2880 | 0.0035 | - |
| 0.3242 | 2890 | 0.0005 | - |
| 0.3254 | 2900 | 0.0046 | - |
| 0.3265 | 2910 | 0.001 | - |
| 0.3276 | 2920 | 0.0022 | - |
| 0.3287 | 2930 | 0.0013 | - |
| 0.3299 | 2940 | 0.0017 | - |
| 0.3310 | 2950 | 0.0013 | - |
| 0.3321 | 2960 | 0.0059 | - |
| 0.3332 | 2970 | 0.0015 | - |
| 0.3343 | 2980 | 0.0027 | - |
| 0.3355 | 2990 | 0.0054 | - |
| 0.3366 | 3000 | 0.0026 | - |
| 0.3377 | 3010 | 0.0045 | - |
| 0.3388 | 3020 | 0.0061 | - |
| 0.3400 | 3030 | 0.0147 | - |
| 0.3411 | 3040 | 0.0035 | - |
| 0.3422 | 3050 | 0.002 | - |
| 0.3433 | 3060 | 0.0012 | - |
| 0.3444 | 3070 | 0.0058 | - |
| 0.3456 | 3080 | 0.0014 | - |
| 0.3467 | 3090 | 0.0001 | - |
| 0.3478 | 3100 | 0.3175 | - |
| 0.3489 | 3110 | 0.0022 | - |
| 0.3501 | 3120 | 0.0001 | - |
| 0.3512 | 3130 | 0.0013 | - |
| 0.3523 | 3140 | 0.0016 | - |
| 0.3534 | 3150 | 0.0005 | - |
| 0.3545 | 3160 | 0.1234 | - |
| 0.3557 | 3170 | 0.0005 | - |
| 0.3568 | 3180 | 0.0002 | - |
| 0.3579 | 3190 | 0.0083 | - |
| 0.3590 | 3200 | 0.0011 | - |
| 0.3601 | 3210 | 0.0009 | - |
| 0.3613 | 3220 | 0.0007 | - |
| 0.3624 | 3230 | 0.0006 | - |
| 0.3635 | 3240 | 0.0016 | - |
| 0.3646 | 3250 | 0.0009 | - |
| 0.3658 | 3260 | 0.0045 | - |
| 0.3669 | 3270 | 0.0136 | - |
| 0.3680 | 3280 | 0.0055 | - |
| 0.3691 | 3290 | 0.0004 | - |
| 0.3702 | 3300 | 0.0014 | - |
| 0.3714 | 3310 | 0.001 | - |
| 0.3725 | 3320 | 0.0007 | - |
| 0.3736 | 3330 | 0.0002 | - |
| 0.3747 | 3340 | 0.0009 | - |
| 0.3759 | 3350 | 0.0003 | - |
| 0.3770 | 3360 | 0.0006 | - |
| 0.3781 | 3370 | 0.0006 | - |
| 0.3792 | 3380 | 0.0009 | - |
| 0.3803 | 3390 | 0.0008 | - |
| 0.3815 | 3400 | 0.0028 | - |
| 0.3826 | 3410 | 0.0039 | - |
| 0.3837 | 3420 | 0.0017 | - |
| 0.3848 | 3430 | 0.001 | - |
| 0.3860 | 3440 | 0.0012 | - |
| 0.3871 | 3450 | 0.0212 | - |
| 0.3882 | 3460 | 0.0017 | - |
| 0.3893 | 3470 | 0.0018 | - |
| 0.3904 | 3480 | 0.0007 | - |
| 0.3916 | 3490 | 0.0012 | - |
| 0.3927 | 3500 | 0.0002 | - |
| 0.3938 | 3510 | 0.0836 | - |
| 0.3949 | 3520 | 0.0054 | - |
| 0.3961 | 3530 | 0.0016 | - |
| 0.3972 | 3540 | 0.0003 | - |
| 0.3983 | 3550 | 0.0038 | - |
| 0.3994 | 3560 | 0.0024 | - |
| 0.4005 | 3570 | 0.0057 | - |
| 0.4017 | 3580 | 0.0804 | - |
| 0.4028 | 3590 | 0.0006 | - |
| 0.4039 | 3600 | 0.0008 | - |
| 0.4050 | 3610 | 0.0007 | - |
| 0.4061 | 3620 | 0.0002 | - |
| 0.4073 | 3630 | 0.0001 | - |
| 0.4084 | 3640 | 0.0032 | - |
| 0.4095 | 3650 | 0.0015 | - |
| 0.4106 | 3660 | 0.0002 | - |
| 0.4118 | 3670 | 0.0015 | - |
| 0.4129 | 3680 | 0.0014 | - |
| 0.4140 | 3690 | 0.0001 | - |
| 0.4151 | 3700 | 0.0033 | - |
| 0.4162 | 3710 | 0.0021 | - |
| 0.4174 | 3720 | 0.0009 | - |
| 0.4185 | 3730 | 0.0012 | - |
| 0.4196 | 3740 | 0.0022 | - |
| 0.4207 | 3750 | 0.0007 | - |
| 0.4219 | 3760 | 0.0034 | - |
| 0.4230 | 3770 | 0.0087 | - |
| 0.4241 | 3780 | 0.0032 | - |
| 0.4252 | 3790 | 0.0017 | - |
| 0.4263 | 3800 | 0.0042 | - |
| 0.4275 | 3810 | 0.0011 | - |
| 0.4286 | 3820 | 0.0034 | - |
| 0.4297 | 3830 | 0.0003 | - |
| 0.4308 | 3840 | 0.002 | - |
| 0.4320 | 3850 | 0.0055 | - |
| 0.4331 | 3860 | 0.0013 | - |
| 0.4342 | 3870 | 0.005 | - |
| 0.4353 | 3880 | 0.0011 | - |
| 0.4364 | 3890 | 0.0054 | - |
| 0.4376 | 3900 | 0.0013 | - |
| 0.4387 | 3910 | 0.0014 | - |
| 0.4398 | 3920 | 0.0006 | - |
| 0.4409 | 3930 | 0.0017 | - |
| 0.4421 | 3940 | 0.0015 | - |
| 0.4432 | 3950 | 0.0006 | - |
| 0.4443 | 3960 | 0.0029 | - |
| 0.4454 | 3970 | 0.0013 | - |
| 0.4465 | 3980 | 0.0021 | - |
| 0.4477 | 3990 | 0.001 | - |
| 0.4488 | 4000 | 0.0023 | - |
| 0.4499 | 4010 | 0.0016 | - |
| 0.4510 | 4020 | 0.0025 | - |
| 0.4521 | 4030 | 0.0008 | - |
| 0.4533 | 4040 | 0.0014 | - |
| 0.4544 | 4050 | 0.0006 | - |
| 0.4555 | 4060 | 0.0013 | - |
| 0.4566 | 4070 | 0.0002 | - |
| 0.4578 | 4080 | 0.0021 | - |
| 0.4589 | 4090 | 0.0309 | - |
| 0.4600 | 4100 | 0.0026 | - |
| 0.4611 | 4110 | 0.0011 | - |
| 0.4622 | 4120 | 0.0025 | - |
| 0.4634 | 4130 | 0.0021 | - |
| 0.4645 | 4140 | 0.003 | - |
| 0.4656 | 4150 | 0.0026 | - |
| 0.4667 | 4160 | 0.0001 | - |
| 0.4679 | 4170 | 0.0019 | - |
| 0.4690 | 4180 | 0.1177 | - |
| 0.4701 | 4190 | 0.0008 | - |
| 0.4712 | 4200 | 0.0016 | - |
| 0.4723 | 4210 | 0.0013 | - |
| 0.4735 | 4220 | 0.0007 | - |
| 0.4746 | 4230 | 0.0021 | - |
| 0.4757 | 4240 | 0.0004 | - |
| 0.4768 | 4250 | 0.3107 | - |
| 0.4780 | 4260 | 0.0014 | - |
| 0.4791 | 4270 | 0.0089 | - |
| 0.4802 | 4280 | 0.0165 | - |
| 0.4813 | 4290 | 0.0091 | - |
| 0.4824 | 4300 | 0.0007 | - |
| 0.4836 | 4310 | 0.0044 | - |
| 0.4847 | 4320 | 0.3151 | - |
| 0.4858 | 4330 | 0.0002 | - |
| 0.4869 | 4340 | 0.0058 | - |
| 0.4881 | 4350 | 0.0014 | - |
| 0.4892 | 4360 | 0.0657 | - |
| 0.4903 | 4370 | 0.0015 | - |
| 0.4914 | 4380 | 0.0015 | - |
| 0.4925 | 4390 | 0.0029 | - |
| 0.4937 | 4400 | 0.0046 | - |
| 0.4948 | 4410 | 0.2228 | - |
| 0.4959 | 4420 | 0.0011 | - |
| 0.4970 | 4430 | 0.0135 | - |
| 0.4981 | 4440 | 0.0017 | - |
| 0.4993 | 4450 | 0.0007 | - |
| 0.5004 | 4460 | 0.0022 | - |
| 0.5015 | 4470 | 0.001 | - |
| 0.5026 | 4480 | 0.0044 | - |
| 0.5038 | 4490 | 0.0002 | - |
| 0.5049 | 4500 | 0.0002 | - |
| 0.5060 | 4510 | 0.0013 | - |
| 0.5071 | 4520 | 0.0042 | - |
| 0.5082 | 4530 | 0.0007 | - |
| 0.5094 | 4540 | 0.0012 | - |
| 0.5105 | 4550 | 0.0009 | - |
| 0.5116 | 4560 | 0.001 | - |
| 0.5127 | 4570 | 0.0028 | - |
| 0.5139 | 4580 | 0.0016 | - |
| 0.5150 | 4590 | 0.0017 | - |
| 0.5161 | 4600 | 0.0029 | - |
| 0.5172 | 4610 | 0.0003 | - |
| 0.5183 | 4620 | 0.001 | - |
| 0.5195 | 4630 | 0.0008 | - |
| 0.5206 | 4640 | 0.0011 | - |
| 0.5217 | 4650 | 0.0003 | - |
| 0.5228 | 4660 | 0.0011 | - |
| 0.5240 | 4670 | 0.0001 | - |
| 0.5251 | 4680 | 0.0002 | - |
| 0.5262 | 4690 | 0.4625 | - |
| 0.5273 | 4700 | 0.0015 | - |
| 0.5284 | 4710 | 0.0003 | - |
| 0.5296 | 4720 | 0.0006 | - |
| 0.5307 | 4730 | 0.0 | - |
| 0.5318 | 4740 | 0.001 | - |
| 0.5329 | 4750 | 0.0003 | - |
| 0.5341 | 4760 | 0.0004 | - |
| 0.5352 | 4770 | 0.0031 | - |
| 0.5363 | 4780 | 0.3211 | - |
| 0.5374 | 4790 | 0.0009 | - |
| 0.5385 | 4800 | 0.0012 | - |
| 0.5397 | 4810 | 0.0003 | - |
| 0.5408 | 4820 | 0.0008 | - |
| 0.5419 | 4830 | 0.0017 | - |
| 0.5430 | 4840 | 0.0002 | - |
| 0.5441 | 4850 | 0.0004 | - |
| 0.5453 | 4860 | 0.0004 | - |
| 0.5464 | 4870 | 0.0039 | - |
| 0.5475 | 4880 | 0.0023 | - |
| 0.5486 | 4890 | 0.003 | - |
| 0.5498 | 4900 | 0.0004 | - |
| 0.5509 | 4910 | 0.0002 | - |
| 0.5520 | 4920 | 0.0019 | - |
| 0.5531 | 4930 | 0.0003 | - |
| 0.5542 | 4940 | 0.0006 | - |
| 0.5554 | 4950 | 0.0032 | - |
| 0.5565 | 4960 | 0.0005 | - |
| 0.5576 | 4970 | 0.0005 | - |
| 0.5587 | 4980 | 0.0002 | - |
| 0.5599 | 4990 | 0.0005 | - |
| 0.5610 | 5000 | 0.0004 | - |
| 0.5621 | 5010 | 0.0001 | - |
| 0.5632 | 5020 | 0.0021 | - |
| 0.5643 | 5030 | 0.0002 | - |
| 0.5655 | 5040 | 0.0006 | - |
| 0.5666 | 5050 | 0.0009 | - |
| 0.5677 | 5060 | 0.0016 | - |
| 0.5688 | 5070 | 0.0002 | - |
| 0.5700 | 5080 | 0.0002 | - |
| 0.5711 | 5090 | 0.0013 | - |
| 0.5722 | 5100 | 0.0202 | - |
| 0.5733 | 5110 | 0.0087 | - |
| 0.5744 | 5120 | 0.0004 | - |
| 0.5756 | 5130 | 0.0002 | - |
| 0.5767 | 5140 | 0.0002 | - |
| 0.5778 | 5150 | 0.0009 | - |
| 0.5789 | 5160 | 0.0011 | - |
| 0.5801 | 5170 | 0.0005 | - |
| 0.5812 | 5180 | 0.0015 | - |
| 0.5823 | 5190 | 0.0018 | - |
| 0.5834 | 5200 | 0.0013 | - |
| 0.5845 | 5210 | 0.1693 | - |
| 0.5857 | 5220 | 0.0107 | - |
| 0.5868 | 5230 | 0.0005 | - |
| 0.5879 | 5240 | 0.0003 | - |
| 0.5890 | 5250 | 0.0005 | - |
| 0.5901 | 5260 | 0.0002 | - |
| 0.5913 | 5270 | 0.0001 | - |
| 0.5924 | 5280 | 0.0006 | - |
| 0.5935 | 5290 | 0.0022 | - |
| 0.5946 | 5300 | 0.0004 | - |
| 0.5958 | 5310 | 0.0013 | - |
| 0.5969 | 5320 | 0.001 | - |
| 0.5980 | 5330 | 0.0005 | - |
| 0.5991 | 5340 | 0.0026 | - |
| 0.6002 | 5350 | 0.0007 | - |
| 0.6014 | 5360 | 0.0002 | - |
| 0.6025 | 5370 | 0.0052 | - |
| 0.6036 | 5380 | 0.0006 | - |
| 0.6047 | 5390 | 0.0003 | - |
| 0.6059 | 5400 | 0.0012 | - |
| 0.6070 | 5410 | 0.0003 | - |
| 0.6081 | 5420 | 0.0019 | - |
| 0.6092 | 5430 | 0.0003 | - |
| 0.6103 | 5440 | 0.0013 | - |
| 0.6115 | 5450 | 0.0002 | - |
| 0.6126 | 5460 | 0.0019 | - |
| 0.6137 | 5470 | 0.0003 | - |
| 0.6148 | 5480 | 0.0003 | - |
| 0.6160 | 5490 | 0.0011 | - |
| 0.6171 | 5500 | 0.001 | - |
| 0.6182 | 5510 | 0.0068 | - |
| 0.6193 | 5520 | 0.0016 | - |
| 0.6204 | 5530 | 0.0001 | - |
| 0.6216 | 5540 | 0.0019 | - |
| 0.6227 | 5550 | 0.0015 | - |
| 0.6238 | 5560 | 0.0119 | - |
| 0.6249 | 5570 | 0.0009 | - |
| 0.6261 | 5580 | 0.0002 | - |
| 0.6272 | 5590 | 0.0017 | - |
| 0.6283 | 5600 | 0.0001 | - |
| 0.6294 | 5610 | 0.001 | - |
| 0.6305 | 5620 | 0.0005 | - |
| 0.6317 | 5630 | 0.0004 | - |
| 0.6328 | 5640 | 0.0155 | - |
| 0.6339 | 5650 | 0.0028 | - |
| 0.6350 | 5660 | 0.0006 | - |
| 0.6361 | 5670 | 0.0007 | - |
| 0.6373 | 5680 | 0.0025 | - |
| 0.6384 | 5690 | 0.0007 | - |
| 0.6395 | 5700 | 0.0002 | - |
| 0.6406 | 5710 | 0.0011 | - |
| 0.6418 | 5720 | 0.0045 | - |
| 0.6429 | 5730 | 0.0004 | - |
| 0.6440 | 5740 | 0.0005 | - |
| 0.6451 | 5750 | 0.0009 | - |
| 0.6462 | 5760 | 0.0001 | - |
| 0.6474 | 5770 | 0.0005 | - |
| 0.6485 | 5780 | 0.0002 | - |
| 0.6496 | 5790 | 0.0031 | - |
| 0.6507 | 5800 | 0.0016 | - |
| 0.6519 | 5810 | 0.0002 | - |
| 0.6530 | 5820 | 0.0026 | - |
| 0.6541 | 5830 | 0.0004 | - |
| 0.6552 | 5840 | 0.0002 | - |
| 0.6563 | 5850 | 0.0013 | - |
| 0.6575 | 5860 | 0.0012 | - |
| 0.6586 | 5870 | 0.0015 | - |
| 0.6597 | 5880 | 0.0027 | - |
| 0.6608 | 5890 | 0.0045 | - |
| 0.6620 | 5900 | 0.001 | - |
| 0.6631 | 5910 | 0.0035 | - |
| 0.6642 | 5920 | 0.0026 | - |
| 0.6653 | 5930 | 0.0002 | - |
| 0.6664 | 5940 | 0.0003 | - |
| 0.6676 | 5950 | 0.009 | - |
| 0.6687 | 5960 | 0.0012 | - |
| 0.6698 | 5970 | 0.0001 | - |
| 0.6709 | 5980 | 0.0011 | - |
| 0.6721 | 5990 | 0.0021 | - |
| 0.6732 | 6000 | 0.0021 | - |
| 0.6743 | 6010 | 0.0001 | - |
| 0.6754 | 6020 | 0.0008 | - |
| 0.6765 | 6030 | 0.0005 | - |
| 0.6777 | 6040 | 0.0042 | - |
| 0.6788 | 6050 | 0.0033 | - |
| 0.6799 | 6060 | 0.0001 | - |
| 0.6810 | 6070 | 0.0001 | - |
| 0.6821 | 6080 | 0.3232 | - |
| 0.6833 | 6090 | 0.0002 | - |
| 0.6844 | 6100 | 0.0022 | - |
| 0.6855 | 6110 | 0.0003 | - |
| 0.6866 | 6120 | 0.0003 | - |
| 0.6878 | 6130 | 0.0004 | - |
| 0.6889 | 6140 | 0.0004 | - |
| 0.6900 | 6150 | 0.0001 | - |
| 0.6911 | 6160 | 0.0005 | - |
| 0.6922 | 6170 | 0.0002 | - |
| 0.6934 | 6180 | 0.0024 | - |
| 0.6945 | 6190 | 0.0005 | - |
| 0.6956 | 6200 | 0.0018 | - |
| 0.6967 | 6210 | 0.0019 | - |
| 0.6979 | 6220 | 0.0008 | - |
| 0.6990 | 6230 | 0.0004 | - |
| 0.7001 | 6240 | 0.0061 | - |
| 0.7012 | 6250 | 0.0022 | - |
| 0.7023 | 6260 | 0.002 | - |
| 0.7035 | 6270 | 0.0025 | - |
| 0.7046 | 6280 | 0.0007 | - |
| 0.7057 | 6290 | 0.001 | - |
| 0.7068 | 6300 | 0.0005 | - |
| 0.7080 | 6310 | 0.0001 | - |
| 0.7091 | 6320 | 0.0002 | - |
| 0.7102 | 6330 | 0.0017 | - |
| 0.7113 | 6340 | 0.0019 | - |
| 0.7124 | 6350 | 0.0005 | - |
| 0.7136 | 6360 | 0.0011 | - |
| 0.7147 | 6370 | 0.0005 | - |
| 0.7158 | 6380 | 0.0002 | - |
| 0.7169 | 6390 | 0.0001 | - |
| 0.7181 | 6400 | 0.0008 | - |
| 0.7192 | 6410 | 0.0005 | - |
| 0.7203 | 6420 | 0.0004 | - |
| 0.7214 | 6430 | 0.0015 | - |
| 0.7225 | 6440 | 0.0012 | - |
| 0.7237 | 6450 | 0.0002 | - |
| 0.7248 | 6460 | 0.0008 | - |
| 0.7259 | 6470 | 0.0149 | - |
| 0.7270 | 6480 | 0.0022 | - |
| 0.7281 | 6490 | 0.0003 | - |
| 0.7293 | 6500 | 0.0116 | - |
| 0.7304 | 6510 | 0.0003 | - |
| 0.7315 | 6520 | 0.003 | - |
| 0.7326 | 6530 | 0.0009 | - |
| 0.7338 | 6540 | 0.0001 | - |
| 0.7349 | 6550 | 0.0009 | - |
| 0.7360 | 6560 | 0.0001 | - |
| 0.7371 | 6570 | 0.0002 | - |
| 0.7382 | 6580 | 0.0011 | - |
| 0.7394 | 6590 | 0.0052 | - |
| 0.7405 | 6600 | 0.0006 | - |
| 0.7416 | 6610 | 0.3256 | - |
| 0.7427 | 6620 | 0.0015 | - |
| 0.7439 | 6630 | 0.0006 | - |
| 0.7450 | 6640 | 0.0009 | - |
| 0.7461 | 6650 | 0.0004 | - |
| 0.7472 | 6660 | 0.0002 | - |
| 0.7483 | 6670 | 0.0002 | - |
| 0.7495 | 6680 | 0.0004 | - |
| 0.7506 | 6690 | 0.0016 | - |
| 0.7517 | 6700 | 0.0014 | - |
| 0.7528 | 6710 | 0.0005 | - |
| 0.7540 | 6720 | 0.0009 | - |
| 0.7551 | 6730 | 0.0017 | - |
| 0.7562 | 6740 | 0.0014 | - |
| 0.7573 | 6750 | 0.0022 | - |
| 0.7584 | 6760 | 0.002 | - |
| 0.7596 | 6770 | 0.0045 | - |
| 0.7607 | 6780 | 0.0002 | - |
| 0.7618 | 6790 | 0.0006 | - |
| 0.7629 | 6800 | 0.0004 | - |
| 0.7641 | 6810 | 0.0001 | - |
| 0.7652 | 6820 | 0.0005 | - |
| 0.7663 | 6830 | 0.0003 | - |
| 0.7674 | 6840 | 0.0002 | - |
| 0.7685 | 6850 | 0.0003 | - |
| 0.7697 | 6860 | 0.0006 | - |
| 0.7708 | 6870 | 0.0031 | - |
| 0.7719 | 6880 | 0.0009 | - |
| 0.7730 | 6890 | 0.0026 | - |
| 0.7742 | 6900 | 0.0011 | - |
| 0.7753 | 6910 | 0.0005 | - |
| 0.7764 | 6920 | 0.0003 | - |
| 0.7775 | 6930 | 0.004 | - |
| 0.7786 | 6940 | 0.0018 | - |
| 0.7798 | 6950 | 0.0007 | - |
| 0.7809 | 6960 | 0.0017 | - |
| 0.7820 | 6970 | 0.0003 | - |
| 0.7831 | 6980 | 0.0006 | - |
| 0.7842 | 6990 | 0.0001 | - |
| 0.7854 | 7000 | 0.0009 | - |
| 0.7865 | 7010 | 0.0025 | - |
| 0.7876 | 7020 | 0.0015 | - |
| 0.7887 | 7030 | 0.0015 | - |
| 0.7899 | 7040 | 0.0002 | - |
| 0.7910 | 7050 | 0.0005 | - |
| 0.7921 | 7060 | 0.0004 | - |
| 0.7932 | 7070 | 0.0033 | - |
| 0.7943 | 7080 | 0.0004 | - |
| 0.7955 | 7090 | 0.0007 | - |
| 0.7966 | 7100 | 0.0004 | - |
| 0.7977 | 7110 | 0.0013 | - |
| 0.7988 | 7120 | 0.0004 | - |
| 0.8000 | 7130 | 0.0062 | - |
| 0.8011 | 7140 | 0.0005 | - |
| 0.8022 | 7150 | 0.0057 | - |
| 0.8033 | 7160 | 0.0016 | - |
| 0.8044 | 7170 | 0.0001 | - |
| 0.8056 | 7180 | 0.0 | - |
| 0.8067 | 7190 | 0.0004 | - |
| 0.8078 | 7200 | 0.0034 | - |
| 0.8089 | 7210 | 0.0014 | - |
| 0.8101 | 7220 | 0.001 | - |
| 0.8112 | 7230 | 0.0004 | - |
| 0.8123 | 7240 | 0.0002 | - |
| 0.8134 | 7250 | 0.0001 | - |
| 0.8145 | 7260 | 0.0006 | - |
| 0.8157 | 7270 | 0.0013 | - |
| 0.8168 | 7280 | 0.0002 | - |
| 0.8179 | 7290 | 0.0009 | - |
| 0.8190 | 7300 | 0.0028 | - |
| 0.8202 | 7310 | 0.0004 | - |
| 0.8213 | 7320 | 0.0016 | - |
| 0.8224 | 7330 | 0.0005 | - |
| 0.8235 | 7340 | 0.0003 | - |
| 0.8246 | 7350 | 0.0001 | - |
| 0.8258 | 7360 | 0.0031 | - |
| 0.8269 | 7370 | 0.0013 | - |
| 0.8280 | 7380 | 0.0008 | - |
| 0.8291 | 7390 | 0.0002 | - |
| 0.8302 | 7400 | 0.0011 | - |
| 0.8314 | 7410 | 0.0035 | - |
| 0.8325 | 7420 | 0.0002 | - |
| 0.8336 | 7430 | 0.0002 | - |
| 0.8347 | 7440 | 0.0002 | - |
| 0.8359 | 7450 | 0.0018 | - |
| 0.8370 | 7460 | 0.0116 | - |
| 0.8381 | 7470 | 0.0004 | - |
| 0.8392 | 7480 | 0.0005 | - |
| 0.8403 | 7490 | 0.0006 | - |
| 0.8415 | 7500 | 0.0004 | - |
| 0.8426 | 7510 | 0.0011 | - |
| 0.8437 | 7520 | 0.0003 | - |
| 0.8448 | 7530 | 0.0017 | - |
| 0.8460 | 7540 | 0.0006 | - |
| 0.8471 | 7550 | 0.0001 | - |
| 0.8482 | 7560 | 0.0012 | - |
| 0.8493 | 7570 | 0.0011 | - |
| 0.8504 | 7580 | 0.0003 | - |
| 0.8516 | 7590 | 0.0005 | - |
| 0.8527 | 7600 | 0.0002 | - |
| 0.8538 | 7610 | 0.0014 | - |
| 0.8549 | 7620 | 0.0001 | - |
| 0.8561 | 7630 | 0.002 | - |
| 0.8572 | 7640 | 0.003 | - |
| 0.8583 | 7650 | 0.0 | - |
| 0.8594 | 7660 | 0.0011 | - |
| 0.8605 | 7670 | 0.0018 | - |
| 0.8617 | 7680 | 0.0039 | - |
| 0.8628 | 7690 | 0.0036 | - |
| 0.8639 | 7700 | 0.0015 | - |
| 0.8650 | 7710 | 0.0011 | - |
| 0.8662 | 7720 | 0.0015 | - |
| 0.8673 | 7730 | 0.0007 | - |
| 0.8684 | 7740 | 0.3175 | - |
| 0.8695 | 7750 | 0.0001 | - |
| 0.8706 | 7760 | 0.0003 | - |
| 0.8718 | 7770 | 0.0002 | - |
| 0.8729 | 7780 | 0.0003 | - |
| 0.8740 | 7790 | 0.0029 | - |
| 0.8751 | 7800 | 0.0005 | - |
| 0.8762 | 7810 | 0.0004 | - |
| 0.8774 | 7820 | 0.0004 | - |
| 0.8785 | 7830 | 0.0002 | - |
| 0.8796 | 7840 | 0.0003 | - |
| 0.8807 | 7850 | 0.0002 | - |
| 0.8819 | 7860 | 0.0006 | - |
| 0.8830 | 7870 | 0.0006 | - |
| 0.8841 | 7880 | 0.0003 | - |
| 0.8852 | 7890 | 0.0009 | - |
| 0.8863 | 7900 | 0.0003 | - |
| 0.8875 | 7910 | 0.0003 | - |
| 0.8886 | 7920 | 0.0012 | - |
| 0.8897 | 7930 | 0.0009 | - |
| 0.8908 | 7940 | 0.0025 | - |
| 0.8920 | 7950 | 0.0002 | - |
| 0.8931 | 7960 | 0.0007 | - |
| 0.8942 | 7970 | 0.0005 | - |
| 0.8953 | 7980 | 0.0004 | - |
| 0.8964 | 7990 | 0.0001 | - |
| 0.8976 | 8000 | 0.0001 | - |
| 0.8987 | 8010 | 0.0004 | - |
| 0.8998 | 8020 | 0.0001 | - |
| 0.9009 | 8030 | 0.0011 | - |
| 0.9021 | 8040 | 0.0007 | - |
| 0.9032 | 8050 | 0.0003 | - |
| 0.9043 | 8060 | 0.0001 | - |
| 0.9054 | 8070 | 0.0017 | - |
| 0.9065 | 8080 | 0.0008 | - |
| 0.9077 | 8090 | 0.0022 | - |
| 0.9088 | 8100 | 0.0002 | - |
| 0.9099 | 8110 | 0.0008 | - |
| 0.9110 | 8120 | 0.0002 | - |
| 0.9122 | 8130 | 0.0007 | - |
| 0.9133 | 8140 | 0.0001 | - |
| 0.9144 | 8150 | 0.0008 | - |
| 0.9155 | 8160 | 0.0003 | - |
| 0.9166 | 8170 | 0.001 | - |
| 0.9178 | 8180 | 0.0005 | - |
| 0.9189 | 8190 | 0.0006 | - |
| 0.9200 | 8200 | 0.0003 | - |
| 0.9211 | 8210 | 0.0026 | - |
| 0.9222 | 8220 | 0.0014 | - |
| 0.9234 | 8230 | 0.0001 | - |
| 0.9245 | 8240 | 0.0018 | - |
| 0.9256 | 8250 | 0.0002 | - |
| 0.9267 | 8260 | 0.0003 | - |
| 0.9279 | 8270 | 0.0002 | - |
| 0.9290 | 8280 | 0.0011 | - |
| 0.9301 | 8290 | 0.0002 | - |
| 0.9312 | 8300 | 0.0004 | - |
| 0.9323 | 8310 | 0.0004 | - |
| 0.9335 | 8320 | 0.0007 | - |
| 0.9346 | 8330 | 0.0016 | - |
| 0.9357 | 8340 | 0.0006 | - |
| 0.9368 | 8350 | 0.0004 | - |
| 0.9380 | 8360 | 0.0008 | - |
| 0.9391 | 8370 | 0.0017 | - |
| 0.9402 | 8380 | 0.0004 | - |
| 0.9413 | 8390 | 0.0004 | - |
| 0.9424 | 8400 | 0.0001 | - |
| 0.9436 | 8410 | 0.0019 | - |
| 0.9447 | 8420 | 0.0019 | - |
| 0.9458 | 8430 | 0.0003 | - |
| 0.9469 | 8440 | 0.0005 | - |
| 0.9481 | 8450 | 0.0003 | - |
| 0.9492 | 8460 | 0.0002 | - |
| 0.9503 | 8470 | 0.3168 | - |
| 0.9514 | 8480 | 0.0008 | - |
| 0.9525 | 8490 | 0.0003 | - |
| 0.9537 | 8500 | 0.0001 | - |
| 0.9548 | 8510 | 0.0009 | - |
| 0.9559 | 8520 | 0.0001 | - |
| 0.9570 | 8530 | 0.0002 | - |
| 0.9582 | 8540 | 0.0007 | - |
| 0.9593 | 8550 | 0.0004 | - |
| 0.9604 | 8560 | 0.0005 | - |
| 0.9615 | 8570 | 0.0 | - |
| 0.9626 | 8580 | 0.0001 | - |
| 0.9638 | 8590 | 0.0009 | - |
| 0.9649 | 8600 | 0.0005 | - |
| 0.9660 | 8610 | 0.0015 | - |
| 0.9671 | 8620 | 0.0001 | - |
| 0.9682 | 8630 | 0.0009 | - |
| 0.9694 | 8640 | 0.0011 | - |
| 0.9705 | 8650 | 0.0006 | - |
| 0.9716 | 8660 | 0.0003 | - |
| 0.9727 | 8670 | 0.0001 | - |
| 0.9739 | 8680 | 0.0097 | - |
| 0.9750 | 8690 | 0.0007 | - |
| 0.9761 | 8700 | 0.0004 | - |
| 0.9772 | 8710 | 0.0015 | - |
| 0.9783 | 8720 | 0.0002 | - |
| 0.9795 | 8730 | 0.0007 | - |
| 0.9806 | 8740 | 0.0009 | - |
| 0.9817 | 8750 | 0.0002 | - |
| 0.9828 | 8760 | 0.0003 | - |
| 0.9840 | 8770 | 0.0001 | - |
| 0.9851 | 8780 | 0.0022 | - |
| 0.9862 | 8790 | 0.0001 | - |
| 0.9873 | 8800 | 0.0027 | - |
| 0.9884 | 8810 | 0.0021 | - |
| 0.9896 | 8820 | 0.0001 | - |
| 0.9907 | 8830 | 0.0008 | - |
| 0.9918 | 8840 | 0.0001 | - |
| 0.9929 | 8850 | 0.0001 | - |
| 0.9941 | 8860 | 0.0002 | - |
| 0.9952 | 8870 | 0.0004 | - |
| 0.9963 | 8880 | 0.0008 | - |
| 0.9974 | 8890 | 0.0006 | - |
| 0.9985 | 8900 | 0.0002 | - |
| 0.9997 | 8910 | 0.0006 | - |
| **1.0** | **8913** | **-** | **0.0244** |
| 1.0008 | 8920 | 0.0012 | - |
| 1.0019 | 8930 | 0.0002 | - |
| 1.0030 | 8940 | 0.0004 | - |
| 1.0042 | 8950 | 0.0004 | - |
| 1.0053 | 8960 | 0.0001 | - |
| 1.0064 | 8970 | 0.0004 | - |
| 1.0075 | 8980 | 0.0003 | - |
| 1.0086 | 8990 | 0.0002 | - |
| 1.0098 | 9000 | 0.0004 | - |
| 1.0109 | 9010 | 0.0003 | - |
| 1.0120 | 9020 | 0.0001 | - |
| 1.0131 | 9030 | 0.001 | - |
| 1.0142 | 9040 | 0.0002 | - |
| 1.0154 | 9050 | 0.0003 | - |
| 1.0165 | 9060 | 0.0001 | - |
| 1.0176 | 9070 | 0.0002 | - |
| 1.0187 | 9080 | 0.0 | - |
| 1.0199 | 9090 | 0.0001 | - |
| 1.0210 | 9100 | 0.0009 | - |
| 1.0221 | 9110 | 0.0004 | - |
| 1.0232 | 9120 | 0.0016 | - |
| 1.0243 | 9130 | 0.0026 | - |
| 1.0255 | 9140 | 0.0002 | - |
| 1.0266 | 9150 | 0.0007 | - |
| 1.0277 | 9160 | 0.0001 | - |
| 1.0288 | 9170 | 0.0002 | - |
| 1.0300 | 9180 | 0.0 | - |
| 1.0311 | 9190 | 0.0002 | - |
| 1.0322 | 9200 | 0.0003 | - |
| 1.0333 | 9210 | 0.0004 | - |
| 1.0344 | 9220 | 0.0009 | - |
| 1.0356 | 9230 | 0.0001 | - |
| 1.0367 | 9240 | 0.0024 | - |
| 1.0378 | 9250 | 0.0023 | - |
| 1.0389 | 9260 | 0.0002 | - |
| 1.0401 | 9270 | 0.0003 | - |
| 1.0412 | 9280 | 0.0001 | - |
| 1.0423 | 9290 | 0.0003 | - |
| 1.0434 | 9300 | 0.0 | - |
| 1.0445 | 9310 | 0.0003 | - |
| 1.0457 | 9320 | 0.0001 | - |
| 1.0468 | 9330 | 0.0001 | - |
| 1.0479 | 9340 | 0.0004 | - |
| 1.0490 | 9350 | 0.0028 | - |
| 1.0502 | 9360 | 0.0003 | - |
| 1.0513 | 9370 | 0.0007 | - |
| 1.0524 | 9380 | 0.0001 | - |
| 1.0535 | 9390 | 0.0001 | - |
| 1.0546 | 9400 | 0.0001 | - |
| 1.0558 | 9410 | 0.0002 | - |
| 1.0569 | 9420 | 0.0 | - |
| 1.0580 | 9430 | 0.0002 | - |
| 1.0591 | 9440 | 0.0001 | - |
| 1.0602 | 9450 | 0.0076 | - |
| 1.0614 | 9460 | 0.0 | - |
| 1.0625 | 9470 | 0.0007 | - |
| 1.0636 | 9480 | 0.0002 | - |
| 1.0647 | 9490 | 0.0001 | - |
| 1.0659 | 9500 | 0.0005 | - |
| 1.0670 | 9510 | 0.0032 | - |
| 1.0681 | 9520 | 0.0008 | - |
| 1.0692 | 9530 | 0.0001 | - |
| 1.0703 | 9540 | 0.0003 | - |
| 1.0715 | 9550 | 0.0001 | - |
| 1.0726 | 9560 | 0.0002 | - |
| 1.0737 | 9570 | 0.0005 | - |
| 1.0748 | 9580 | 0.0002 | - |
| 1.0760 | 9590 | 0.0001 | - |
| 1.0771 | 9600 | 0.0003 | - |
| 1.0782 | 9610 | 0.0001 | - |
| 1.0793 | 9620 | 0.0007 | - |
| 1.0804 | 9630 | 0.0001 | - |
| 1.0816 | 9640 | 0.0001 | - |
| 1.0827 | 9650 | 0.0016 | - |
| 1.0838 | 9660 | 0.0012 | - |
| 1.0849 | 9670 | 0.0001 | - |
| 1.0861 | 9680 | 0.0002 | - |
| 1.0872 | 9690 | 0.0002 | - |
| 1.0883 | 9700 | 0.0001 | - |
| 1.0894 | 9710 | 0.0002 | - |
| 1.0905 | 9720 | 0.0001 | - |
| 1.0917 | 9730 | 0.0001 | - |
| 1.0928 | 9740 | 0.0003 | - |
| 1.0939 | 9750 | 0.0004 | - |
| 1.0950 | 9760 | 0.0002 | - |
| 1.0962 | 9770 | 0.0001 | - |
| 1.0973 | 9780 | 0.0001 | - |
| 1.0984 | 9790 | 0.0003 | - |
| 1.0995 | 9800 | 0.0008 | - |
| 1.1006 | 9810 | 0.0016 | - |
| 1.1018 | 9820 | 0.0001 | - |
| 1.1029 | 9830 | 0.0029 | - |
| 1.1040 | 9840 | 0.0005 | - |
| 1.1051 | 9850 | 0.0006 | - |
| 1.1062 | 9860 | 0.0018 | - |
| 1.1074 | 9870 | 0.0005 | - |
| 1.1085 | 9880 | 0.0004 | - |
| 1.1096 | 9890 | 0.0001 | - |
| 1.1107 | 9900 | 0.0002 | - |
| 1.1119 | 9910 | 0.0003 | - |
| 1.1130 | 9920 | 0.0007 | - |
| 1.1141 | 9930 | 0.0006 | - |
| 1.1152 | 9940 | 0.0004 | - |
| 1.1163 | 9950 | 0.0001 | - |
| 1.1175 | 9960 | 0.0001 | - |
| 1.1186 | 9970 | 0.0008 | - |
| 1.1197 | 9980 | 0.0 | - |
| 1.1208 | 9990 | 0.0004 | - |
| 1.1220 | 10000 | 0.0002 | - |
| 1.1231 | 10010 | 0.0002 | - |
| 1.1242 | 10020 | 0.0 | - |
| 1.1253 | 10030 | 0.0003 | - |
| 1.1264 | 10040 | 0.0008 | - |
| 1.1276 | 10050 | 0.0005 | - |
| 1.1287 | 10060 | 0.0069 | - |
| 1.1298 | 10070 | 0.0004 | - |
| 1.1309 | 10080 | 0.0038 | - |
| 1.1321 | 10090 | 0.0004 | - |
| 1.1332 | 10100 | 0.0001 | - |
| 1.1343 | 10110 | 0.0001 | - |
| 1.1354 | 10120 | 0.0001 | - |
| 1.1365 | 10130 | 0.0027 | - |
| 1.1377 | 10140 | 0.0009 | - |
| 1.1388 | 10150 | 0.0002 | - |
| 1.1399 | 10160 | 0.0004 | - |
| 1.1410 | 10170 | 0.0005 | - |
| 1.1422 | 10180 | 0.0012 | - |
| 1.1433 | 10190 | 0.0002 | - |
| 1.1444 | 10200 | 0.0003 | - |
| 1.1455 | 10210 | 0.0074 | - |
| 1.1466 | 10220 | 0.0001 | - |
| 1.1478 | 10230 | 0.001 | - |
| 1.1489 | 10240 | 0.0001 | - |
| 1.1500 | 10250 | 0.0002 | - |
| 1.1511 | 10260 | 0.0001 | - |
| 1.1522 | 10270 | 0.0001 | - |
| 1.1534 | 10280 | 0.0003 | - |
| 1.1545 | 10290 | 0.0001 | - |
| 1.1556 | 10300 | 0.0001 | - |
| 1.1567 | 10310 | 0.0182 | - |
| 1.1579 | 10320 | 0.0002 | - |
| 1.1590 | 10330 | 0.0023 | - |
| 1.1601 | 10340 | 0.0002 | - |
| 1.1612 | 10350 | 0.0005 | - |
| 1.1623 | 10360 | 0.0005 | - |
| 1.1635 | 10370 | 0.0001 | - |
| 1.1646 | 10380 | 0.0014 | - |
| 1.1657 | 10390 | 0.0001 | - |
| 1.1668 | 10400 | 0.0014 | - |
| 1.1680 | 10410 | 0.0001 | - |
| 1.1691 | 10420 | 0.0004 | - |
| 1.1702 | 10430 | 0.0001 | - |
| 1.1713 | 10440 | 0.0008 | - |
| 1.1724 | 10450 | 0.3055 | - |
| 1.1736 | 10460 | 0.0 | - |
| 1.1747 | 10470 | 0.0001 | - |
| 1.1758 | 10480 | 0.0007 | - |
| 1.1769 | 10490 | 0.0001 | - |
| 1.1781 | 10500 | 0.0001 | - |
| 1.1792 | 10510 | 0.0001 | - |
| 1.1803 | 10520 | 0.0 | - |
| 1.1814 | 10530 | 0.0001 | - |
| 1.1825 | 10540 | 0.0 | - |
| 1.1837 | 10550 | 0.0009 | - |
| 1.1848 | 10560 | 0.0001 | - |
| 1.1859 | 10570 | 0.0002 | - |
| 1.1870 | 10580 | 0.0001 | - |
| 1.1882 | 10590 | 0.0003 | - |
| 1.1893 | 10600 | 0.0002 | - |
| 1.1904 | 10610 | 0.0004 | - |
| 1.1915 | 10620 | 0.0002 | - |
| 1.1926 | 10630 | 0.0001 | - |
| 1.1938 | 10640 | 0.0002 | - |
| 1.1949 | 10650 | 0.0006 | - |
| 1.1960 | 10660 | 0.0032 | - |
| 1.1971 | 10670 | 0.0002 | - |
| 1.1982 | 10680 | 0.0004 | - |
| 1.1994 | 10690 | 0.0001 | - |
| 1.2005 | 10700 | 0.0004 | - |
| 1.2016 | 10710 | 0.0001 | - |
| 1.2027 | 10720 | 0.0001 | - |
| 1.2039 | 10730 | 0.0002 | - |
| 1.2050 | 10740 | 0.0007 | - |
| 1.2061 | 10750 | 0.0031 | - |
| 1.2072 | 10760 | 0.0002 | - |
| 1.2083 | 10770 | 0.0002 | - |
| 1.2095 | 10780 | 0.0001 | - |
| 1.2106 | 10790 | 0.0007 | - |
| 1.2117 | 10800 | 0.0001 | - |
| 1.2128 | 10810 | 0.0005 | - |
| 1.2140 | 10820 | 0.0 | - |
| 1.2151 | 10830 | 0.0011 | - |
| 1.2162 | 10840 | 0.0001 | - |
| 1.2173 | 10850 | 0.0004 | - |
| 1.2184 | 10860 | 0.0002 | - |
| 1.2196 | 10870 | 0.0001 | - |
| 1.2207 | 10880 | 0.0001 | - |
| 1.2218 | 10890 | 0.0005 | - |
| 1.2229 | 10900 | 0.0001 | - |
| 1.2241 | 10910 | 0.0001 | - |
| 1.2252 | 10920 | 0.001 | - |
| 1.2263 | 10930 | 0.0003 | - |
| 1.2274 | 10940 | 0.0004 | - |
| 1.2285 | 10950 | 0.0005 | - |
| 1.2297 | 10960 | 0.0001 | - |
| 1.2308 | 10970 | 0.0003 | - |
| 1.2319 | 10980 | 0.0001 | - |
| 1.2330 | 10990 | 0.0001 | - |
| 1.2342 | 11000 | 0.0001 | - |
| 1.2353 | 11010 | 0.0001 | - |
| 1.2364 | 11020 | 0.0005 | - |
| 1.2375 | 11030 | 0.0001 | - |
| 1.2386 | 11040 | 0.0001 | - |
| 1.2398 | 11050 | 0.0002 | - |
| 1.2409 | 11060 | 0.0002 | - |
| 1.2420 | 11070 | 0.0005 | - |
| 1.2431 | 11080 | 0.0001 | - |
| 1.2442 | 11090 | 0.0004 | - |
| 1.2454 | 11100 | 0.0001 | - |
| 1.2465 | 11110 | 0.0004 | - |
| 1.2476 | 11120 | 0.0007 | - |
| 1.2487 | 11130 | 0.0008 | - |
| 1.2499 | 11140 | 0.0001 | - |
| 1.2510 | 11150 | 0.0008 | - |
| 1.2521 | 11160 | 0.0002 | - |
| 1.2532 | 11170 | 0.0002 | - |
| 1.2543 | 11180 | 0.0003 | - |
| 1.2555 | 11190 | 0.0009 | - |
| 1.2566 | 11200 | 0.0008 | - |
| 1.2577 | 11210 | 0.0007 | - |
| 1.2588 | 11220 | 0.0002 | - |
| 1.2600 | 11230 | 0.0 | - |
| 1.2611 | 11240 | 0.3249 | - |
| 1.2622 | 11250 | 0.0 | - |
| 1.2633 | 11260 | 0.0001 | - |
| 1.2644 | 11270 | 0.0002 | - |
| 1.2656 | 11280 | 0.0019 | - |
| 1.2667 | 11290 | 0.0001 | - |
| 1.2678 | 11300 | 0.0 | - |
| 1.2689 | 11310 | 0.0 | - |
| 1.2701 | 11320 | 0.0001 | - |
| 1.2712 | 11330 | 0.0011 | - |
| 1.2723 | 11340 | 0.0001 | - |
| 1.2734 | 11350 | 0.0001 | - |
| 1.2745 | 11360 | 0.0002 | - |
| 1.2757 | 11370 | 0.0001 | - |
| 1.2768 | 11380 | 0.0002 | - |
| 1.2779 | 11390 | 0.0004 | - |
| 1.2790 | 11400 | 0.0 | - |
| 1.2802 | 11410 | 0.0 | - |
| 1.2813 | 11420 | 0.0001 | - |
| 1.2824 | 11430 | 0.0004 | - |
| 1.2835 | 11440 | 0.0001 | - |
| 1.2846 | 11450 | 0.0006 | - |
| 1.2858 | 11460 | 0.0002 | - |
| 1.2869 | 11470 | 0.0004 | - |
| 1.2880 | 11480 | 0.0001 | - |
| 1.2891 | 11490 | 0.0 | - |
| 1.2903 | 11500 | 0.0001 | - |
| 1.2914 | 11510 | 0.0003 | - |
| 1.2925 | 11520 | 0.0001 | - |
| 1.2936 | 11530 | 0.0002 | - |
| 1.2947 | 11540 | 0.0003 | - |
| 1.2959 | 11550 | 0.0001 | - |
| 1.2970 | 11560 | 0.0001 | - |
| 1.2981 | 11570 | 0.0001 | - |
| 1.2992 | 11580 | 0.0003 | - |
| 1.3003 | 11590 | 0.0 | - |
| 1.3015 | 11600 | 0.0002 | - |
| 1.3026 | 11610 | 0.0001 | - |
| 1.3037 | 11620 | 0.0012 | - |
| 1.3048 | 11630 | 0.0002 | - |
| 1.3060 | 11640 | 0.0001 | - |
| 1.3071 | 11650 | 0.0001 | - |
| 1.3082 | 11660 | 0.0002 | - |
| 1.3093 | 11670 | 0.0001 | - |
| 1.3104 | 11680 | 0.0012 | - |
| 1.3116 | 11690 | 0.0002 | - |
| 1.3127 | 11700 | 0.0 | - |
| 1.3138 | 11710 | 0.0001 | - |
| 1.3149 | 11720 | 0.0002 | - |
| 1.3161 | 11730 | 0.0008 | - |
| 1.3172 | 11740 | 0.0001 | - |
| 1.3183 | 11750 | 0.0003 | - |
| 1.3194 | 11760 | 0.0001 | - |
| 1.3205 | 11770 | 0.0001 | - |
| 1.3217 | 11780 | 0.0004 | - |
| 1.3228 | 11790 | 0.0002 | - |
| 1.3239 | 11800 | 0.0005 | - |
| 1.3250 | 11810 | 0.0001 | - |
| 1.3262 | 11820 | 0.0001 | - |
| 1.3273 | 11830 | 0.0001 | - |
| 1.3284 | 11840 | 0.0001 | - |
| 1.3295 | 11850 | 0.0001 | - |
| 1.3306 | 11860 | 0.0001 | - |
| 1.3318 | 11870 | 0.0001 | - |
| 1.3329 | 11880 | 0.0002 | - |
| 1.3340 | 11890 | 0.0001 | - |
| 1.3351 | 11900 | 0.0 | - |
| 1.3363 | 11910 | 0.0006 | - |
| 1.3374 | 11920 | 0.0001 | - |
| 1.3385 | 11930 | 0.0001 | - |
| 1.3396 | 11940 | 0.0002 | - |
| 1.3407 | 11950 | 0.0003 | - |
| 1.3419 | 11960 | 0.0002 | - |
| 1.3430 | 11970 | 0.0001 | - |
| 1.3441 | 11980 | 0.0002 | - |
| 1.3452 | 11990 | 0.0011 | - |
| 1.3463 | 12000 | 0.0008 | - |
| 1.3475 | 12010 | 0.0 | - |
| 1.3486 | 12020 | 0.0003 | - |
| 1.3497 | 12030 | 0.0001 | - |
| 1.3508 | 12040 | 0.0003 | - |
| 1.3520 | 12050 | 0.0005 | - |
| 1.3531 | 12060 | 0.0001 | - |
| 1.3542 | 12070 | 0.0001 | - |
| 1.3553 | 12080 | 0.0001 | - |
| 1.3564 | 12090 | 0.0042 | - |
| 1.3576 | 12100 | 0.0008 | - |
| 1.3587 | 12110 | 0.0001 | - |
| 1.3598 | 12120 | 0.0001 | - |
| 1.3609 | 12130 | 0.0011 | - |
| 1.3621 | 12140 | 0.0002 | - |
| 1.3632 | 12150 | 0.0001 | - |
| 1.3643 | 12160 | 0.0015 | - |
| 1.3654 | 12170 | 0.0004 | - |
| 1.3665 | 12180 | 0.0001 | - |
| 1.3677 | 12190 | 0.0001 | - |
| 1.3688 | 12200 | 0.0001 | - |
| 1.3699 | 12210 | 0.0001 | - |
| 1.3710 | 12220 | 0.0 | - |
| 1.3722 | 12230 | 0.0001 | - |
| 1.3733 | 12240 | 0.0005 | - |
| 1.3744 | 12250 | 0.0002 | - |
| 1.3755 | 12260 | 0.0001 | - |
| 1.3766 | 12270 | 0.0003 | - |
| 1.3778 | 12280 | 0.0006 | - |
| 1.3789 | 12290 | 0.0002 | - |
| 1.3800 | 12300 | 0.0002 | - |
| 1.3811 | 12310 | 0.0002 | - |
| 1.3823 | 12320 | 0.0001 | - |
| 1.3834 | 12330 | 0.0001 | - |
| 1.3845 | 12340 | 0.0 | - |
| 1.3856 | 12350 | 0.0003 | - |
| 1.3867 | 12360 | 0.0001 | - |
| 1.3879 | 12370 | 0.0001 | - |
| 1.3890 | 12380 | 0.0001 | - |
| 1.3901 | 12390 | 0.0011 | - |
| 1.3912 | 12400 | 0.0001 | - |
| 1.3923 | 12410 | 0.0001 | - |
| 1.3935 | 12420 | 0.0001 | - |
| 1.3946 | 12430 | 0.0001 | - |
| 1.3957 | 12440 | 0.0001 | - |
| 1.3968 | 12450 | 0.0001 | - |
| 1.3980 | 12460 | 0.0001 | - |
| 1.3991 | 12470 | 0.0002 | - |
| 1.4002 | 12480 | 0.0002 | - |
| 1.4013 | 12490 | 0.0002 | - |
| 1.4024 | 12500 | 0.0001 | - |
| 1.4036 | 12510 | 0.0027 | - |
| 1.4047 | 12520 | 0.0001 | - |
| 1.4058 | 12530 | 0.0001 | - |
| 1.4069 | 12540 | 0.0003 | - |
| 1.4081 | 12550 | 0.0025 | - |
| 1.4092 | 12560 | 0.0001 | - |
| 1.4103 | 12570 | 0.0005 | - |
| 1.4114 | 12580 | 0.0002 | - |
| 1.4125 | 12590 | 0.0007 | - |
| 1.4137 | 12600 | 0.0001 | - |
| 1.4148 | 12610 | 0.0001 | - |
| 1.4159 | 12620 | 0.0001 | - |
| 1.4170 | 12630 | 0.0001 | - |
| 1.4182 | 12640 | 0.0011 | - |
| 1.4193 | 12650 | 0.0026 | - |
| 1.4204 | 12660 | 0.0002 | - |
| 1.4215 | 12670 | 0.0003 | - |
| 1.4226 | 12680 | 0.0 | - |
| 1.4238 | 12690 | 0.0001 | - |
| 1.4249 | 12700 | 0.0004 | - |
| 1.4260 | 12710 | 0.0002 | - |
| 1.4271 | 12720 | 0.0001 | - |
| 1.4283 | 12730 | 0.0002 | - |
| 1.4294 | 12740 | 0.0003 | - |
| 1.4305 | 12750 | 0.0012 | - |
| 1.4316 | 12760 | 0.0011 | - |
| 1.4327 | 12770 | 0.0006 | - |
| 1.4339 | 12780 | 0.0001 | - |
| 1.4350 | 12790 | 0.0 | - |
| 1.4361 | 12800 | 0.0001 | - |
| 1.4372 | 12810 | 0.0002 | - |
| 1.4383 | 12820 | 0.0003 | - |
| 1.4395 | 12830 | 0.0001 | - |
| 1.4406 | 12840 | 0.0001 | - |
| 1.4417 | 12850 | 0.0 | - |
| 1.4428 | 12860 | 0.0002 | - |
| 1.4440 | 12870 | 0.0 | - |
| 1.4451 | 12880 | 0.0004 | - |
| 1.4462 | 12890 | 0.0001 | - |
| 1.4473 | 12900 | 0.0015 | - |
| 1.4484 | 12910 | 0.0004 | - |
| 1.4496 | 12920 | 0.0001 | - |
| 1.4507 | 12930 | 0.0001 | - |
| 1.4518 | 12940 | 0.0001 | - |
| 1.4529 | 12950 | 0.0 | - |
| 1.4541 | 12960 | 0.0001 | - |
| 1.4552 | 12970 | 0.0002 | - |
| 1.4563 | 12980 | 0.0001 | - |
| 1.4574 | 12990 | 0.0001 | - |
| 1.4585 | 13000 | 0.0001 | - |
| 1.4597 | 13010 | 0.0007 | - |
| 1.4608 | 13020 | 0.0002 | - |
| 1.4619 | 13030 | 0.0003 | - |
| 1.4630 | 13040 | 0.0003 | - |
| 1.4642 | 13050 | 0.0001 | - |
| 1.4653 | 13060 | 0.0006 | - |
| 1.4664 | 13070 | 0.0025 | - |
| 1.4675 | 13080 | 0.0002 | - |
| 1.4686 | 13090 | 0.0005 | - |
| 1.4698 | 13100 | 0.0001 | - |
| 1.4709 | 13110 | 0.0 | - |
| 1.4720 | 13120 | 0.0002 | - |
| 1.4731 | 13130 | 0.0 | - |
| 1.4743 | 13140 | 0.0001 | - |
| 1.4754 | 13150 | 0.0001 | - |
| 1.4765 | 13160 | 0.0002 | - |
| 1.4776 | 13170 | 0.0 | - |
| 1.4787 | 13180 | 0.0001 | - |
| 1.4799 | 13190 | 0.0004 | - |
| 1.4810 | 13200 | 0.0 | - |
| 1.4821 | 13210 | 0.0008 | - |
| 1.4832 | 13220 | 0.0001 | - |
| 1.4843 | 13230 | 0.0001 | - |
| 1.4855 | 13240 | 0.0002 | - |
| 1.4866 | 13250 | 0.0018 | - |
| 1.4877 | 13260 | 0.0001 | - |
| 1.4888 | 13270 | 0.0003 | - |
| 1.4900 | 13280 | 0.0198 | - |
| 1.4911 | 13290 | 0.0007 | - |
| 1.4922 | 13300 | 0.0001 | - |
| 1.4933 | 13310 | 0.0008 | - |
| 1.4944 | 13320 | 0.0002 | - |
| 1.4956 | 13330 | 0.0006 | - |
| 1.4967 | 13340 | 0.0001 | - |
| 1.4978 | 13350 | 0.0001 | - |
| 1.4989 | 13360 | 0.0004 | - |
| 1.5001 | 13370 | 0.0001 | - |
| 1.5012 | 13380 | 0.0007 | - |
| 1.5023 | 13390 | 0.0001 | - |
| 1.5034 | 13400 | 0.0001 | - |
| 1.5045 | 13410 | 0.0 | - |
| 1.5057 | 13420 | 0.0005 | - |
| 1.5068 | 13430 | 0.0001 | - |
| 1.5079 | 13440 | 0.0002 | - |
| 1.5090 | 13450 | 0.0005 | - |
| 1.5102 | 13460 | 0.0011 | - |
| 1.5113 | 13470 | 0.0004 | - |
| 1.5124 | 13480 | 0.0001 | - |
| 1.5135 | 13490 | 0.0009 | - |
| 1.5146 | 13500 | 0.0002 | - |
| 1.5158 | 13510 | 0.0003 | - |
| 1.5169 | 13520 | 0.0001 | - |
| 1.5180 | 13530 | 0.0001 | - |
| 1.5191 | 13540 | 0.0002 | - |
| 1.5203 | 13550 | 0.0001 | - |
| 1.5214 | 13560 | 0.0 | - |
| 1.5225 | 13570 | 0.0 | - |
| 1.5236 | 13580 | 0.0 | - |
| 1.5247 | 13590 | 0.0019 | - |
| 1.5259 | 13600 | 0.004 | - |
| 1.5270 | 13610 | 0.0004 | - |
| 1.5281 | 13620 | 0.0001 | - |
| 1.5292 | 13630 | 0.0004 | - |
| 1.5303 | 13640 | 0.0001 | - |
| 1.5315 | 13650 | 0.0003 | - |
| 1.5326 | 13660 | 0.0001 | - |
| 1.5337 | 13670 | 0.0001 | - |
| 1.5348 | 13680 | 0.0002 | - |
| 1.5360 | 13690 | 0.0002 | - |
| 1.5371 | 13700 | 0.0001 | - |
| 1.5382 | 13710 | 0.0001 | - |
| 1.5393 | 13720 | 0.0001 | - |
| 1.5404 | 13730 | 0.0001 | - |
| 1.5416 | 13740 | 0.0001 | - |
| 1.5427 | 13750 | 0.0001 | - |
| 1.5438 | 13760 | 0.0003 | - |
| 1.5449 | 13770 | 0.0001 | - |
| 1.5461 | 13780 | 0.0001 | - |
| 1.5472 | 13790 | 0.0002 | - |
| 1.5483 | 13800 | 0.0 | - |
| 1.5494 | 13810 | 0.0 | - |
| 1.5505 | 13820 | 0.0001 | - |
| 1.5517 | 13830 | 0.0002 | - |
| 1.5528 | 13840 | 0.0001 | - |
| 1.5539 | 13850 | 0.0002 | - |
| 1.5550 | 13860 | 0.0004 | - |
| 1.5562 | 13870 | 0.0001 | - |
| 1.5573 | 13880 | 0.0 | - |
| 1.5584 | 13890 | 0.0002 | - |
| 1.5595 | 13900 | 0.0003 | - |
| 1.5606 | 13910 | 0.0001 | - |
| 1.5618 | 13920 | 0.0001 | - |
| 1.5629 | 13930 | 0.0001 | - |
| 1.5640 | 13940 | 0.0 | - |
| 1.5651 | 13950 | 0.0003 | - |
| 1.5663 | 13960 | 0.0001 | - |
| 1.5674 | 13970 | 0.0007 | - |
| 1.5685 | 13980 | 0.0003 | - |
| 1.5696 | 13990 | 0.0006 | - |
| 1.5707 | 14000 | 0.0001 | - |
| 1.5719 | 14010 | 0.0009 | - |
| 1.5730 | 14020 | 0.0001 | - |
| 1.5741 | 14030 | 0.3245 | - |
| 1.5752 | 14040 | 0.0003 | - |
| 1.5763 | 14050 | 0.0003 | - |
| 1.5775 | 14060 | 0.0001 | - |
| 1.5786 | 14070 | 0.0001 | - |
| 1.5797 | 14080 | 0.0001 | - |
| 1.5808 | 14090 | 0.0001 | - |
| 1.5820 | 14100 | 0.0 | - |
| 1.5831 | 14110 | 0.0001 | - |
| 1.5842 | 14120 | 0.0017 | - |
| 1.5853 | 14130 | 0.0001 | - |
| 1.5864 | 14140 | 0.0001 | - |
| 1.5876 | 14150 | 0.0002 | - |
| 1.5887 | 14160 | 0.0004 | - |
| 1.5898 | 14170 | 0.0001 | - |
| 1.5909 | 14180 | 0.0005 | - |
| 1.5921 | 14190 | 0.0002 | - |
| 1.5932 | 14200 | 0.0002 | - |
| 1.5943 | 14210 | 0.0134 | - |
| 1.5954 | 14220 | 0.0001 | - |
| 1.5965 | 14230 | 0.0005 | - |
| 1.5977 | 14240 | 0.0001 | - |
| 1.5988 | 14250 | 0.0 | - |
| 1.5999 | 14260 | 0.0001 | - |
| 1.6010 | 14270 | 0.0004 | - |
| 1.6022 | 14280 | 0.0004 | - |
| 1.6033 | 14290 | 0.0001 | - |
| 1.6044 | 14300 | 0.0001 | - |
| 1.6055 | 14310 | 0.0012 | - |
| 1.6066 | 14320 | 0.0003 | - |
| 1.6078 | 14330 | 0.0001 | - |
| 1.6089 | 14340 | 0.0001 | - |
| 1.6100 | 14350 | 0.0004 | - |
| 1.6111 | 14360 | 0.0012 | - |
| 1.6123 | 14370 | 0.0002 | - |
| 1.6134 | 14380 | 0.001 | - |
| 1.6145 | 14390 | 0.0002 | - |
| 1.6156 | 14400 | 0.0004 | - |
| 1.6167 | 14410 | 0.0001 | - |
| 1.6179 | 14420 | 0.0001 | - |
| 1.6190 | 14430 | 0.0024 | - |
| 1.6201 | 14440 | 0.0002 | - |
| 1.6212 | 14450 | 0.0001 | - |
| 1.6223 | 14460 | 0.0001 | - |
| 1.6235 | 14470 | 0.0002 | - |
| 1.6246 | 14480 | 0.0001 | - |
| 1.6257 | 14490 | 0.0 | - |
| 1.6268 | 14500 | 0.0 | - |
| 1.6280 | 14510 | 0.0 | - |
| 1.6291 | 14520 | 0.0002 | - |
| 1.6302 | 14530 | 0.0001 | - |
| 1.6313 | 14540 | 0.0005 | - |
| 1.6324 | 14550 | 0.0 | - |
| 1.6336 | 14560 | 0.0001 | - |
| 1.6347 | 14570 | 0.0001 | - |
| 1.6358 | 14580 | 0.0001 | - |
| 1.6369 | 14590 | 0.0 | - |
| 1.6381 | 14600 | 0.0002 | - |
| 1.6392 | 14610 | 0.0004 | - |
| 1.6403 | 14620 | 0.0001 | - |
| 1.6414 | 14630 | 0.0009 | - |
| 1.6425 | 14640 | 0.0001 | - |
| 1.6437 | 14650 | 0.0001 | - |
| 1.6448 | 14660 | 0.0001 | - |
| 1.6459 | 14670 | 0.0003 | - |
| 1.6470 | 14680 | 0.0003 | - |
| 1.6482 | 14690 | 0.0 | - |
| 1.6493 | 14700 | 0.0049 | - |
| 1.6504 | 14710 | 0.0001 | - |
| 1.6515 | 14720 | 0.0 | - |
| 1.6526 | 14730 | 0.0002 | - |
| 1.6538 | 14740 | 0.0001 | - |
| 1.6549 | 14750 | 0.0001 | - |
| 1.6560 | 14760 | 0.0001 | - |
| 1.6571 | 14770 | 0.0002 | - |
| 1.6583 | 14780 | 0.0001 | - |
| 1.6594 | 14790 | 0.0002 | - |
| 1.6605 | 14800 | 0.0 | - |
| 1.6616 | 14810 | 0.0001 | - |
| 1.6627 | 14820 | 0.0 | - |
| 1.6639 | 14830 | 0.0001 | - |
| 1.6650 | 14840 | 0.0001 | - |
| 1.6661 | 14850 | 0.0001 | - |
| 1.6672 | 14860 | 0.0001 | - |
| 1.6683 | 14870 | 0.0 | - |
| 1.6695 | 14880 | 0.0001 | - |
| 1.6706 | 14890 | 0.0002 | - |
| 1.6717 | 14900 | 0.0001 | - |
| 1.6728 | 14910 | 0.0002 | - |
| 1.6740 | 14920 | 0.0002 | - |
| 1.6751 | 14930 | 0.0 | - |
| 1.6762 | 14940 | 0.0 | - |
| 1.6773 | 14950 | 0.0004 | - |
| 1.6784 | 14960 | 0.0001 | - |
| 1.6796 | 14970 | 0.0003 | - |
| 1.6807 | 14980 | 0.0 | - |
| 1.6818 | 14990 | 0.0001 | - |
| 1.6829 | 15000 | 0.0001 | - |
| 1.6841 | 15010 | 0.0001 | - |
| 1.6852 | 15020 | 0.0001 | - |
| 1.6863 | 15030 | 0.0007 | - |
| 1.6874 | 15040 | 0.0005 | - |
| 1.6885 | 15050 | 0.0001 | - |
| 1.6897 | 15060 | 0.0001 | - |
| 1.6908 | 15070 | 0.0003 | - |
| 1.6919 | 15080 | 0.0001 | - |
| 1.6930 | 15090 | 0.0 | - |
| 1.6942 | 15100 | 0.0004 | - |
| 1.6953 | 15110 | 0.0042 | - |
| 1.6964 | 15120 | 0.0001 | - |
| 1.6975 | 15130 | 0.0005 | - |
| 1.6986 | 15140 | 0.0002 | - |
| 1.6998 | 15150 | 0.0003 | - |
| 1.7009 | 15160 | 0.0001 | - |
| 1.7020 | 15170 | 0.0001 | - |
| 1.7031 | 15180 | 0.0001 | - |
| 1.7043 | 15190 | 0.0004 | - |
| 1.7054 | 15200 | 0.0001 | - |
| 1.7065 | 15210 | 0.0003 | - |
| 1.7076 | 15220 | 0.0 | - |
| 1.7087 | 15230 | 0.0001 | - |
| 1.7099 | 15240 | 0.0001 | - |
| 1.7110 | 15250 | 0.0074 | - |
| 1.7121 | 15260 | 0.0012 | - |
| 1.7132 | 15270 | 0.3243 | - |
| 1.7143 | 15280 | 0.001 | - |
| 1.7155 | 15290 | 0.0 | - |
| 1.7166 | 15300 | 0.0 | - |
| 1.7177 | 15310 | 0.0001 | - |
| 1.7188 | 15320 | 0.0005 | - |
| 1.7200 | 15330 | 0.0001 | - |
| 1.7211 | 15340 | 0.0001 | - |
| 1.7222 | 15350 | 0.0002 | - |
| 1.7233 | 15360 | 0.0003 | - |
| 1.7244 | 15370 | 0.0 | - |
| 1.7256 | 15380 | 0.0 | - |
| 1.7267 | 15390 | 0.0002 | - |
| 1.7278 | 15400 | 0.0 | - |
| 1.7289 | 15410 | 0.0002 | - |
| 1.7301 | 15420 | 0.0001 | - |
| 1.7312 | 15430 | 0.0003 | - |
| 1.7323 | 15440 | 0.0001 | - |
| 1.7334 | 15450 | 0.0001 | - |
| 1.7345 | 15460 | 0.0 | - |
| 1.7357 | 15470 | 0.0001 | - |
| 1.7368 | 15480 | 0.0001 | - |
| 1.7379 | 15490 | 0.0001 | - |
| 1.7390 | 15500 | 0.0 | - |
| 1.7402 | 15510 | 0.0001 | - |
| 1.7413 | 15520 | 0.0002 | - |
| 1.7424 | 15530 | 0.3123 | - |
| 1.7435 | 15540 | 0.0001 | - |
| 1.7446 | 15550 | 0.0004 | - |
| 1.7458 | 15560 | 0.0002 | - |
| 1.7469 | 15570 | 0.0002 | - |
| 1.7480 | 15580 | 0.0003 | - |
| 1.7491 | 15590 | 0.0001 | - |
| 1.7503 | 15600 | 0.0001 | - |
| 1.7514 | 15610 | 0.0001 | - |
| 1.7525 | 15620 | 0.0002 | - |
| 1.7536 | 15630 | 0.0002 | - |
| 1.7547 | 15640 | 0.0001 | - |
| 1.7559 | 15650 | 0.0001 | - |
| 1.7570 | 15660 | 0.001 | - |
| 1.7581 | 15670 | 0.0001 | - |
| 1.7592 | 15680 | 0.0002 | - |
| 1.7604 | 15690 | 0.0002 | - |
| 1.7615 | 15700 | 0.0 | - |
| 1.7626 | 15710 | 0.0001 | - |
| 1.7637 | 15720 | 0.0003 | - |
| 1.7648 | 15730 | 0.0003 | - |
| 1.7660 | 15740 | 0.0001 | - |
| 1.7671 | 15750 | 0.0 | - |
| 1.7682 | 15760 | 0.0 | - |
| 1.7693 | 15770 | 0.0004 | - |
| 1.7704 | 15780 | 0.001 | - |
| 1.7716 | 15790 | 0.0002 | - |
| 1.7727 | 15800 | 0.0001 | - |
| 1.7738 | 15810 | 0.0003 | - |
| 1.7749 | 15820 | 0.0006 | - |
| 1.7761 | 15830 | 0.0002 | - |
| 1.7772 | 15840 | 0.0007 | - |
| 1.7783 | 15850 | 0.0013 | - |
| 1.7794 | 15860 | 0.0001 | - |
| 1.7805 | 15870 | 0.0001 | - |
| 1.7817 | 15880 | 0.0 | - |
| 1.7828 | 15890 | 0.0001 | - |
| 1.7839 | 15900 | 0.0005 | - |
| 1.7850 | 15910 | 0.0002 | - |
| 1.7862 | 15920 | 0.001 | - |
| 1.7873 | 15930 | 0.0002 | - |
| 1.7884 | 15940 | 0.0005 | - |
| 1.7895 | 15950 | 0.0002 | - |
| 1.7906 | 15960 | 0.0 | - |
| 1.7918 | 15970 | 0.0001 | - |
| 1.7929 | 15980 | 0.0002 | - |
| 1.7940 | 15990 | 0.0002 | - |
| 1.7951 | 16000 | 0.0001 | - |
| 1.7963 | 16010 | 0.0001 | - |
| 1.7974 | 16020 | 0.0025 | - |
| 1.7985 | 16030 | 0.0001 | - |
| 1.7996 | 16040 | 0.0001 | - |
| 1.8007 | 16050 | 0.0 | - |
| 1.8019 | 16060 | 0.0001 | - |
| 1.8030 | 16070 | 0.0001 | - |
| 1.8041 | 16080 | 0.0 | - |
| 1.8052 | 16090 | 0.0 | - |
| 1.8064 | 16100 | 0.0001 | - |
| 1.8075 | 16110 | 0.0 | - |
| 1.8086 | 16120 | 0.0001 | - |
| 1.8097 | 16130 | 0.0 | - |
| 1.8108 | 16140 | 0.0077 | - |
| 1.8120 | 16150 | 0.0002 | - |
| 1.8131 | 16160 | 0.0007 | - |
| 1.8142 | 16170 | 0.0003 | - |
| 1.8153 | 16180 | 0.0003 | - |
| 1.8164 | 16190 | 0.0 | - |
| 1.8176 | 16200 | 0.0002 | - |
| 1.8187 | 16210 | 0.0 | - |
| 1.8198 | 16220 | 0.0002 | - |
| 1.8209 | 16230 | 0.0001 | - |
| 1.8221 | 16240 | 0.0001 | - |
| 1.8232 | 16250 | 0.0001 | - |
| 1.8243 | 16260 | 0.0002 | - |
| 1.8254 | 16270 | 0.0 | - |
| 1.8265 | 16280 | 0.0 | - |
| 1.8277 | 16290 | 0.0 | - |
| 1.8288 | 16300 | 0.0003 | - |
| 1.8299 | 16310 | 0.0001 | - |
| 1.8310 | 16320 | 0.0 | - |
| 1.8322 | 16330 | 0.0002 | - |
| 1.8333 | 16340 | 0.0001 | - |
| 1.8344 | 16350 | 0.0001 | - |
| 1.8355 | 16360 | 0.0001 | - |
| 1.8366 | 16370 | 0.0002 | - |
| 1.8378 | 16380 | 0.0001 | - |
| 1.8389 | 16390 | 0.0002 | - |
| 1.8400 | 16400 | 0.0002 | - |
| 1.8411 | 16410 | 0.0 | - |
| 1.8423 | 16420 | 0.0001 | - |
| 1.8434 | 16430 | 0.0001 | - |
| 1.8445 | 16440 | 0.0001 | - |
| 1.8456 | 16450 | 0.0 | - |
| 1.8467 | 16460 | 0.0001 | - |
| 1.8479 | 16470 | 0.0001 | - |
| 1.8490 | 16480 | 0.0036 | - |
| 1.8501 | 16490 | 0.0001 | - |
| 1.8512 | 16500 | 0.0002 | - |
| 1.8524 | 16510 | 0.0003 | - |
| 1.8535 | 16520 | 0.0 | - |
| 1.8546 | 16530 | 0.0 | - |
| 1.8557 | 16540 | 0.0001 | - |
| 1.8568 | 16550 | 0.0001 | - |
| 1.8580 | 16560 | 0.0003 | - |
| 1.8591 | 16570 | 0.0001 | - |
| 1.8602 | 16580 | 0.0002 | - |
| 1.8613 | 16590 | 0.0008 | - |
| 1.8624 | 16600 | 0.0001 | - |
| 1.8636 | 16610 | 0.0001 | - |
| 1.8647 | 16620 | 0.0002 | - |
| 1.8658 | 16630 | 0.0 | - |
| 1.8669 | 16640 | 0.0001 | - |
| 1.8681 | 16650 | 0.0 | - |
| 1.8692 | 16660 | 0.0 | - |
| 1.8703 | 16670 | 0.0001 | - |
| 1.8714 | 16680 | 0.0001 | - |
| 1.8725 | 16690 | 0.0 | - |
| 1.8737 | 16700 | 0.0 | - |
| 1.8748 | 16710 | 0.0 | - |
| 1.8759 | 16720 | 0.0 | - |
| 1.8770 | 16730 | 0.0001 | - |
| 1.8782 | 16740 | 0.0001 | - |
| 1.8793 | 16750 | 0.0 | - |
| 1.8804 | 16760 | 0.0012 | - |
| 1.8815 | 16770 | 0.0001 | - |
| 1.8826 | 16780 | 0.0001 | - |
| 1.8838 | 16790 | 0.0001 | - |
| 1.8849 | 16800 | 0.0005 | - |
| 1.8860 | 16810 | 0.0 | - |
| 1.8871 | 16820 | 0.0002 | - |
| 1.8883 | 16830 | 0.0001 | - |
| 1.8894 | 16840 | 0.0 | - |
| 1.8905 | 16850 | 0.0001 | - |
| 1.8916 | 16860 | 0.0005 | - |
| 1.8927 | 16870 | 0.0 | - |
| 1.8939 | 16880 | 0.0002 | - |
| 1.8950 | 16890 | 0.0001 | - |
| 1.8961 | 16900 | 0.0001 | - |
| 1.8972 | 16910 | 0.0001 | - |
| 1.8984 | 16920 | 0.0001 | - |
| 1.8995 | 16930 | 0.0 | - |
| 1.9006 | 16940 | 0.0001 | - |
| 1.9017 | 16950 | 0.0001 | - |
| 1.9028 | 16960 | 0.0001 | - |
| 1.9040 | 16970 | 0.0001 | - |
| 1.9051 | 16980 | 0.0 | - |
| 1.9062 | 16990 | 0.0001 | - |
| 1.9073 | 17000 | 0.0001 | - |
| 1.9084 | 17010 | 0.0 | - |
| 1.9096 | 17020 | 0.0 | - |
| 1.9107 | 17030 | 0.0002 | - |
| 1.9118 | 17040 | 0.0025 | - |
| 1.9129 | 17050 | 0.0 | - |
| 1.9141 | 17060 | 0.0001 | - |
| 1.9152 | 17070 | 0.0001 | - |
| 1.9163 | 17080 | 0.0 | - |
| 1.9174 | 17090 | 0.0001 | - |
| 1.9185 | 17100 | 0.0 | - |
| 1.9197 | 17110 | 0.0006 | - |
| 1.9208 | 17120 | 0.0009 | - |
| 1.9219 | 17130 | 0.0007 | - |
| 1.9230 | 17140 | 0.0 | - |
| 1.9242 | 17150 | 0.0001 | - |
| 1.9253 | 17160 | 0.0 | - |
| 1.9264 | 17170 | 0.0 | - |
| 1.9275 | 17180 | 0.0001 | - |
| 1.9286 | 17190 | 0.0001 | - |
| 1.9298 | 17200 | 0.0001 | - |
| 1.9309 | 17210 | 0.0002 | - |
| 1.9320 | 17220 | 0.0 | - |
| 1.9331 | 17230 | 0.0011 | - |
| 1.9343 | 17240 | 0.0002 | - |
| 1.9354 | 17250 | 0.0001 | - |
| 1.9365 | 17260 | 0.0 | - |
| 1.9376 | 17270 | 0.0002 | - |
| 1.9387 | 17280 | 0.0002 | - |
| 1.9399 | 17290 | 0.0 | - |
| 1.9410 | 17300 | 0.0001 | - |
| 1.9421 | 17310 | 0.0001 | - |
| 1.9432 | 17320 | 0.0005 | - |
| 1.9444 | 17330 | 0.001 | - |
| 1.9455 | 17340 | 0.0 | - |
| 1.9466 | 17350 | 0.0 | - |
| 1.9477 | 17360 | 0.0 | - |
| 1.9488 | 17370 | 0.0 | - |
| 1.9500 | 17380 | 0.0 | - |
| 1.9511 | 17390 | 0.0004 | - |
| 1.9522 | 17400 | 0.0 | - |
| 1.9533 | 17410 | 0.0001 | - |
| 1.9544 | 17420 | 0.0003 | - |
| 1.9556 | 17430 | 0.0 | - |
| 1.9567 | 17440 | 0.0001 | - |
| 1.9578 | 17450 | 0.0001 | - |
| 1.9589 | 17460 | 0.0001 | - |
| 1.9601 | 17470 | 0.0001 | - |
| 1.9612 | 17480 | 0.0012 | - |
| 1.9623 | 17490 | 0.0009 | - |
| 1.9634 | 17500 | 0.0001 | - |
| 1.9645 | 17510 | 0.0002 | - |
| 1.9657 | 17520 | 0.0 | - |
| 1.9668 | 17530 | 0.0001 | - |
| 1.9679 | 17540 | 0.0001 | - |
| 1.9690 | 17550 | 0.0004 | - |
| 1.9702 | 17560 | 0.0002 | - |
| 1.9713 | 17570 | 0.0003 | - |
| 1.9724 | 17580 | 0.0001 | - |
| 1.9735 | 17590 | 0.0 | - |
| 1.9746 | 17600 | 0.0003 | - |
| 1.9758 | 17610 | 0.0002 | - |
| 1.9769 | 17620 | 0.0001 | - |
| 1.9780 | 17630 | 0.0002 | - |
| 1.9791 | 17640 | 0.0001 | - |
| 1.9803 | 17650 | 0.0 | - |
| 1.9814 | 17660 | 0.0001 | - |
| 1.9825 | 17670 | 0.0002 | - |
| 1.9836 | 17680 | 0.0 | - |
| 1.9847 | 17690 | 0.0 | - |
| 1.9859 | 17700 | 0.0002 | - |
| 1.9870 | 17710 | 0.0 | - |
| 1.9881 | 17720 | 0.0 | - |
| 1.9892 | 17730 | 0.0001 | - |
| 1.9904 | 17740 | 0.0003 | - |
| 1.9915 | 17750 | 0.0 | - |
| 1.9926 | 17760 | 0.0006 | - |
| 1.9937 | 17770 | 0.0001 | - |
| 1.9948 | 17780 | 0.0 | - |
| 1.9960 | 17790 | 0.0005 | - |
| 1.9971 | 17800 | 0.0009 | - |
| 1.9982 | 17810 | 0.0008 | - |
| 1.9993 | 17820 | 0.0001 | - |
| 2.0 | 17826 | - | 0.0282 |
| 2.0004 | 17830 | 0.0002 | - |
| 2.0016 | 17840 | 0.0001 | - |
| 2.0027 | 17850 | 0.0001 | - |
| 2.0038 | 17860 | 0.0 | - |
| 2.0049 | 17870 | 0.0001 | - |
| 2.0061 | 17880 | 0.0003 | - |
| 2.0072 | 17890 | 0.0002 | - |
| 2.0083 | 17900 | 0.0002 | - |
| 2.0094 | 17910 | 0.0001 | - |
| 2.0105 | 17920 | 0.0 | - |
| 2.0117 | 17930 | 0.0001 | - |
| 2.0128 | 17940 | 0.0005 | - |
| 2.0139 | 17950 | 0.0003 | - |
| 2.0150 | 17960 | 0.0 | - |
| 2.0162 | 17970 | 0.0002 | - |
| 2.0173 | 17980 | 0.0005 | - |
| 2.0184 | 17990 | 0.0002 | - |
| 2.0195 | 18000 | 0.0012 | - |
| 2.0206 | 18010 | 0.0011 | - |
| 2.0218 | 18020 | 0.0006 | - |
| 2.0229 | 18030 | 0.0001 | - |
| 2.0240 | 18040 | 0.0 | - |
| 2.0251 | 18050 | 0.0002 | - |
| 2.0263 | 18060 | 0.0003 | - |
| 2.0274 | 18070 | 0.0 | - |
| 2.0285 | 18080 | 0.0002 | - |
| 2.0296 | 18090 | 0.0 | - |
| 2.0307 | 18100 | 0.0001 | - |
| 2.0319 | 18110 | 0.0003 | - |
| 2.0330 | 18120 | 0.0 | - |
| 2.0341 | 18130 | 0.0001 | - |
| 2.0352 | 18140 | 0.0 | - |
| 2.0364 | 18150 | 0.0001 | - |
| 2.0375 | 18160 | 0.0002 | - |
| 2.0386 | 18170 | 0.0002 | - |
| 2.0397 | 18180 | 0.0003 | - |
| 2.0408 | 18190 | 0.0001 | - |
| 2.0420 | 18200 | 0.0004 | - |
| 2.0431 | 18210 | 0.0005 | - |
| 2.0442 | 18220 | 0.0 | - |
| 2.0453 | 18230 | 0.0003 | - |
| 2.0464 | 18240 | 0.0003 | - |
| 2.0476 | 18250 | 0.0 | - |
| 2.0487 | 18260 | 0.0002 | - |
| 2.0498 | 18270 | 0.0001 | - |
| 2.0509 | 18280 | 0.0002 | - |
| 2.0521 | 18290 | 0.0006 | - |
| 2.0532 | 18300 | 0.0002 | - |
| 2.0543 | 18310 | 0.0005 | - |
| 2.0554 | 18320 | 0.0001 | - |
| 2.0565 | 18330 | 0.0004 | - |
| 2.0577 | 18340 | 0.0 | - |
| 2.0588 | 18350 | 0.001 | - |
| 2.0599 | 18360 | 0.002 | - |
| 2.0610 | 18370 | 0.0003 | - |
| 2.0622 | 18380 | 0.0009 | - |
| 2.0633 | 18390 | 0.0001 | - |
| 2.0644 | 18400 | 0.0001 | - |
| 2.0655 | 18410 | 0.0003 | - |
| 2.0666 | 18420 | 0.0012 | - |
| 2.0678 | 18430 | 0.0009 | - |
| 2.0689 | 18440 | 0.0001 | - |
| 2.0700 | 18450 | 0.0001 | - |
| 2.0711 | 18460 | 0.0 | - |
| 2.0723 | 18470 | 0.0002 | - |
| 2.0734 | 18480 | 0.0006 | - |
| 2.0745 | 18490 | 0.0001 | - |
| 2.0756 | 18500 | 0.0001 | - |
| 2.0767 | 18510 | 0.0 | - |
| 2.0779 | 18520 | 0.0001 | - |
| 2.0790 | 18530 | 0.0002 | - |
| 2.0801 | 18540 | 0.0001 | - |
| 2.0812 | 18550 | 0.0 | - |
| 2.0824 | 18560 | 0.0004 | - |
| 2.0835 | 18570 | 0.0001 | - |
| 2.0846 | 18580 | 0.0034 | - |
| 2.0857 | 18590 | 0.0001 | - |
| 2.0868 | 18600 | 0.0002 | - |
| 2.0880 | 18610 | 0.0001 | - |
| 2.0891 | 18620 | 0.0002 | - |
| 2.0902 | 18630 | 0.0001 | - |
| 2.0913 | 18640 | 0.0001 | - |
| 2.0924 | 18650 | 0.0 | - |
| 2.0936 | 18660 | 0.0001 | - |
| 2.0947 | 18670 | 0.0001 | - |
| 2.0958 | 18680 | 0.0002 | - |
| 2.0969 | 18690 | 0.0001 | - |
| 2.0981 | 18700 | 0.002 | - |
| 2.0992 | 18710 | 0.0001 | - |
| 2.1003 | 18720 | 0.0001 | - |
| 2.1014 | 18730 | 0.0004 | - |
| 2.1025 | 18740 | 0.0004 | - |
| 2.1037 | 18750 | 0.0001 | - |
| 2.1048 | 18760 | 0.0003 | - |
| 2.1059 | 18770 | 0.0006 | - |
| 2.1070 | 18780 | 0.0001 | - |
| 2.1082 | 18790 | 0.0001 | - |
| 2.1093 | 18800 | 0.0003 | - |
| 2.1104 | 18810 | 0.0 | - |
| 2.1115 | 18820 | 0.0 | - |
| 2.1126 | 18830 | 0.0 | - |
| 2.1138 | 18840 | 0.0001 | - |
| 2.1149 | 18850 | 0.0003 | - |
| 2.1160 | 18860 | 0.0012 | - |
| 2.1171 | 18870 | 0.0001 | - |
| 2.1183 | 18880 | 0.0 | - |
| 2.1194 | 18890 | 0.0001 | - |
| 2.1205 | 18900 | 0.0001 | - |
| 2.1216 | 18910 | 0.0001 | - |
| 2.1227 | 18920 | 0.0 | - |
| 2.1239 | 18930 | 0.0 | - |
| 2.1250 | 18940 | 0.0002 | - |
| 2.1261 | 18950 | 0.0001 | - |
| 2.1272 | 18960 | 0.0 | - |
| 2.1284 | 18970 | 0.0 | - |
| 2.1295 | 18980 | 0.0012 | - |
| 2.1306 | 18990 | 0.0 | - |
| 2.1317 | 19000 | 0.0002 | - |
| 2.1328 | 19010 | 0.0002 | - |
| 2.1340 | 19020 | 0.0002 | - |
| 2.1351 | 19030 | 0.0001 | - |
| 2.1362 | 19040 | 0.0002 | - |
| 2.1373 | 19050 | 0.0001 | - |
| 2.1384 | 19060 | 0.0004 | - |
| 2.1396 | 19070 | 0.0003 | - |
| 2.1407 | 19080 | 0.0 | - |
| 2.1418 | 19090 | 0.0003 | - |
| 2.1429 | 19100 | 0.0001 | - |
| 2.1441 | 19110 | 0.0001 | - |
| 2.1452 | 19120 | 0.0002 | - |
| 2.1463 | 19130 | 0.0007 | - |
| 2.1474 | 19140 | 0.0002 | - |
| 2.1485 | 19150 | 0.0002 | - |
| 2.1497 | 19160 | 0.0 | - |
| 2.1508 | 19170 | 0.0001 | - |
| 2.1519 | 19180 | 0.0 | - |
| 2.1530 | 19190 | 0.0 | - |
| 2.1542 | 19200 | 0.0005 | - |
| 2.1553 | 19210 | 0.0001 | - |
| 2.1564 | 19220 | 0.0001 | - |
| 2.1575 | 19230 | 0.0001 | - |
| 2.1586 | 19240 | 0.0004 | - |
| 2.1598 | 19250 | 0.0 | - |
| 2.1609 | 19260 | 0.0001 | - |
| 2.1620 | 19270 | 0.0003 | - |
| 2.1631 | 19280 | 0.0 | - |
| 2.1643 | 19290 | 0.0003 | - |
| 2.1654 | 19300 | 0.0 | - |
| 2.1665 | 19310 | 0.0027 | - |
| 2.1676 | 19320 | 0.0001 | - |
| 2.1687 | 19330 | 0.0 | - |
| 2.1699 | 19340 | 0.0003 | - |
| 2.1710 | 19350 | 0.0 | - |
| 2.1721 | 19360 | 0.0 | - |
| 2.1732 | 19370 | 0.0001 | - |
| 2.1744 | 19380 | 0.0001 | - |
| 2.1755 | 19390 | 0.0 | - |
| 2.1766 | 19400 | 0.0001 | - |
| 2.1777 | 19410 | 0.0001 | - |
| 2.1788 | 19420 | 0.0001 | - |
| 2.1800 | 19430 | 0.0 | - |
| 2.1811 | 19440 | 0.0001 | - |
| 2.1822 | 19450 | 0.0004 | - |
| 2.1833 | 19460 | 0.0001 | - |
| 2.1844 | 19470 | 0.0 | - |
| 2.1856 | 19480 | 0.0001 | - |
| 2.1867 | 19490 | 0.0001 | - |
| 2.1878 | 19500 | 0.0001 | - |
| 2.1889 | 19510 | 0.0002 | - |
| 2.1901 | 19520 | 0.0003 | - |
| 2.1912 | 19530 | 0.0002 | - |
| 2.1923 | 19540 | 0.0001 | - |
| 2.1934 | 19550 | 0.0001 | - |
| 2.1945 | 19560 | 0.0005 | - |
| 2.1957 | 19570 | 0.0001 | - |
| 2.1968 | 19580 | 0.0001 | - |
| 2.1979 | 19590 | 0.0002 | - |
| 2.1990 | 19600 | 0.0001 | - |
| 2.2002 | 19610 | 0.0 | - |
| 2.2013 | 19620 | 0.0 | - |
| 2.2024 | 19630 | 0.0001 | - |
| 2.2035 | 19640 | 0.0003 | - |
| 2.2046 | 19650 | 0.0005 | - |
| 2.2058 | 19660 | 0.0 | - |
| 2.2069 | 19670 | 0.0001 | - |
| 2.2080 | 19680 | 0.0 | - |
| 2.2091 | 19690 | 0.0 | - |
| 2.2103 | 19700 | 0.0006 | - |
| 2.2114 | 19710 | 0.0025 | - |
| 2.2125 | 19720 | 0.0001 | - |
| 2.2136 | 19730 | 0.0006 | - |
| 2.2147 | 19740 | 0.0001 | - |
| 2.2159 | 19750 | 0.0001 | - |
| 2.2170 | 19760 | 0.0001 | - |
| 2.2181 | 19770 | 0.0002 | - |
| 2.2192 | 19780 | 0.0001 | - |
| 2.2204 | 19790 | 0.0003 | - |
| 2.2215 | 19800 | 0.0001 | - |
| 2.2226 | 19810 | 0.0001 | - |
| 2.2237 | 19820 | 0.0003 | - |
| 2.2248 | 19830 | 0.0001 | - |
| 2.2260 | 19840 | 0.0001 | - |
| 2.2271 | 19850 | 0.0003 | - |
| 2.2282 | 19860 | 0.0107 | - |
| 2.2293 | 19870 | 0.0066 | - |
| 2.2304 | 19880 | 0.0001 | - |
| 2.2316 | 19890 | 0.0 | - |
| 2.2327 | 19900 | 0.0 | - |
| 2.2338 | 19910 | 0.0001 | - |
| 2.2349 | 19920 | 0.0008 | - |
| 2.2361 | 19930 | 0.0 | - |
| 2.2372 | 19940 | 0.0001 | - |
| 2.2383 | 19950 | 0.0005 | - |
| 2.2394 | 19960 | 0.0 | - |
| 2.2405 | 19970 | 0.0028 | - |
| 2.2417 | 19980 | 0.0001 | - |
| 2.2428 | 19990 | 0.0 | - |
| 2.2439 | 20000 | 0.0001 | - |
| 2.2450 | 20010 | 0.0 | - |
| 2.2462 | 20020 | 0.0001 | - |
| 2.2473 | 20030 | 0.0002 | - |
| 2.2484 | 20040 | 0.0001 | - |
| 2.2495 | 20050 | 0.0003 | - |
| 2.2506 | 20060 | 0.0003 | - |
| 2.2518 | 20070 | 0.0001 | - |
| 2.2529 | 20080 | 0.0001 | - |
| 2.2540 | 20090 | 0.0001 | - |
| 2.2551 | 20100 | 0.0001 | - |
| 2.2563 | 20110 | 0.0002 | - |
| 2.2574 | 20120 | 0.0 | - |
| 2.2585 | 20130 | 0.0 | - |
| 2.2596 | 20140 | 0.0 | - |
| 2.2607 | 20150 | 0.0001 | - |
| 2.2619 | 20160 | 0.0 | - |
| 2.2630 | 20170 | 0.0 | - |
| 2.2641 | 20180 | 0.0 | - |
| 2.2652 | 20190 | 0.0 | - |
| 2.2664 | 20200 | 0.0001 | - |
| 2.2675 | 20210 | 0.0003 | - |
| 2.2686 | 20220 | 0.0001 | - |
| 2.2697 | 20230 | 0.0 | - |
| 2.2708 | 20240 | 0.0001 | - |
| 2.2720 | 20250 | 0.0 | - |
| 2.2731 | 20260 | 0.0001 | - |
| 2.2742 | 20270 | 0.0001 | - |
| 2.2753 | 20280 | 0.0001 | - |
| 2.2765 | 20290 | 0.0007 | - |
| 2.2776 | 20300 | 0.0 | - |
| 2.2787 | 20310 | 0.0 | - |
| 2.2798 | 20320 | 0.0006 | - |
| 2.2809 | 20330 | 0.0 | - |
| 2.2821 | 20340 | 0.0001 | - |
| 2.2832 | 20350 | 0.0002 | - |
| 2.2843 | 20360 | 0.0001 | - |
| 2.2854 | 20370 | 0.0001 | - |
| 2.2865 | 20380 | 0.0001 | - |
| 2.2877 | 20390 | 0.0002 | - |
| 2.2888 | 20400 | 0.0002 | - |
| 2.2899 | 20410 | 0.0001 | - |
| 2.2910 | 20420 | 0.0001 | - |
| 2.2922 | 20430 | 0.0 | - |
| 2.2933 | 20440 | 0.0001 | - |
| 2.2944 | 20450 | 0.0001 | - |
| 2.2955 | 20460 | 0.0003 | - |
| 2.2966 | 20470 | 0.0001 | - |
| 2.2978 | 20480 | 0.0 | - |
| 2.2989 | 20490 | 0.0003 | - |
| 2.3000 | 20500 | 0.0002 | - |
| 2.3011 | 20510 | 0.0 | - |
| 2.3023 | 20520 | 0.0002 | - |
| 2.3034 | 20530 | 0.0006 | - |
| 2.3045 | 20540 | 0.0003 | - |
| 2.3056 | 20550 | 0.0002 | - |
| 2.3067 | 20560 | 0.0 | - |
| 2.3079 | 20570 | 0.0003 | - |
| 2.3090 | 20580 | 0.0 | - |
| 2.3101 | 20590 | 0.0001 | - |
| 2.3112 | 20600 | 0.0003 | - |
| 2.3124 | 20610 | 0.0001 | - |
| 2.3135 | 20620 | 0.0002 | - |
| 2.3146 | 20630 | 0.0001 | - |
| 2.3157 | 20640 | 0.001 | - |
| 2.3168 | 20650 | 0.0001 | - |
| 2.3180 | 20660 | 0.0001 | - |
| 2.3191 | 20670 | 0.0012 | - |
| 2.3202 | 20680 | 0.0 | - |
| 2.3213 | 20690 | 0.0001 | - |
| 2.3225 | 20700 | 0.0 | - |
| 2.3236 | 20710 | 0.0 | - |
| 2.3247 | 20720 | 0.0001 | - |
| 2.3258 | 20730 | 0.0008 | - |
| 2.3269 | 20740 | 0.0003 | - |
| 2.3281 | 20750 | 0.0011 | - |
| 2.3292 | 20760 | 0.0002 | - |
| 2.3303 | 20770 | 0.0001 | - |
| 2.3314 | 20780 | 0.0003 | - |
| 2.3325 | 20790 | 0.0004 | - |
| 2.3337 | 20800 | 0.0003 | - |
| 2.3348 | 20810 | 0.0001 | - |
| 2.3359 | 20820 | 0.0 | - |
| 2.3370 | 20830 | 0.0 | - |
| 2.3382 | 20840 | 0.0001 | - |
| 2.3393 | 20850 | 0.0004 | - |
| 2.3404 | 20860 | 0.0003 | - |
| 2.3415 | 20870 | 0.0002 | - |
| 2.3426 | 20880 | 0.0 | - |
| 2.3438 | 20890 | 0.0001 | - |
| 2.3449 | 20900 | 0.0 | - |
| 2.3460 | 20910 | 0.0 | - |
| 2.3471 | 20920 | 0.0001 | - |
| 2.3483 | 20930 | 0.0 | - |
| 2.3494 | 20940 | 0.0 | - |
| 2.3505 | 20950 | 0.0001 | - |
| 2.3516 | 20960 | 0.0 | - |
| 2.3527 | 20970 | 0.0 | - |
| 2.3539 | 20980 | 0.0 | - |
| 2.3550 | 20990 | 0.0001 | - |
| 2.3561 | 21000 | 0.0007 | - |
| 2.3572 | 21010 | 0.0001 | - |
| 2.3584 | 21020 | 0.0002 | - |
| 2.3595 | 21030 | 0.0 | - |
| 2.3606 | 21040 | 0.0002 | - |
| 2.3617 | 21050 | 0.0001 | - |
| 2.3628 | 21060 | 0.0001 | - |
| 2.3640 | 21070 | 0.0002 | - |
| 2.3651 | 21080 | 0.0002 | - |
| 2.3662 | 21090 | 0.0002 | - |
| 2.3673 | 21100 | 0.0 | - |
| 2.3685 | 21110 | 0.0001 | - |
| 2.3696 | 21120 | 0.0 | - |
| 2.3707 | 21130 | 0.0003 | - |
| 2.3718 | 21140 | 0.0001 | - |
| 2.3729 | 21150 | 0.0001 | - |
| 2.3741 | 21160 | 0.0002 | - |
| 2.3752 | 21170 | 0.0 | - |
| 2.3763 | 21180 | 0.0 | - |
| 2.3774 | 21190 | 0.0001 | - |
| 2.3785 | 21200 | 0.0 | - |
| 2.3797 | 21210 | 0.0003 | - |
| 2.3808 | 21220 | 0.0 | - |
| 2.3819 | 21230 | 0.0 | - |
| 2.3830 | 21240 | 0.0 | - |
| 2.3842 | 21250 | 0.0 | - |
| 2.3853 | 21260 | 0.0003 | - |
| 2.3864 | 21270 | 0.0003 | - |
| 2.3875 | 21280 | 0.0 | - |
| 2.3886 | 21290 | 0.0002 | - |
| 2.3898 | 21300 | 0.0001 | - |
| 2.3909 | 21310 | 0.0 | - |
| 2.3920 | 21320 | 0.0 | - |
| 2.3931 | 21330 | 0.0005 | - |
| 2.3943 | 21340 | 0.0001 | - |
| 2.3954 | 21350 | 0.0002 | - |
| 2.3965 | 21360 | 0.0006 | - |
| 2.3976 | 21370 | 0.0001 | - |
| 2.3987 | 21380 | 0.0001 | - |
| 2.3999 | 21390 | 0.0 | - |
| 2.4010 | 21400 | 0.0001 | - |
| 2.4021 | 21410 | 0.0001 | - |
| 2.4032 | 21420 | 0.0005 | - |
| 2.4044 | 21430 | 0.0004 | - |
| 2.4055 | 21440 | 0.0001 | - |
| 2.4066 | 21450 | 0.0001 | - |
| 2.4077 | 21460 | 0.0 | - |
| 2.4088 | 21470 | 0.0 | - |
| 2.4100 | 21480 | 0.0004 | - |
| 2.4111 | 21490 | 0.0002 | - |
| 2.4122 | 21500 | 0.0003 | - |
| 2.4133 | 21510 | 0.0001 | - |
| 2.4145 | 21520 | 0.0001 | - |
| 2.4156 | 21530 | 0.0002 | - |
| 2.4167 | 21540 | 0.0001 | - |
| 2.4178 | 21550 | 0.0002 | - |
| 2.4189 | 21560 | 0.0001 | - |
| 2.4201 | 21570 | 0.0001 | - |
| 2.4212 | 21580 | 0.0001 | - |
| 2.4223 | 21590 | 0.0002 | - |
| 2.4234 | 21600 | 0.0 | - |
| 2.4245 | 21610 | 0.0001 | - |
| 2.4257 | 21620 | 0.0004 | - |
| 2.4268 | 21630 | 0.0009 | - |
| 2.4279 | 21640 | 0.0001 | - |
| 2.4290 | 21650 | 0.0001 | - |
| 2.4302 | 21660 | 0.0001 | - |
| 2.4313 | 21670 | 0.0001 | - |
| 2.4324 | 21680 | 0.0 | - |
| 2.4335 | 21690 | 0.0002 | - |
| 2.4346 | 21700 | 0.0001 | - |
| 2.4358 | 21710 | 0.0001 | - |
| 2.4369 | 21720 | 0.0 | - |
| 2.4380 | 21730 | 0.0001 | - |
| 2.4391 | 21740 | 0.0003 | - |
| 2.4403 | 21750 | 0.0001 | - |
| 2.4414 | 21760 | 0.0001 | - |
| 2.4425 | 21770 | 0.0 | - |
| 2.4436 | 21780 | 0.0 | - |
| 2.4447 | 21790 | 0.0 | - |
| 2.4459 | 21800 | 0.0001 | - |
| 2.4470 | 21810 | 0.0 | - |
| 2.4481 | 21820 | 0.0 | - |
| 2.4492 | 21830 | 0.0001 | - |
| 2.4504 | 21840 | 0.0005 | - |
| 2.4515 | 21850 | 0.0 | - |
| 2.4526 | 21860 | 0.0001 | - |
| 2.4537 | 21870 | 0.0001 | - |
| 2.4548 | 21880 | 0.0001 | - |
| 2.4560 | 21890 | 0.0001 | - |
| 2.4571 | 21900 | 0.0 | - |
| 2.4582 | 21910 | 0.0002 | - |
| 2.4593 | 21920 | 0.0001 | - |
| 2.4605 | 21930 | 0.0004 | - |
| 2.4616 | 21940 | 0.0026 | - |
| 2.4627 | 21950 | 0.0001 | - |
| 2.4638 | 21960 | 0.0004 | - |
| 2.4649 | 21970 | 0.0001 | - |
| 2.4661 | 21980 | 0.0002 | - |
| 2.4672 | 21990 | 0.0 | - |
| 2.4683 | 22000 | 0.0001 | - |
| 2.4694 | 22010 | 0.0001 | - |
| 2.4705 | 22020 | 0.0 | - |
| 2.4717 | 22030 | 0.0001 | - |
| 2.4728 | 22040 | 0.0001 | - |
| 2.4739 | 22050 | 0.0014 | - |
| 2.4750 | 22060 | 0.0001 | - |
| 2.4762 | 22070 | 0.0006 | - |
| 2.4773 | 22080 | 0.0 | - |
| 2.4784 | 22090 | 0.0002 | - |
| 2.4795 | 22100 | 0.0 | - |
| 2.4806 | 22110 | 0.0001 | - |
| 2.4818 | 22120 | 0.0 | - |
| 2.4829 | 22130 | 0.0001 | - |
| 2.4840 | 22140 | 0.0001 | - |
| 2.4851 | 22150 | 0.0001 | - |
| 2.4863 | 22160 | 0.0 | - |
| 2.4874 | 22170 | 0.0003 | - |
| 2.4885 | 22180 | 0.0 | - |
| 2.4896 | 22190 | 0.0002 | - |
| 2.4907 | 22200 | 0.0 | - |
| 2.4919 | 22210 | 0.0012 | - |
| 2.4930 | 22220 | 0.0 | - |
| 2.4941 | 22230 | 0.0001 | - |
| 2.4952 | 22240 | 0.0 | - |
| 2.4964 | 22250 | 0.0001 | - |
| 2.4975 | 22260 | 0.0002 | - |
| 2.4986 | 22270 | 0.0 | - |
| 2.4997 | 22280 | 0.0001 | - |
| 2.5008 | 22290 | 0.0002 | - |
| 2.5020 | 22300 | 0.0 | - |
| 2.5031 | 22310 | 0.0001 | - |
| 2.5042 | 22320 | 0.0 | - |
| 2.5053 | 22330 | 0.0001 | - |
| 2.5065 | 22340 | 0.0003 | - |
| 2.5076 | 22350 | 0.0 | - |
| 2.5087 | 22360 | 0.0001 | - |
| 2.5098 | 22370 | 0.0001 | - |
| 2.5109 | 22380 | 0.0001 | - |
| 2.5121 | 22390 | 0.0 | - |
| 2.5132 | 22400 | 0.0001 | - |
| 2.5143 | 22410 | 0.0001 | - |
| 2.5154 | 22420 | 0.0 | - |
| 2.5165 | 22430 | 0.0001 | - |
| 2.5177 | 22440 | 0.0 | - |
| 2.5188 | 22450 | 0.0001 | - |
| 2.5199 | 22460 | 0.0 | - |
| 2.5210 | 22470 | 0.0001 | - |
| 2.5222 | 22480 | 0.0001 | - |
| 2.5233 | 22490 | 0.0003 | - |
| 2.5244 | 22500 | 0.0001 | - |
| 2.5255 | 22510 | 0.0 | - |
| 2.5266 | 22520 | 0.0 | - |
| 2.5278 | 22530 | 0.0004 | - |
| 2.5289 | 22540 | 0.0001 | - |
| 2.5300 | 22550 | 0.0002 | - |
| 2.5311 | 22560 | 0.0 | - |
| 2.5323 | 22570 | 0.0 | - |
| 2.5334 | 22580 | 0.0001 | - |
| 2.5345 | 22590 | 0.0002 | - |
| 2.5356 | 22600 | 0.0 | - |
| 2.5367 | 22610 | 0.0002 | - |
| 2.5379 | 22620 | 0.0002 | - |
| 2.5390 | 22630 | 0.0 | - |
| 2.5401 | 22640 | 0.0001 | - |
| 2.5412 | 22650 | 0.0 | - |
| 2.5424 | 22660 | 0.0001 | - |
| 2.5435 | 22670 | 0.0 | - |
| 2.5446 | 22680 | 0.0 | - |
| 2.5457 | 22690 | 0.0001 | - |
| 2.5468 | 22700 | 0.0002 | - |
| 2.5480 | 22710 | 0.0 | - |
| 2.5491 | 22720 | 0.0 | - |
| 2.5502 | 22730 | 0.0001 | - |
| 2.5513 | 22740 | 0.0005 | - |
| 2.5525 | 22750 | 0.0004 | - |
| 2.5536 | 22760 | 0.0002 | - |
| 2.5547 | 22770 | 0.0001 | - |
| 2.5558 | 22780 | 0.0001 | - |
| 2.5569 | 22790 | 0.0008 | - |
| 2.5581 | 22800 | 0.0002 | - |
| 2.5592 | 22810 | 0.0001 | - |
| 2.5603 | 22820 | 0.0 | - |
| 2.5614 | 22830 | 0.0 | - |
| 2.5625 | 22840 | 0.0002 | - |
| 2.5637 | 22850 | 0.0001 | - |
| 2.5648 | 22860 | 0.0003 | - |
| 2.5659 | 22870 | 0.0 | - |
| 2.5670 | 22880 | 0.0 | - |
| 2.5682 | 22890 | 0.0004 | - |
| 2.5693 | 22900 | 0.0001 | - |
| 2.5704 | 22910 | 0.0004 | - |
| 2.5715 | 22920 | 0.0001 | - |
| 2.5726 | 22930 | 0.0001 | - |
| 2.5738 | 22940 | 0.0001 | - |
| 2.5749 | 22950 | 0.0 | - |
| 2.5760 | 22960 | 0.0002 | - |
| 2.5771 | 22970 | 0.0 | - |
| 2.5783 | 22980 | 0.0001 | - |
| 2.5794 | 22990 | 0.0001 | - |
| 2.5805 | 23000 | 0.0002 | - |
| 2.5816 | 23010 | 0.0001 | - |
| 2.5827 | 23020 | 0.0001 | - |
| 2.5839 | 23030 | 0.0 | - |
| 2.5850 | 23040 | 0.0 | - |
| 2.5861 | 23050 | 0.0001 | - |
| 2.5872 | 23060 | 0.0 | - |
| 2.5884 | 23070 | 0.0001 | - |
| 2.5895 | 23080 | 0.0 | - |
| 2.5906 | 23090 | 0.0 | - |
| 2.5917 | 23100 | 0.0001 | - |
| 2.5928 | 23110 | 0.0001 | - |
| 2.5940 | 23120 | 0.0001 | - |
| 2.5951 | 23130 | 0.0001 | - |
| 2.5962 | 23140 | 0.0001 | - |
| 2.5973 | 23150 | 0.0001 | - |
| 2.5985 | 23160 | 0.0003 | - |
| 2.5996 | 23170 | 0.0 | - |
| 2.6007 | 23180 | 0.0001 | - |
| 2.6018 | 23190 | 0.0 | - |
| 2.6029 | 23200 | 0.0001 | - |
| 2.6041 | 23210 | 0.0001 | - |
| 2.6052 | 23220 | 0.0 | - |
| 2.6063 | 23230 | 0.0 | - |
| 2.6074 | 23240 | 0.0001 | - |
| 2.6085 | 23250 | 0.0001 | - |
| 2.6097 | 23260 | 0.0 | - |
| 2.6108 | 23270 | 0.0001 | - |
| 2.6119 | 23280 | 0.0001 | - |
| 2.6130 | 23290 | 0.0 | - |
| 2.6142 | 23300 | 0.0002 | - |
| 2.6153 | 23310 | 0.0003 | - |
| 2.6164 | 23320 | 0.0 | - |
| 2.6175 | 23330 | 0.0002 | - |
| 2.6186 | 23340 | 0.0001 | - |
| 2.6198 | 23350 | 0.0001 | - |
| 2.6209 | 23360 | 0.0 | - |
| 2.6220 | 23370 | 0.0 | - |
| 2.6231 | 23380 | 0.0002 | - |
| 2.6243 | 23390 | 0.0001 | - |
| 2.6254 | 23400 | 0.0001 | - |
| 2.6265 | 23410 | 0.0001 | - |
| 2.6276 | 23420 | 0.0001 | - |
| 2.6287 | 23430 | 0.0001 | - |
| 2.6299 | 23440 | 0.0002 | - |
| 2.6310 | 23450 | 0.0 | - |
| 2.6321 | 23460 | 0.0 | - |
| 2.6332 | 23470 | 0.0001 | - |
| 2.6344 | 23480 | 0.0 | - |
| 2.6355 | 23490 | 0.0 | - |
| 2.6366 | 23500 | 0.0003 | - |
| 2.6377 | 23510 | 0.0001 | - |
| 2.6388 | 23520 | 0.0001 | - |
| 2.6400 | 23530 | 0.0001 | - |
| 2.6411 | 23540 | 0.0 | - |
| 2.6422 | 23550 | 0.0 | - |
| 2.6433 | 23560 | 0.0001 | - |
| 2.6445 | 23570 | 0.0 | - |
| 2.6456 | 23580 | 0.0002 | - |
| 2.6467 | 23590 | 0.0 | - |
| 2.6478 | 23600 | 0.0001 | - |
| 2.6489 | 23610 | 0.0 | - |
| 2.6501 | 23620 | 0.0001 | - |
| 2.6512 | 23630 | 0.0001 | - |
| 2.6523 | 23640 | 0.0001 | - |
| 2.6534 | 23650 | 0.0 | - |
| 2.6545 | 23660 | 0.0003 | - |
| 2.6557 | 23670 | 0.0 | - |
| 2.6568 | 23680 | 0.0 | - |
| 2.6579 | 23690 | 0.0001 | - |
| 2.6590 | 23700 | 0.0 | - |
| 2.6602 | 23710 | 0.0001 | - |
| 2.6613 | 23720 | 0.0 | - |
| 2.6624 | 23730 | 0.0001 | - |
| 2.6635 | 23740 | 0.0001 | - |
| 2.6646 | 23750 | 0.0001 | - |
| 2.6658 | 23760 | 0.0 | - |
| 2.6669 | 23770 | 0.0 | - |
| 2.6680 | 23780 | 0.0004 | - |
| 2.6691 | 23790 | 0.0 | - |
| 2.6703 | 23800 | 0.0001 | - |
| 2.6714 | 23810 | 0.0 | - |
| 2.6725 | 23820 | 0.0001 | - |
| 2.6736 | 23830 | 0.0 | - |
| 2.6747 | 23840 | 0.0 | - |
| 2.6759 | 23850 | 0.0 | - |
| 2.6770 | 23860 | 0.0 | - |
| 2.6781 | 23870 | 0.0001 | - |
| 2.6792 | 23880 | 0.0004 | - |
| 2.6804 | 23890 | 0.0001 | - |
| 2.6815 | 23900 | 0.0001 | - |
| 2.6826 | 23910 | 0.0001 | - |
| 2.6837 | 23920 | 0.0001 | - |
| 2.6848 | 23930 | 0.0005 | - |
| 2.6860 | 23940 | 0.0003 | - |
| 2.6871 | 23950 | 0.0005 | - |
| 2.6882 | 23960 | 0.0 | - |
| 2.6893 | 23970 | 0.0 | - |
| 2.6905 | 23980 | 0.0003 | - |
| 2.6916 | 23990 | 0.0001 | - |
| 2.6927 | 24000 | 0.0 | - |
| 2.6938 | 24010 | 0.0 | - |
| 2.6949 | 24020 | 0.0001 | - |
| 2.6961 | 24030 | 0.0006 | - |
| 2.6972 | 24040 | 0.0002 | - |
| 2.6983 | 24050 | 0.0003 | - |
| 2.6994 | 24060 | 0.0001 | - |
| 2.7005 | 24070 | 0.0001 | - |
| 2.7017 | 24080 | 0.0 | - |
| 2.7028 | 24090 | 0.0 | - |
| 2.7039 | 24100 | 0.0001 | - |
| 2.7050 | 24110 | 0.0001 | - |
| 2.7062 | 24120 | 0.0002 | - |
| 2.7073 | 24130 | 0.0 | - |
| 2.7084 | 24140 | 0.0 | - |
| 2.7095 | 24150 | 0.0001 | - |
| 2.7106 | 24160 | 0.0 | - |
| 2.7118 | 24170 | 0.0001 | - |
| 2.7129 | 24180 | 0.0 | - |
| 2.7140 | 24190 | 0.0001 | - |
| 2.7151 | 24200 | 0.0 | - |
| 2.7163 | 24210 | 0.0002 | - |
| 2.7174 | 24220 | 0.0 | - |
| 2.7185 | 24230 | 0.0004 | - |
| 2.7196 | 24240 | 0.0003 | - |
| 2.7207 | 24250 | 0.0003 | - |
| 2.7219 | 24260 | 0.0002 | - |
| 2.7230 | 24270 | 0.0 | - |
| 2.7241 | 24280 | 0.0001 | - |
| 2.7252 | 24290 | 0.0 | - |
| 2.7264 | 24300 | 0.0 | - |
| 2.7275 | 24310 | 0.0002 | - |
| 2.7286 | 24320 | 0.0001 | - |
| 2.7297 | 24330 | 0.0 | - |
| 2.7308 | 24340 | 0.0001 | - |
| 2.7320 | 24350 | 0.0001 | - |
| 2.7331 | 24360 | 0.0002 | - |
| 2.7342 | 24370 | 0.0 | - |
| 2.7353 | 24380 | 0.0002 | - |
| 2.7365 | 24390 | 0.0001 | - |
| 2.7376 | 24400 | 0.0002 | - |
| 2.7387 | 24410 | 0.0002 | - |
| 2.7398 | 24420 | 0.0 | - |
| 2.7409 | 24430 | 0.0 | - |
| 2.7421 | 24440 | 0.0001 | - |
| 2.7432 | 24450 | 0.0002 | - |
| 2.7443 | 24460 | 0.0001 | - |
| 2.7454 | 24470 | 0.0002 | - |
| 2.7465 | 24480 | 0.0001 | - |
| 2.7477 | 24490 | 0.0 | - |
| 2.7488 | 24500 | 0.0001 | - |
| 2.7499 | 24510 | 0.0 | - |
| 2.7510 | 24520 | 0.0001 | - |
| 2.7522 | 24530 | 0.0 | - |
| 2.7533 | 24540 | 0.0 | - |
| 2.7544 | 24550 | 0.0001 | - |
| 2.7555 | 24560 | 0.0001 | - |
| 2.7566 | 24570 | 0.0002 | - |
| 2.7578 | 24580 | 0.0001 | - |
| 2.7589 | 24590 | 0.0002 | - |
| 2.7600 | 24600 | 0.0 | - |
| 2.7611 | 24610 | 0.0 | - |
| 2.7623 | 24620 | 0.0006 | - |
| 2.7634 | 24630 | 0.0003 | - |
| 2.7645 | 24640 | 0.0 | - |
| 2.7656 | 24650 | 0.0 | - |
| 2.7667 | 24660 | 0.0 | - |
| 2.7679 | 24670 | 0.0001 | - |
| 2.7690 | 24680 | 0.0001 | - |
| 2.7701 | 24690 | 0.0002 | - |
| 2.7712 | 24700 | 0.0004 | - |
| 2.7724 | 24710 | 0.0 | - |
| 2.7735 | 24720 | 0.0 | - |
| 2.7746 | 24730 | 0.0001 | - |
| 2.7757 | 24740 | 0.0001 | - |
| 2.7768 | 24750 | 0.0 | - |
| 2.7780 | 24760 | 0.0001 | - |
| 2.7791 | 24770 | 0.0 | - |
| 2.7802 | 24780 | 0.0002 | - |
| 2.7813 | 24790 | 0.0 | - |
| 2.7825 | 24800 | 0.0 | - |
| 2.7836 | 24810 | 0.0 | - |
| 2.7847 | 24820 | 0.0 | - |
| 2.7858 | 24830 | 0.0 | - |
| 2.7869 | 24840 | 0.0 | - |
| 2.7881 | 24850 | 0.0004 | - |
| 2.7892 | 24860 | 0.0 | - |
| 2.7903 | 24870 | 0.0 | - |
| 2.7914 | 24880 | 0.0 | - |
| 2.7926 | 24890 | 0.0 | - |
| 2.7937 | 24900 | 0.0002 | - |
| 2.7948 | 24910 | 0.0001 | - |
| 2.7959 | 24920 | 0.0 | - |
| 2.7970 | 24930 | 0.0 | - |
| 2.7982 | 24940 | 0.0001 | - |
| 2.7993 | 24950 | 0.0 | - |
| 2.8004 | 24960 | 0.0 | - |
| 2.8015 | 24970 | 0.0001 | - |
| 2.8026 | 24980 | 0.0 | - |
| 2.8038 | 24990 | 0.0001 | - |
| 2.8049 | 25000 | 0.0 | - |
| 2.8060 | 25010 | 0.0001 | - |
| 2.8071 | 25020 | 0.0001 | - |
| 2.8083 | 25030 | 0.0 | - |
| 2.8094 | 25040 | 0.0 | - |
| 2.8105 | 25050 | 0.0 | - |
| 2.8116 | 25060 | 0.0 | - |
| 2.8127 | 25070 | 0.0001 | - |
| 2.8139 | 25080 | 0.0 | - |
| 2.8150 | 25090 | 0.0001 | - |
| 2.8161 | 25100 | 0.0003 | - |
| 2.8172 | 25110 | 0.0002 | - |
| 2.8184 | 25120 | 0.0 | - |
| 2.8195 | 25130 | 0.0 | - |
| 2.8206 | 25140 | 0.0 | - |
| 2.8217 | 25150 | 0.0 | - |
| 2.8228 | 25160 | 0.0001 | - |
| 2.8240 | 25170 | 0.0001 | - |
| 2.8251 | 25180 | 0.0001 | - |
| 2.8262 | 25190 | 0.0 | - |
| 2.8273 | 25200 | 0.0002 | - |
| 2.8285 | 25210 | 0.0 | - |
| 2.8296 | 25220 | 0.0003 | - |
| 2.8307 | 25230 | 0.0 | - |
| 2.8318 | 25240 | 0.0 | - |
| 2.8329 | 25250 | 0.0 | - |
| 2.8341 | 25260 | 0.0002 | - |
| 2.8352 | 25270 | 0.0 | - |
| 2.8363 | 25280 | 0.0001 | - |
| 2.8374 | 25290 | 0.0001 | - |
| 2.8386 | 25300 | 0.0001 | - |
| 2.8397 | 25310 | 0.0002 | - |
| 2.8408 | 25320 | 0.0002 | - |
| 2.8419 | 25330 | 0.0 | - |
| 2.8430 | 25340 | 0.0 | - |
| 2.8442 | 25350 | 0.0001 | - |
| 2.8453 | 25360 | 0.0 | - |
| 2.8464 | 25370 | 0.0 | - |
| 2.8475 | 25380 | 0.0001 | - |
| 2.8486 | 25390 | 0.0 | - |
| 2.8498 | 25400 | 0.0 | - |
| 2.8509 | 25410 | 0.0001 | - |
| 2.8520 | 25420 | 0.0001 | - |
| 2.8531 | 25430 | 0.0 | - |
| 2.8543 | 25440 | 0.0002 | - |
| 2.8554 | 25450 | 0.0002 | - |
| 2.8565 | 25460 | 0.0001 | - |
| 2.8576 | 25470 | 0.0001 | - |
| 2.8587 | 25480 | 0.0001 | - |
| 2.8599 | 25490 | 0.0001 | - |
| 2.8610 | 25500 | 0.0001 | - |
| 2.8621 | 25510 | 0.0001 | - |
| 2.8632 | 25520 | 0.0 | - |
| 2.8644 | 25530 | 0.0001 | - |
| 2.8655 | 25540 | 0.0 | - |
| 2.8666 | 25550 | 0.0 | - |
| 2.8677 | 25560 | 0.0 | - |
| 2.8688 | 25570 | 0.0 | - |
| 2.8700 | 25580 | 0.0001 | - |
| 2.8711 | 25590 | 0.0001 | - |
| 2.8722 | 25600 | 0.0 | - |
| 2.8733 | 25610 | 0.0001 | - |
| 2.8745 | 25620 | 0.0 | - |
| 2.8756 | 25630 | 0.0001 | - |
| 2.8767 | 25640 | 0.0001 | - |
| 2.8778 | 25650 | 0.0 | - |
| 2.8789 | 25660 | 0.0 | - |
| 2.8801 | 25670 | 0.0001 | - |
| 2.8812 | 25680 | 0.0003 | - |
| 2.8823 | 25690 | 0.0001 | - |
| 2.8834 | 25700 | 0.0 | - |
| 2.8846 | 25710 | 0.0001 | - |
| 2.8857 | 25720 | 0.0 | - |
| 2.8868 | 25730 | 0.0 | - |
| 2.8879 | 25740 | 0.0001 | - |
| 2.8890 | 25750 | 0.0002 | - |
| 2.8902 | 25760 | 0.0001 | - |
| 2.8913 | 25770 | 0.0 | - |
| 2.8924 | 25780 | 0.0002 | - |
| 2.8935 | 25790 | 0.0 | - |
| 2.8946 | 25800 | 0.0 | - |
| 2.8958 | 25810 | 0.0 | - |
| 2.8969 | 25820 | 0.0 | - |
| 2.8980 | 25830 | 0.0001 | - |
| 2.8991 | 25840 | 0.0 | - |
| 2.9003 | 25850 | 0.0001 | - |
| 2.9014 | 25860 | 0.0 | - |
| 2.9025 | 25870 | 0.0002 | - |
| 2.9036 | 25880 | 0.0001 | - |
| 2.9047 | 25890 | 0.0 | - |
| 2.9059 | 25900 | 0.0001 | - |
| 2.9070 | 25910 | 0.0 | - |
| 2.9081 | 25920 | 0.0001 | - |
| 2.9092 | 25930 | 0.0 | - |
| 2.9104 | 25940 | 0.0001 | - |
| 2.9115 | 25950 | 0.0 | - |
| 2.9126 | 25960 | 0.0001 | - |
| 2.9137 | 25970 | 0.0001 | - |
| 2.9148 | 25980 | 0.0001 | - |
| 2.9160 | 25990 | 0.0001 | - |
| 2.9171 | 26000 | 0.0 | - |
| 2.9182 | 26010 | 0.0 | - |
| 2.9193 | 26020 | 0.0001 | - |
| 2.9205 | 26030 | 0.0 | - |
| 2.9216 | 26040 | 0.0001 | - |
| 2.9227 | 26050 | 0.0001 | - |
| 2.9238 | 26060 | 0.0 | - |
| 2.9249 | 26070 | 0.0 | - |
| 2.9261 | 26080 | 0.0002 | - |
| 2.9272 | 26090 | 0.0001 | - |
| 2.9283 | 26100 | 0.0 | - |
| 2.9294 | 26110 | 0.0001 | - |
| 2.9306 | 26120 | 0.0 | - |
| 2.9317 | 26130 | 0.0001 | - |
| 2.9328 | 26140 | 0.0001 | - |
| 2.9339 | 26150 | 0.0001 | - |
| 2.9350 | 26160 | 0.0 | - |
| 2.9362 | 26170 | 0.0 | - |
| 2.9373 | 26180 | 0.0001 | - |
| 2.9384 | 26190 | 0.0001 | - |
| 2.9395 | 26200 | 0.0 | - |
| 2.9406 | 26210 | 0.0001 | - |
| 2.9418 | 26220 | 0.0 | - |
| 2.9429 | 26230 | 0.0 | - |
| 2.9440 | 26240 | 0.0001 | - |
| 2.9451 | 26250 | 0.0 | - |
| 2.9463 | 26260 | 0.0 | - |
| 2.9474 | 26270 | 0.0001 | - |
| 2.9485 | 26280 | 0.0 | - |
| 2.9496 | 26290 | 0.0 | - |
| 2.9507 | 26300 | 0.0003 | - |
| 2.9519 | 26310 | 0.0 | - |
| 2.9530 | 26320 | 0.0 | - |
| 2.9541 | 26330 | 0.0001 | - |
| 2.9552 | 26340 | 0.0 | - |
| 2.9564 | 26350 | 0.0001 | - |
| 2.9575 | 26360 | 0.0001 | - |
| 2.9586 | 26370 | 0.0 | - |
| 2.9597 | 26380 | 0.0001 | - |
| 2.9608 | 26390 | 0.0 | - |
| 2.9620 | 26400 | 0.0001 | - |
| 2.9631 | 26410 | 0.0003 | - |
| 2.9642 | 26420 | 0.0 | - |
| 2.9653 | 26430 | 0.0 | - |
| 2.9665 | 26440 | 0.0 | - |
| 2.9676 | 26450 | 0.0001 | - |
| 2.9687 | 26460 | 0.0001 | - |
| 2.9698 | 26470 | 0.0 | - |
| 2.9709 | 26480 | 0.0 | - |
| 2.9721 | 26490 | 0.0001 | - |
| 2.9732 | 26500 | 0.0001 | - |
| 2.9743 | 26510 | 0.0002 | - |
| 2.9754 | 26520 | 0.0 | - |
| 2.9766 | 26530 | 0.0 | - |
| 2.9777 | 26540 | 0.0001 | - |
| 2.9788 | 26550 | 0.0003 | - |
| 2.9799 | 26560 | 0.0001 | - |
| 2.9810 | 26570 | 0.0 | - |
| 2.9822 | 26580 | 0.0 | - |
| 2.9833 | 26590 | 0.0 | - |
| 2.9844 | 26600 | 0.0001 | - |
| 2.9855 | 26610 | 0.0 | - |
| 2.9866 | 26620 | 0.0001 | - |
| 2.9878 | 26630 | 0.0 | - |
| 2.9889 | 26640 | 0.0001 | - |
| 2.9900 | 26650 | 0.0001 | - |
| 2.9911 | 26660 | 0.0001 | - |
| 2.9923 | 26670 | 0.0 | - |
| 2.9934 | 26680 | 0.0 | - |
| 2.9945 | 26690 | 0.0 | - |
| 2.9956 | 26700 | 0.0001 | - |
| 2.9967 | 26710 | 0.0 | - |
| 2.9979 | 26720 | 0.0 | - |
| 2.9990 | 26730 | 0.0 | - |
| 3.0 | 26739 | - | 0.0346 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 3.0.1
- Transformers: 4.37.0
- PyTorch: 2.4.1+cu121
- Datasets: 3.0.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 34 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| request_asset_utilization_report | <ul><li>'How can I request a utilization report for the projector?'</li><li>"Comment puis-je demander un rapport d'utilisation pour le projecteur?"</li><li>'How do I request a usage report for the printer?'</li></ul> |
| request_asset_transfer | <ul><li>'What’s the procedure for transferring the forklift?'</li><li>'Comment initier une demande de transfert pour la voiture?'</li><li>'What are the steps to transfer the generator?'</li></ul> |
| check_asset_functionality | <ul><li>'Can you check if the laptop is operating correctly?'</li><li>'Pouvez-vous vérifier la fonctionnalité du chariot élévateur?'</li><li>"Quelle est la performance de l'imprimante?"</li></ul> |
| request_asset_status | <ul><li>"Pouvez-vous fournir le rapport de statut de l'imprimante?"</li><li>'Can you provide the status report for the printer?'</li><li>'Can you check the current status of the server?'</li></ul> |
| request_asset_repair | <ul><li>'What’s the procedure for getting the server repaired?'</li><li>'How can I initiate a repair for the laptop?'</li><li>"Pouvez-vous m'aider avec la demande de réparation pour le générateur?"</li></ul> |
| check_asset_certification | <ul><li>"Quel est le statut actuel de la certification de l'imprimante?"</li><li>'Can you verify the certification status of the forklift?'</li><li>'Pouvez-vous vérifier le statut de certification du chariot élévateur?'</li></ul> |
| check_asset_security_status | <ul><li>"Pouvez-vous vérifier le statut de sécurité de l'imprimante?"</li><li>"Comment la sécurité de l'ordinateur portable est-elle gérée?"</li><li>'Can you check the security status of the printer?'</li></ul> |
| check_asset_upgrades | <ul><li>'Can you check if the laptop has been upgraded recently?'</li><li>"Quelles mises à jour ont été appliquées à l'imprimante?"</li><li>'À quel point le projecteur est-il à jour avec ses mises à jour?'</li></ul> |
| request_asset_inventory_update | <ul><li>'Has the inventory for the air conditioner been updated recently?'</li><li>'What’s the latest inventory status for the generator?'</li><li>"Comment puis-je demander une mise à jour pour l'inventaire du serveur?"</li></ul> |
| check_asset_location_update | <ul><li>'Pouvez-vous fournir les derniers détails de localisation pour le générateur?'</li><li>"Quelle est la localisation actuelle de l'avion?"</li><li>'Pouvez-vous vérifier si la localisation du serveur a changé?'</li></ul> |
| report_asset_issue | <ul><li>'How can I report a problem with the generator?'</li><li>"Pouvez-vous m'aider à signaler un problème avec l'avion?"</li><li>'Comment puis-je signaler un problème avec le générateur?'</li></ul> |
| request_work_order | <ul><li>'Comment puis-je initier un ordre de travail pour le climatiseur?'</li><li>'What’s the process for creating a work order for the forklift?'</li><li>'How do I request a work order for the generator?'</li></ul> |
| request_maintenance_schedule | <ul><li>'Quand est prévu le prochain entretien pour la voiture?'</li><li>'When is the next maintenance for the car scheduled?'</li><li>"Pouvez-vous me donner le calendrier de maintenance de l'avion?"</li></ul> |
| greet-hi | <ul><li>'Bonsoir'</li><li>'Salut'</li><li>'Hey'</li></ul> |
| check_asset_maintenance_history | <ul><li>"Pouvez-vous me donner un résumé de l'historique de maintenance du projecteur?"</li><li>'What’s the maintenance history like for the forklift?'</li><li>'Quel est le dossier de maintenance du serveur?'</li></ul> |
| request_asset_depreciation_info | <ul><li>'Can I get a report on the projector’s depreciation?'</li><li>"Quel est le statut de dépréciation de l'ordinateur?"</li><li>"Quelle est la dépréciation actuelle de l'avion?"</li></ul> |
| request_asset_funding_status | <ul><li>"Comment le financement de l'ordinateur portable est-il géré?"</li><li>'Le chariot élévateur est-il financé?'</li><li>'Can you give me details on the funding status for the server?'</li></ul> |
| check_asset_usage_history | <ul><li>"Pouvez-vous me donner un rapport sur l'historique d'utilisation du serveur?"</li><li>'What’s the historical usage data for the forklift?'</li><li>"Quelles sont les données historiques d'utilisation du chariot élévateur?"</li></ul> |
| request_asset_inventory | <ul><li>'What’s the current inventory status of the car?'</li><li>'What’s the inventory record for the airplane?'</li><li>"Pouvez-vous me donner le rapport d'inventaire pour le générateur?"</li></ul> |
| request_asset_location | <ul><li>"Où puis-je trouver l'avion?"</li><li>'Pouvez-vous me dire où se trouve la voiture?'</li><li>'Where is the projector situated now?'</li></ul> |
| request_asset_upgrades | <ul><li>'How do I apply for an upgrade for the forklift?'</li><li>"Quelle est la procédure pour mettre à niveau l'ordinateur portable?"</li><li>'Quelles étapes sont nécessaires pour demander une mise à niveau pour le générateur?'</li></ul> |
| request_asset_maintenance_history | <ul><li>'What’s the historical maintenance data for the printer?'</li><li>"Comment puis-je accéder à l'historique de maintenance du serveur?"</li><li>'Pouvez-vous me donner un résumé de la maintenance du chariot élévateur?'</li></ul> |
| check_asset_warranty | <ul><li>'How long is the generator covered by warranty?'</li><li>"Pouvez-vous vérifier le statut de garantie de l'ordinateur portable?"</li><li>'Quelle est la période de garantie actuelle du chariot élévateur?'</li></ul> |
| greet-who_are_you | <ul><li>"Aidez-moi, s'il vous plaît"</li><li>"comment pouvez vous m'aider"</li><li>'how can you help me'</li></ul> |
| request_asset_replacement_info | <ul><li>'What are the steps involved in replacing the forklift?'</li><li>"Comment puis-je obtenir des détails sur le remplacement de l'imprimante?"</li><li>'Pouvez-vous me guider à travers le processus de remplacement du climatiseur?'</li></ul> |
| request_asset_replacement | <ul><li>"Quelle est la procédure pour remplacer l'ordinateur portable?"</li><li>'Can you help me with replacing the printer?'</li><li>'Je dois remplacer le chariot élévateur; quelles étapes dois-je suivre?'</li></ul> |
| check_asset_availability | <ul><li>'Puis-je utiliser la voiture en ce moment?'</li><li>'Pouvez-vous vérifier la disponibilité du chariot élévateur?'</li><li>'Is the laptop free for use at the moment?'</li></ul> |
| check_asset_history | <ul><li>'What’s the historical usage data for the projector?'</li><li>"Quelles sont les données d'utilisation historique du projecteur?"</li><li>"Comment puis-je accéder à l'historique du serveur?"</li></ul> |
| check_asset_compatibility | <ul><li>"Pouvez-vous vérifier si le serveur s'intègre bien avec le système?"</li><li>'Pouvez-vous fournir des détails sur la compatibilité du générateur avec le système?'</li><li>"Quel est le statut de compatibilité de l'imprimante?"</li></ul> |
| request_asset_efficiency | <ul><li>"Quel est le niveau d'efficacité actuel du serveur?"</li><li>'What’s the current efficiency level of the server?'</li><li>"Quelle est l'efficacité du générateur dans ses opérations?"</li></ul> |
| request_asset_information | <ul><li>'Pouvez-vous me donner un résumé des détails du générateur?'</li><li>"J'ai besoin d'informations sur l'ordinateur; pouvez-vous m'aider?"</li><li>'Can you tell me more about the tablet?'</li></ul> |
| request_asset_cost_analysis | <ul><li>"Combien coûte l'entretien de la voiture?"</li><li>'What are the expenses associated with the laptop?'</li><li>"Pouvez-vous me donner un rapport détaillé des coûts pour l'imprimante?"</li></ul> |
| check_asset_support_contract | <ul><li>"Pouvez-vous vérifier le statut du contrat de support pour l'imprimante?"</li><li>'Can you verify the support contract status for the printer?'</li><li>"Quel est le statut du contrat de support de l'ordinateur portable?"</li></ul> |
| greet-good_bye | <ul><li>'stop'</li><li>'exit'</li><li>'A plus tard'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8897 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Decius/botpress_Vaganet_new_model")
# Run inference
preds = model("L'ordinateur portable est-il libre à l'instant?")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 8.3494 | 13 |
| Label | Training Sample Count |
|:----------------------------------|:----------------------|
| greet-hi | 3 |
| greet-who_are_you | 5 |
| greet-good_bye | 3 |
| request_work_order | 5 |
| request_asset_utilization_report | 5 |
| request_asset_information | 5 |
| request_asset_location | 5 |
| check_asset_availability | 5 |
| report_asset_issue | 5 |
| request_maintenance_schedule | 5 |
| request_asset_depreciation_info | 5 |
| request_asset_status | 5 |
| request_asset_replacement | 5 |
| request_asset_inventory | 5 |
| request_asset_upgrades | 5 |
| check_asset_warranty | 5 |
| request_asset_transfer | 5 |
| check_asset_history | 5 |
| request_asset_repair | 5 |
| check_asset_functionality | 5 |
| request_asset_maintenance_history | 5 |
| request_asset_replacement_info | 5 |
| request_asset_cost_analysis | 5 |
| check_asset_compatibility | 5 |
| request_asset_efficiency | 5 |
| check_asset_certification | 5 |
| check_asset_location_update | 5 |
| request_asset_inventory_update | 5 |
| check_asset_usage_history | 5 |
| check_asset_upgrades | 5 |
| check_asset_security_status | 5 |
| check_asset_support_contract | 5 |
| request_asset_funding_status | 5 |
| check_asset_maintenance_history | 5 |
### Training Hyperparameters
- batch_size: (3, 3)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:--------:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.0936 | - |
| 0.0011 | 10 | 0.274 | - |
| 0.0022 | 20 | 0.1268 | - |
| 0.0034 | 30 | 0.1438 | - |
| 0.0045 | 40 | 0.0725 | - |
| 0.0056 | 50 | 0.0259 | - |
| 0.0067 | 60 | 0.0466 | - |
| 0.0079 | 70 | 0.094 | - |
| 0.0090 | 80 | 0.1856 | - |
| 0.0101 | 90 | 0.0431 | - |
| 0.0112 | 100 | 0.1027 | - |
| 0.0123 | 110 | 0.182 | - |
| 0.0135 | 120 | 0.1333 | - |
| 0.0146 | 130 | 0.0891 | - |
| 0.0157 | 140 | 0.1653 | - |
| 0.0168 | 150 | 0.1792 | - |
| 0.0180 | 160 | 0.0491 | - |
| 0.0191 | 170 | 0.1128 | - |
| 0.0202 | 180 | 0.2192 | - |
| 0.0213 | 190 | 0.1811 | - |
| 0.0224 | 200 | 0.0586 | - |
| 0.0236 | 210 | 0.0236 | - |
| 0.0247 | 220 | 0.0593 | - |
| 0.0258 | 230 | 0.1262 | - |
| 0.0269 | 240 | 0.1643 | - |
| 0.0280 | 250 | 0.0971 | - |
| 0.0292 | 260 | 0.03 | - |
| 0.0303 | 270 | 0.2052 | - |
| 0.0314 | 280 | 0.1401 | - |
| 0.0325 | 290 | 0.0235 | - |
| 0.0337 | 300 | 0.1132 | - |
| 0.0348 | 310 | 0.2753 | - |
| 0.0359 | 320 | 0.0491 | - |
| 0.0370 | 330 | 0.1902 | - |
| 0.0381 | 340 | 0.1143 | - |
| 0.0393 | 350 | 0.0437 | - |
| 0.0404 | 360 | 0.1819 | - |
| 0.0415 | 370 | 0.0539 | - |
| 0.0426 | 380 | 0.0256 | - |
| 0.0438 | 390 | 0.1176 | - |
| 0.0449 | 400 | 0.0329 | - |
| 0.0460 | 410 | 0.0766 | - |
| 0.0471 | 420 | 0.0577 | - |
| 0.0482 | 430 | 0.1098 | - |
| 0.0494 | 440 | 0.0153 | - |
| 0.0505 | 450 | 0.0418 | - |
| 0.0516 | 460 | 0.0127 | - |
| 0.0527 | 470 | 0.0593 | - |
| 0.0539 | 480 | 0.0538 | - |
| 0.0550 | 490 | 0.0405 | - |
| 0.0561 | 500 | 0.0062 | - |
| 0.0572 | 510 | 0.0994 | - |
| 0.0583 | 520 | 0.0926 | - |
| 0.0595 | 530 | 0.0923 | - |
| 0.0606 | 540 | 0.0715 | - |
| 0.0617 | 550 | 0.0653 | - |
| 0.0628 | 560 | 0.1877 | - |
| 0.0640 | 570 | 0.0069 | - |
| 0.0651 | 580 | 0.0085 | - |
| 0.0662 | 590 | 0.0714 | - |
| 0.0673 | 600 | 0.0845 | - |
| 0.0684 | 610 | 0.019 | - |
| 0.0696 | 620 | 0.0186 | - |
| 0.0707 | 630 | 0.1454 | - |
| 0.0718 | 640 | 0.0121 | - |
| 0.0729 | 650 | 0.1109 | - |
| 0.0740 | 660 | 0.0108 | - |
| 0.0752 | 670 | 0.0224 | - |
| 0.0763 | 680 | 0.0024 | - |
| 0.0774 | 690 | 0.0313 | - |
| 0.0785 | 700 | 0.0335 | - |
| 0.0797 | 710 | 0.0097 | - |
| 0.0808 | 720 | 0.1469 | - |
| 0.0819 | 730 | 0.0245 | - |
| 0.0830 | 740 | 0.0264 | - |
| 0.0841 | 750 | 0.0121 | - |
| 0.0853 | 760 | 0.1861 | - |
| 0.0864 | 770 | 0.0648 | - |
| 0.0875 | 780 | 0.005 | - |
| 0.0886 | 790 | 0.0033 | - |
| 0.0898 | 800 | 0.0543 | - |
| 0.0909 | 810 | 0.0129 | - |
| 0.0920 | 820 | 0.0049 | - |
| 0.0931 | 830 | 0.0389 | - |
| 0.0942 | 840 | 0.0271 | - |
| 0.0954 | 850 | 0.0298 | - |
| 0.0965 | 860 | 0.0434 | - |
| 0.0976 | 870 | 0.0359 | - |
| 0.0987 | 880 | 0.0019 | - |
| 0.0999 | 890 | 0.034 | - |
| 0.1010 | 900 | 0.0171 | - |
| 0.1021 | 910 | 0.0033 | - |
| 0.1032 | 920 | 0.021 | - |
| 0.1043 | 930 | 0.0479 | - |
| 0.1055 | 940 | 0.0147 | - |
| 0.1066 | 950 | 0.0101 | - |
| 0.1077 | 960 | 0.0315 | - |
| 0.1088 | 970 | 0.0064 | - |
| 0.1100 | 980 | 0.0146 | - |
| 0.1111 | 990 | 0.0062 | - |
| 0.1122 | 1000 | 0.0063 | - |
| 0.1133 | 1010 | 0.0171 | - |
| 0.1144 | 1020 | 0.0474 | - |
| 0.1156 | 1030 | 0.003 | - |
| 0.1167 | 1040 | 0.0059 | - |
| 0.1178 | 1050 | 0.0063 | - |
| 0.1189 | 1060 | 0.0107 | - |
| 0.1200 | 1070 | 0.0344 | - |
| 0.1212 | 1080 | 0.0181 | - |
| 0.1223 | 1090 | 0.0354 | - |
| 0.1234 | 1100 | 0.0164 | - |
| 0.1245 | 1110 | 0.0181 | - |
| 0.1257 | 1120 | 0.0025 | - |
| 0.1268 | 1130 | 0.0087 | - |
| 0.1279 | 1140 | 0.0132 | - |
| 0.1290 | 1150 | 0.0047 | - |
| 0.1301 | 1160 | 0.0058 | - |
| 0.1313 | 1170 | 0.0069 | - |
| 0.1324 | 1180 | 0.0004 | - |
| 0.1335 | 1190 | 0.002 | - |
| 0.1346 | 1200 | 0.001 | - |
| 0.1358 | 1210 | 0.0017 | - |
| 0.1369 | 1220 | 0.0021 | - |
| 0.1380 | 1230 | 0.0283 | - |
| 0.1391 | 1240 | 0.014 | - |
| 0.1402 | 1250 | 0.0029 | - |
| 0.1414 | 1260 | 0.0044 | - |
| 0.1425 | 1270 | 0.0084 | - |
| 0.1436 | 1280 | 0.0186 | - |
| 0.1447 | 1290 | 0.0021 | - |
| 0.1459 | 1300 | 0.0022 | - |
| 0.1470 | 1310 | 0.0403 | - |
| 0.1481 | 1320 | 0.0385 | - |
| 0.1492 | 1330 | 0.0007 | - |
| 0.1503 | 1340 | 0.004 | - |
| 0.1515 | 1350 | 0.0058 | - |
| 0.1526 | 1360 | 0.0027 | - |
| 0.1537 | 1370 | 0.0037 | - |
| 0.1548 | 1380 | 0.0024 | - |
| 0.1560 | 1390 | 0.0008 | - |
| 0.1571 | 1400 | 0.0654 | - |
| 0.1582 | 1410 | 0.022 | - |
| 0.1593 | 1420 | 0.1093 | - |
| 0.1604 | 1430 | 0.0074 | - |
| 0.1616 | 1440 | 0.0482 | - |
| 0.1627 | 1450 | 0.0065 | - |
| 0.1638 | 1460 | 0.0061 | - |
| 0.1649 | 1470 | 0.0085 | - |
| 0.1660 | 1480 | 0.2072 | - |
| 0.1672 | 1490 | 0.0018 | - |
| 0.1683 | 1500 | 0.0039 | - |
| 0.1694 | 1510 | 0.0044 | - |
| 0.1705 | 1520 | 0.0008 | - |
| 0.1717 | 1530 | 0.0033 | - |
| 0.1728 | 1540 | 0.0025 | - |
| 0.1739 | 1550 | 0.0015 | - |
| 0.1750 | 1560 | 0.0014 | - |
| 0.1761 | 1570 | 0.0009 | - |
| 0.1773 | 1580 | 0.0181 | - |
| 0.1784 | 1590 | 0.0161 | - |
| 0.1795 | 1600 | 0.0689 | - |
| 0.1806 | 1610 | 0.0035 | - |
| 0.1818 | 1620 | 0.008 | - |
| 0.1829 | 1630 | 0.0043 | - |
| 0.1840 | 1640 | 0.0114 | - |
| 0.1851 | 1650 | 0.0804 | - |
| 0.1862 | 1660 | 0.0073 | - |
| 0.1874 | 1670 | 0.0043 | - |
| 0.1885 | 1680 | 0.0038 | - |
| 0.1896 | 1690 | 0.0041 | - |
| 0.1907 | 1700 | 0.0027 | - |
| 0.1919 | 1710 | 0.0014 | - |
| 0.1930 | 1720 | 0.001 | - |
| 0.1941 | 1730 | 0.0013 | - |
| 0.1952 | 1740 | 0.0137 | - |
| 0.1963 | 1750 | 0.0006 | - |
| 0.1975 | 1760 | 0.0025 | - |
| 0.1986 | 1770 | 0.0098 | - |
| 0.1997 | 1780 | 0.0079 | - |
| 0.2008 | 1790 | 0.0031 | - |
| 0.2020 | 1800 | 0.0029 | - |
| 0.2031 | 1810 | 0.0117 | - |
| 0.2042 | 1820 | 0.0012 | - |
| 0.2053 | 1830 | 0.0108 | - |
| 0.2064 | 1840 | 0.0882 | - |
| 0.2076 | 1850 | 0.0073 | - |
| 0.2087 | 1860 | 0.0116 | - |
| 0.2098 | 1870 | 0.1268 | - |
| 0.2109 | 1880 | 0.0107 | - |
| 0.2120 | 1890 | 0.0047 | - |
| 0.2132 | 1900 | 0.0353 | - |
| 0.2143 | 1910 | 0.0018 | - |
| 0.2154 | 1920 | 0.2147 | - |
| 0.2165 | 1930 | 0.0041 | - |
| 0.2177 | 1940 | 0.0202 | - |
| 0.2188 | 1950 | 0.006 | - |
| 0.2199 | 1960 | 0.0007 | - |
| 0.2210 | 1970 | 0.0013 | - |
| 0.2221 | 1980 | 0.0026 | - |
| 0.2233 | 1990 | 0.0522 | - |
| 0.2244 | 2000 | 0.0586 | - |
| 0.2255 | 2010 | 0.0045 | - |
| 0.2266 | 2020 | 0.003 | - |
| 0.2278 | 2030 | 0.0008 | - |
| 0.2289 | 2040 | 0.0029 | - |
| 0.2300 | 2050 | 0.0024 | - |
| 0.2311 | 2060 | 0.0019 | - |
| 0.2322 | 2070 | 0.0015 | - |
| 0.2334 | 2080 | 0.0012 | - |
| 0.2345 | 2090 | 0.0012 | - |
| 0.2356 | 2100 | 0.0002 | - |
| 0.2367 | 2110 | 0.0033 | - |
| 0.2379 | 2120 | 0.001 | - |
| 0.2390 | 2130 | 0.0015 | - |
| 0.2401 | 2140 | 0.0094 | - |
| 0.2412 | 2150 | 0.0025 | - |
| 0.2423 | 2160 | 0.0021 | - |
| 0.2435 | 2170 | 0.0015 | - |
| 0.2446 | 2180 | 0.0063 | - |
| 0.2457 | 2190 | 0.0051 | - |
| 0.2468 | 2200 | 0.0038 | - |
| 0.2480 | 2210 | 0.0036 | - |
| 0.2491 | 2220 | 0.0007 | - |
| 0.2502 | 2230 | 0.0038 | - |
| 0.2513 | 2240 | 0.0545 | - |
| 0.2524 | 2250 | 0.0027 | - |
| 0.2536 | 2260 | 0.0032 | - |
| 0.2547 | 2270 | 0.0017 | - |
| 0.2558 | 2280 | 0.0073 | - |
| 0.2569 | 2290 | 0.0036 | - |
| 0.2581 | 2300 | 0.001 | - |
| 0.2592 | 2310 | 0.0005 | - |
| 0.2603 | 2320 | 0.0024 | - |
| 0.2614 | 2330 | 0.0019 | - |
| 0.2625 | 2340 | 0.0113 | - |
| 0.2637 | 2350 | 0.1202 | - |
| 0.2648 | 2360 | 0.2482 | - |
| 0.2659 | 2370 | 0.0047 | - |
| 0.2670 | 2380 | 0.0056 | - |
| 0.2681 | 2390 | 0.0037 | - |
| 0.2693 | 2400 | 0.0031 | - |
| 0.2704 | 2410 | 0.3122 | - |
| 0.2715 | 2420 | 0.0371 | - |
| 0.2726 | 2430 | 0.0017 | - |
| 0.2738 | 2440 | 0.0109 | - |
| 0.2749 | 2450 | 0.0011 | - |
| 0.2760 | 2460 | 0.0017 | - |
| 0.2771 | 2470 | 0.0033 | - |
| 0.2782 | 2480 | 0.007 | - |
| 0.2794 | 2490 | 0.0031 | - |
| 0.2805 | 2500 | 0.0076 | - |
| 0.2816 | 2510 | 0.0008 | - |
| 0.2827 | 2520 | 0.0091 | - |
| 0.2839 | 2530 | 0.0023 | - |
| 0.2850 | 2540 | 0.0057 | - |
| 0.2861 | 2550 | 0.0049 | - |
| 0.2872 | 2560 | 0.002 | - |
| 0.2883 | 2570 | 0.2556 | - |
| 0.2895 | 2580 | 0.0065 | - |
| 0.2906 | 2590 | 0.0073 | - |
| 0.2917 | 2600 | 0.0031 | - |
| 0.2928 | 2610 | 0.0047 | - |
| 0.2940 | 2620 | 0.0009 | - |
| 0.2951 | 2630 | 0.005 | - |
| 0.2962 | 2640 | 0.2062 | - |
| 0.2973 | 2650 | 0.0102 | - |
| 0.2984 | 2660 | 0.0006 | - |
| 0.2996 | 2670 | 0.0028 | - |
| 0.3007 | 2680 | 0.0015 | - |
| 0.3018 | 2690 | 0.0054 | - |
| 0.3029 | 2700 | 0.0038 | - |
| 0.3041 | 2710 | 0.0014 | - |
| 0.3052 | 2720 | 0.0021 | - |
| 0.3063 | 2730 | 0.0041 | - |
| 0.3074 | 2740 | 0.0058 | - |
| 0.3085 | 2750 | 0.0021 | - |
| 0.3097 | 2760 | 0.3074 | - |
| 0.3108 | 2770 | 0.2003 | - |
| 0.3119 | 2780 | 0.0015 | - |
| 0.3130 | 2790 | 0.001 | - |
| 0.3141 | 2800 | 0.0026 | - |
| 0.3153 | 2810 | 0.0014 | - |
| 0.3164 | 2820 | 0.0071 | - |
| 0.3175 | 2830 | 0.0016 | - |
| 0.3186 | 2840 | 0.004 | - |
| 0.3198 | 2850 | 0.0083 | - |
| 0.3209 | 2860 | 0.0003 | - |
| 0.3220 | 2870 | 0.0023 | - |
| 0.3231 | 2880 | 0.0035 | - |
| 0.3242 | 2890 | 0.0005 | - |
| 0.3254 | 2900 | 0.0046 | - |
| 0.3265 | 2910 | 0.001 | - |
| 0.3276 | 2920 | 0.0022 | - |
| 0.3287 | 2930 | 0.0013 | - |
| 0.3299 | 2940 | 0.0017 | - |
| 0.3310 | 2950 | 0.0013 | - |
| 0.3321 | 2960 | 0.0059 | - |
| 0.3332 | 2970 | 0.0015 | - |
| 0.3343 | 2980 | 0.0027 | - |
| 0.3355 | 2990 | 0.0054 | - |
| 0.3366 | 3000 | 0.0026 | - |
| 0.3377 | 3010 | 0.0045 | - |
| 0.3388 | 3020 | 0.0061 | - |
| 0.3400 | 3030 | 0.0147 | - |
| 0.3411 | 3040 | 0.0035 | - |
| 0.3422 | 3050 | 0.002 | - |
| 0.3433 | 3060 | 0.0012 | - |
| 0.3444 | 3070 | 0.0058 | - |
| 0.3456 | 3080 | 0.0014 | - |
| 0.3467 | 3090 | 0.0001 | - |
| 0.3478 | 3100 | 0.3175 | - |
| 0.3489 | 3110 | 0.0022 | - |
| 0.3501 | 3120 | 0.0001 | - |
| 0.3512 | 3130 | 0.0013 | - |
| 0.3523 | 3140 | 0.0016 | - |
| 0.3534 | 3150 | 0.0005 | - |
| 0.3545 | 3160 | 0.1234 | - |
| 0.3557 | 3170 | 0.0005 | - |
| 0.3568 | 3180 | 0.0002 | - |
| 0.3579 | 3190 | 0.0083 | - |
| 0.3590 | 3200 | 0.0011 | - |
| 0.3601 | 3210 | 0.0009 | - |
| 0.3613 | 3220 | 0.0007 | - |
| 0.3624 | 3230 | 0.0006 | - |
| 0.3635 | 3240 | 0.0016 | - |
| 0.3646 | 3250 | 0.0009 | - |
| 0.3658 | 3260 | 0.0045 | - |
| 0.3669 | 3270 | 0.0136 | - |
| 0.3680 | 3280 | 0.0055 | - |
| 0.3691 | 3290 | 0.0004 | - |
| 0.3702 | 3300 | 0.0014 | - |
| 0.3714 | 3310 | 0.001 | - |
| 0.3725 | 3320 | 0.0007 | - |
| 0.3736 | 3330 | 0.0002 | - |
| 0.3747 | 3340 | 0.0009 | - |
| 0.3759 | 3350 | 0.0003 | - |
| 0.3770 | 3360 | 0.0006 | - |
| 0.3781 | 3370 | 0.0006 | - |
| 0.3792 | 3380 | 0.0009 | - |
| 0.3803 | 3390 | 0.0008 | - |
| 0.3815 | 3400 | 0.0028 | - |
| 0.3826 | 3410 | 0.0039 | - |
| 0.3837 | 3420 | 0.0017 | - |
| 0.3848 | 3430 | 0.001 | - |
| 0.3860 | 3440 | 0.0012 | - |
| 0.3871 | 3450 | 0.0212 | - |
| 0.3882 | 3460 | 0.0017 | - |
| 0.3893 | 3470 | 0.0018 | - |
| 0.3904 | 3480 | 0.0007 | - |
| 0.3916 | 3490 | 0.0012 | - |
| 0.3927 | 3500 | 0.0002 | - |
| 0.3938 | 3510 | 0.0836 | - |
| 0.3949 | 3520 | 0.0054 | - |
| 0.3961 | 3530 | 0.0016 | - |
| 0.3972 | 3540 | 0.0003 | - |
| 0.3983 | 3550 | 0.0038 | - |
| 0.3994 | 3560 | 0.0024 | - |
| 0.4005 | 3570 | 0.0057 | - |
| 0.4017 | 3580 | 0.0804 | - |
| 0.4028 | 3590 | 0.0006 | - |
| 0.4039 | 3600 | 0.0008 | - |
| 0.4050 | 3610 | 0.0007 | - |
| 0.4061 | 3620 | 0.0002 | - |
| 0.4073 | 3630 | 0.0001 | - |
| 0.4084 | 3640 | 0.0032 | - |
| 0.4095 | 3650 | 0.0015 | - |
| 0.4106 | 3660 | 0.0002 | - |
| 0.4118 | 3670 | 0.0015 | - |
| 0.4129 | 3680 | 0.0014 | - |
| 0.4140 | 3690 | 0.0001 | - |
| 0.4151 | 3700 | 0.0033 | - |
| 0.4162 | 3710 | 0.0021 | - |
| 0.4174 | 3720 | 0.0009 | - |
| 0.4185 | 3730 | 0.0012 | - |
| 0.4196 | 3740 | 0.0022 | - |
| 0.4207 | 3750 | 0.0007 | - |
| 0.4219 | 3760 | 0.0034 | - |
| 0.4230 | 3770 | 0.0087 | - |
| 0.4241 | 3780 | 0.0032 | - |
| 0.4252 | 3790 | 0.0017 | - |
| 0.4263 | 3800 | 0.0042 | - |
| 0.4275 | 3810 | 0.0011 | - |
| 0.4286 | 3820 | 0.0034 | - |
| 0.4297 | 3830 | 0.0003 | - |
| 0.4308 | 3840 | 0.002 | - |
| 0.4320 | 3850 | 0.0055 | - |
| 0.4331 | 3860 | 0.0013 | - |
| 0.4342 | 3870 | 0.005 | - |
| 0.4353 | 3880 | 0.0011 | - |
| 0.4364 | 3890 | 0.0054 | - |
| 0.4376 | 3900 | 0.0013 | - |
| 0.4387 | 3910 | 0.0014 | - |
| 0.4398 | 3920 | 0.0006 | - |
| 0.4409 | 3930 | 0.0017 | - |
| 0.4421 | 3940 | 0.0015 | - |
| 0.4432 | 3950 | 0.0006 | - |
| 0.4443 | 3960 | 0.0029 | - |
| 0.4454 | 3970 | 0.0013 | - |
| 0.4465 | 3980 | 0.0021 | - |
| 0.4477 | 3990 | 0.001 | - |
| 0.4488 | 4000 | 0.0023 | - |
| 0.4499 | 4010 | 0.0016 | - |
| 0.4510 | 4020 | 0.0025 | - |
| 0.4521 | 4030 | 0.0008 | - |
| 0.4533 | 4040 | 0.0014 | - |
| 0.4544 | 4050 | 0.0006 | - |
| 0.4555 | 4060 | 0.0013 | - |
| 0.4566 | 4070 | 0.0002 | - |
| 0.4578 | 4080 | 0.0021 | - |
| 0.4589 | 4090 | 0.0309 | - |
| 0.4600 | 4100 | 0.0026 | - |
| 0.4611 | 4110 | 0.0011 | - |
| 0.4622 | 4120 | 0.0025 | - |
| 0.4634 | 4130 | 0.0021 | - |
| 0.4645 | 4140 | 0.003 | - |
| 0.4656 | 4150 | 0.0026 | - |
| 0.4667 | 4160 | 0.0001 | - |
| 0.4679 | 4170 | 0.0019 | - |
| 0.4690 | 4180 | 0.1177 | - |
| 0.4701 | 4190 | 0.0008 | - |
| 0.4712 | 4200 | 0.0016 | - |
| 0.4723 | 4210 | 0.0013 | - |
| 0.4735 | 4220 | 0.0007 | - |
| 0.4746 | 4230 | 0.0021 | - |
| 0.4757 | 4240 | 0.0004 | - |
| 0.4768 | 4250 | 0.3107 | - |
| 0.4780 | 4260 | 0.0014 | - |
| 0.4791 | 4270 | 0.0089 | - |
| 0.4802 | 4280 | 0.0165 | - |
| 0.4813 | 4290 | 0.0091 | - |
| 0.4824 | 4300 | 0.0007 | - |
| 0.4836 | 4310 | 0.0044 | - |
| 0.4847 | 4320 | 0.3151 | - |
| 0.4858 | 4330 | 0.0002 | - |
| 0.4869 | 4340 | 0.0058 | - |
| 0.4881 | 4350 | 0.0014 | - |
| 0.4892 | 4360 | 0.0657 | - |
| 0.4903 | 4370 | 0.0015 | - |
| 0.4914 | 4380 | 0.0015 | - |
| 0.4925 | 4390 | 0.0029 | - |
| 0.4937 | 4400 | 0.0046 | - |
| 0.4948 | 4410 | 0.2228 | - |
| 0.4959 | 4420 | 0.0011 | - |
| 0.4970 | 4430 | 0.0135 | - |
| 0.4981 | 4440 | 0.0017 | - |
| 0.4993 | 4450 | 0.0007 | - |
| 0.5004 | 4460 | 0.0022 | - |
| 0.5015 | 4470 | 0.001 | - |
| 0.5026 | 4480 | 0.0044 | - |
| 0.5038 | 4490 | 0.0002 | - |
| 0.5049 | 4500 | 0.0002 | - |
| 0.5060 | 4510 | 0.0013 | - |
| 0.5071 | 4520 | 0.0042 | - |
| 0.5082 | 4530 | 0.0007 | - |
| 0.5094 | 4540 | 0.0012 | - |
| 0.5105 | 4550 | 0.0009 | - |
| 0.5116 | 4560 | 0.001 | - |
| 0.5127 | 4570 | 0.0028 | - |
| 0.5139 | 4580 | 0.0016 | - |
| 0.5150 | 4590 | 0.0017 | - |
| 0.5161 | 4600 | 0.0029 | - |
| 0.5172 | 4610 | 0.0003 | - |
| 0.5183 | 4620 | 0.001 | - |
| 0.5195 | 4630 | 0.0008 | - |
| 0.5206 | 4640 | 0.0011 | - |
| 0.5217 | 4650 | 0.0003 | - |
| 0.5228 | 4660 | 0.0011 | - |
| 0.5240 | 4670 | 0.0001 | - |
| 0.5251 | 4680 | 0.0002 | - |
| 0.5262 | 4690 | 0.4625 | - |
| 0.5273 | 4700 | 0.0015 | - |
| 0.5284 | 4710 | 0.0003 | - |
| 0.5296 | 4720 | 0.0006 | - |
| 0.5307 | 4730 | 0.0 | - |
| 0.5318 | 4740 | 0.001 | - |
| 0.5329 | 4750 | 0.0003 | - |
| 0.5341 | 4760 | 0.0004 | - |
| 0.5352 | 4770 | 0.0031 | - |
| 0.5363 | 4780 | 0.3211 | - |
| 0.5374 | 4790 | 0.0009 | - |
| 0.5385 | 4800 | 0.0012 | - |
| 0.5397 | 4810 | 0.0003 | - |
| 0.5408 | 4820 | 0.0008 | - |
| 0.5419 | 4830 | 0.0017 | - |
| 0.5430 | 4840 | 0.0002 | - |
| 0.5441 | 4850 | 0.0004 | - |
| 0.5453 | 4860 | 0.0004 | - |
| 0.5464 | 4870 | 0.0039 | - |
| 0.5475 | 4880 | 0.0023 | - |
| 0.5486 | 4890 | 0.003 | - |
| 0.5498 | 4900 | 0.0004 | - |
| 0.5509 | 4910 | 0.0002 | - |
| 0.5520 | 4920 | 0.0019 | - |
| 0.5531 | 4930 | 0.0003 | - |
| 0.5542 | 4940 | 0.0006 | - |
| 0.5554 | 4950 | 0.0032 | - |
| 0.5565 | 4960 | 0.0005 | - |
| 0.5576 | 4970 | 0.0005 | - |
| 0.5587 | 4980 | 0.0002 | - |
| 0.5599 | 4990 | 0.0005 | - |
| 0.5610 | 5000 | 0.0004 | - |
| 0.5621 | 5010 | 0.0001 | - |
| 0.5632 | 5020 | 0.0021 | - |
| 0.5643 | 5030 | 0.0002 | - |
| 0.5655 | 5040 | 0.0006 | - |
| 0.5666 | 5050 | 0.0009 | - |
| 0.5677 | 5060 | 0.0016 | - |
| 0.5688 | 5070 | 0.0002 | - |
| 0.5700 | 5080 | 0.0002 | - |
| 0.5711 | 5090 | 0.0013 | - |
| 0.5722 | 5100 | 0.0202 | - |
| 0.5733 | 5110 | 0.0087 | - |
| 0.5744 | 5120 | 0.0004 | - |
| 0.5756 | 5130 | 0.0002 | - |
| 0.5767 | 5140 | 0.0002 | - |
| 0.5778 | 5150 | 0.0009 | - |
| 0.5789 | 5160 | 0.0011 | - |
| 0.5801 | 5170 | 0.0005 | - |
| 0.5812 | 5180 | 0.0015 | - |
| 0.5823 | 5190 | 0.0018 | - |
| 0.5834 | 5200 | 0.0013 | - |
| 0.5845 | 5210 | 0.1693 | - |
| 0.5857 | 5220 | 0.0107 | - |
| 0.5868 | 5230 | 0.0005 | - |
| 0.5879 | 5240 | 0.0003 | - |
| 0.5890 | 5250 | 0.0005 | - |
| 0.5901 | 5260 | 0.0002 | - |
| 0.5913 | 5270 | 0.0001 | - |
| 0.5924 | 5280 | 0.0006 | - |
| 0.5935 | 5290 | 0.0022 | - |
| 0.5946 | 5300 | 0.0004 | - |
| 0.5958 | 5310 | 0.0013 | - |
| 0.5969 | 5320 | 0.001 | - |
| 0.5980 | 5330 | 0.0005 | - |
| 0.5991 | 5340 | 0.0026 | - |
| 0.6002 | 5350 | 0.0007 | - |
| 0.6014 | 5360 | 0.0002 | - |
| 0.6025 | 5370 | 0.0052 | - |
| 0.6036 | 5380 | 0.0006 | - |
| 0.6047 | 5390 | 0.0003 | - |
| 0.6059 | 5400 | 0.0012 | - |
| 0.6070 | 5410 | 0.0003 | - |
| 0.6081 | 5420 | 0.0019 | - |
| 0.6092 | 5430 | 0.0003 | - |
| 0.6103 | 5440 | 0.0013 | - |
| 0.6115 | 5450 | 0.0002 | - |
| 0.6126 | 5460 | 0.0019 | - |
| 0.6137 | 5470 | 0.0003 | - |
| 0.6148 | 5480 | 0.0003 | - |
| 0.6160 | 5490 | 0.0011 | - |
| 0.6171 | 5500 | 0.001 | - |
| 0.6182 | 5510 | 0.0068 | - |
| 0.6193 | 5520 | 0.0016 | - |
| 0.6204 | 5530 | 0.0001 | - |
| 0.6216 | 5540 | 0.0019 | - |
| 0.6227 | 5550 | 0.0015 | - |
| 0.6238 | 5560 | 0.0119 | - |
| 0.6249 | 5570 | 0.0009 | - |
| 0.6261 | 5580 | 0.0002 | - |
| 0.6272 | 5590 | 0.0017 | - |
| 0.6283 | 5600 | 0.0001 | - |
| 0.6294 | 5610 | 0.001 | - |
| 0.6305 | 5620 | 0.0005 | - |
| 0.6317 | 5630 | 0.0004 | - |
| 0.6328 | 5640 | 0.0155 | - |
| 0.6339 | 5650 | 0.0028 | - |
| 0.6350 | 5660 | 0.0006 | - |
| 0.6361 | 5670 | 0.0007 | - |
| 0.6373 | 5680 | 0.0025 | - |
| 0.6384 | 5690 | 0.0007 | - |
| 0.6395 | 5700 | 0.0002 | - |
| 0.6406 | 5710 | 0.0011 | - |
| 0.6418 | 5720 | 0.0045 | - |
| 0.6429 | 5730 | 0.0004 | - |
| 0.6440 | 5740 | 0.0005 | - |
| 0.6451 | 5750 | 0.0009 | - |
| 0.6462 | 5760 | 0.0001 | - |
| 0.6474 | 5770 | 0.0005 | - |
| 0.6485 | 5780 | 0.0002 | - |
| 0.6496 | 5790 | 0.0031 | - |
| 0.6507 | 5800 | 0.0016 | - |
| 0.6519 | 5810 | 0.0002 | - |
| 0.6530 | 5820 | 0.0026 | - |
| 0.6541 | 5830 | 0.0004 | - |
| 0.6552 | 5840 | 0.0002 | - |
| 0.6563 | 5850 | 0.0013 | - |
| 0.6575 | 5860 | 0.0012 | - |
| 0.6586 | 5870 | 0.0015 | - |
| 0.6597 | 5880 | 0.0027 | - |
| 0.6608 | 5890 | 0.0045 | - |
| 0.6620 | 5900 | 0.001 | - |
| 0.6631 | 5910 | 0.0035 | - |
| 0.6642 | 5920 | 0.0026 | - |
| 0.6653 | 5930 | 0.0002 | - |
| 0.6664 | 5940 | 0.0003 | - |
| 0.6676 | 5950 | 0.009 | - |
| 0.6687 | 5960 | 0.0012 | - |
| 0.6698 | 5970 | 0.0001 | - |
| 0.6709 | 5980 | 0.0011 | - |
| 0.6721 | 5990 | 0.0021 | - |
| 0.6732 | 6000 | 0.0021 | - |
| 0.6743 | 6010 | 0.0001 | - |
| 0.6754 | 6020 | 0.0008 | - |
| 0.6765 | 6030 | 0.0005 | - |
| 0.6777 | 6040 | 0.0042 | - |
| 0.6788 | 6050 | 0.0033 | - |
| 0.6799 | 6060 | 0.0001 | - |
| 0.6810 | 6070 | 0.0001 | - |
| 0.6821 | 6080 | 0.3232 | - |
| 0.6833 | 6090 | 0.0002 | - |
| 0.6844 | 6100 | 0.0022 | - |
| 0.6855 | 6110 | 0.0003 | - |
| 0.6866 | 6120 | 0.0003 | - |
| 0.6878 | 6130 | 0.0004 | - |
| 0.6889 | 6140 | 0.0004 | - |
| 0.6900 | 6150 | 0.0001 | - |
| 0.6911 | 6160 | 0.0005 | - |
| 0.6922 | 6170 | 0.0002 | - |
| 0.6934 | 6180 | 0.0024 | - |
| 0.6945 | 6190 | 0.0005 | - |
| 0.6956 | 6200 | 0.0018 | - |
| 0.6967 | 6210 | 0.0019 | - |
| 0.6979 | 6220 | 0.0008 | - |
| 0.6990 | 6230 | 0.0004 | - |
| 0.7001 | 6240 | 0.0061 | - |
| 0.7012 | 6250 | 0.0022 | - |
| 0.7023 | 6260 | 0.002 | - |
| 0.7035 | 6270 | 0.0025 | - |
| 0.7046 | 6280 | 0.0007 | - |
| 0.7057 | 6290 | 0.001 | - |
| 0.7068 | 6300 | 0.0005 | - |
| 0.7080 | 6310 | 0.0001 | - |
| 0.7091 | 6320 | 0.0002 | - |
| 0.7102 | 6330 | 0.0017 | - |
| 0.7113 | 6340 | 0.0019 | - |
| 0.7124 | 6350 | 0.0005 | - |
| 0.7136 | 6360 | 0.0011 | - |
| 0.7147 | 6370 | 0.0005 | - |
| 0.7158 | 6380 | 0.0002 | - |
| 0.7169 | 6390 | 0.0001 | - |
| 0.7181 | 6400 | 0.0008 | - |
| 0.7192 | 6410 | 0.0005 | - |
| 0.7203 | 6420 | 0.0004 | - |
| 0.7214 | 6430 | 0.0015 | - |
| 0.7225 | 6440 | 0.0012 | - |
| 0.7237 | 6450 | 0.0002 | - |
| 0.7248 | 6460 | 0.0008 | - |
| 0.7259 | 6470 | 0.0149 | - |
| 0.7270 | 6480 | 0.0022 | - |
| 0.7281 | 6490 | 0.0003 | - |
| 0.7293 | 6500 | 0.0116 | - |
| 0.7304 | 6510 | 0.0003 | - |
| 0.7315 | 6520 | 0.003 | - |
| 0.7326 | 6530 | 0.0009 | - |
| 0.7338 | 6540 | 0.0001 | - |
| 0.7349 | 6550 | 0.0009 | - |
| 0.7360 | 6560 | 0.0001 | - |
| 0.7371 | 6570 | 0.0002 | - |
| 0.7382 | 6580 | 0.0011 | - |
| 0.7394 | 6590 | 0.0052 | - |
| 0.7405 | 6600 | 0.0006 | - |
| 0.7416 | 6610 | 0.3256 | - |
| 0.7427 | 6620 | 0.0015 | - |
| 0.7439 | 6630 | 0.0006 | - |
| 0.7450 | 6640 | 0.0009 | - |
| 0.7461 | 6650 | 0.0004 | - |
| 0.7472 | 6660 | 0.0002 | - |
| 0.7483 | 6670 | 0.0002 | - |
| 0.7495 | 6680 | 0.0004 | - |
| 0.7506 | 6690 | 0.0016 | - |
| 0.7517 | 6700 | 0.0014 | - |
| 0.7528 | 6710 | 0.0005 | - |
| 0.7540 | 6720 | 0.0009 | - |
| 0.7551 | 6730 | 0.0017 | - |
| 0.7562 | 6740 | 0.0014 | - |
| 0.7573 | 6750 | 0.0022 | - |
| 0.7584 | 6760 | 0.002 | - |
| 0.7596 | 6770 | 0.0045 | - |
| 0.7607 | 6780 | 0.0002 | - |
| 0.7618 | 6790 | 0.0006 | - |
| 0.7629 | 6800 | 0.0004 | - |
| 0.7641 | 6810 | 0.0001 | - |
| 0.7652 | 6820 | 0.0005 | - |
| 0.7663 | 6830 | 0.0003 | - |
| 0.7674 | 6840 | 0.0002 | - |
| 0.7685 | 6850 | 0.0003 | - |
| 0.7697 | 6860 | 0.0006 | - |
| 0.7708 | 6870 | 0.0031 | - |
| 0.7719 | 6880 | 0.0009 | - |
| 0.7730 | 6890 | 0.0026 | - |
| 0.7742 | 6900 | 0.0011 | - |
| 0.7753 | 6910 | 0.0005 | - |
| 0.7764 | 6920 | 0.0003 | - |
| 0.7775 | 6930 | 0.004 | - |
| 0.7786 | 6940 | 0.0018 | - |
| 0.7798 | 6950 | 0.0007 | - |
| 0.7809 | 6960 | 0.0017 | - |
| 0.7820 | 6970 | 0.0003 | - |
| 0.7831 | 6980 | 0.0006 | - |
| 0.7842 | 6990 | 0.0001 | - |
| 0.7854 | 7000 | 0.0009 | - |
| 0.7865 | 7010 | 0.0025 | - |
| 0.7876 | 7020 | 0.0015 | - |
| 0.7887 | 7030 | 0.0015 | - |
| 0.7899 | 7040 | 0.0002 | - |
| 0.7910 | 7050 | 0.0005 | - |
| 0.7921 | 7060 | 0.0004 | - |
| 0.7932 | 7070 | 0.0033 | - |
| 0.7943 | 7080 | 0.0004 | - |
| 0.7955 | 7090 | 0.0007 | - |
| 0.7966 | 7100 | 0.0004 | - |
| 0.7977 | 7110 | 0.0013 | - |
| 0.7988 | 7120 | 0.0004 | - |
| 0.8000 | 7130 | 0.0062 | - |
| 0.8011 | 7140 | 0.0005 | - |
| 0.8022 | 7150 | 0.0057 | - |
| 0.8033 | 7160 | 0.0016 | - |
| 0.8044 | 7170 | 0.0001 | - |
| 0.8056 | 7180 | 0.0 | - |
| 0.8067 | 7190 | 0.0004 | - |
| 0.8078 | 7200 | 0.0034 | - |
| 0.8089 | 7210 | 0.0014 | - |
| 0.8101 | 7220 | 0.001 | - |
| 0.8112 | 7230 | 0.0004 | - |
| 0.8123 | 7240 | 0.0002 | - |
| 0.8134 | 7250 | 0.0001 | - |
| 0.8145 | 7260 | 0.0006 | - |
| 0.8157 | 7270 | 0.0013 | - |
| 0.8168 | 7280 | 0.0002 | - |
| 0.8179 | 7290 | 0.0009 | - |
| 0.8190 | 7300 | 0.0028 | - |
| 0.8202 | 7310 | 0.0004 | - |
| 0.8213 | 7320 | 0.0016 | - |
| 0.8224 | 7330 | 0.0005 | - |
| 0.8235 | 7340 | 0.0003 | - |
| 0.8246 | 7350 | 0.0001 | - |
| 0.8258 | 7360 | 0.0031 | - |
| 0.8269 | 7370 | 0.0013 | - |
| 0.8280 | 7380 | 0.0008 | - |
| 0.8291 | 7390 | 0.0002 | - |
| 0.8302 | 7400 | 0.0011 | - |
| 0.8314 | 7410 | 0.0035 | - |
| 0.8325 | 7420 | 0.0002 | - |
| 0.8336 | 7430 | 0.0002 | - |
| 0.8347 | 7440 | 0.0002 | - |
| 0.8359 | 7450 | 0.0018 | - |
| 0.8370 | 7460 | 0.0116 | - |
| 0.8381 | 7470 | 0.0004 | - |
| 0.8392 | 7480 | 0.0005 | - |
| 0.8403 | 7490 | 0.0006 | - |
| 0.8415 | 7500 | 0.0004 | - |
| 0.8426 | 7510 | 0.0011 | - |
| 0.8437 | 7520 | 0.0003 | - |
| 0.8448 | 7530 | 0.0017 | - |
| 0.8460 | 7540 | 0.0006 | - |
| 0.8471 | 7550 | 0.0001 | - |
| 0.8482 | 7560 | 0.0012 | - |
| 0.8493 | 7570 | 0.0011 | - |
| 0.8504 | 7580 | 0.0003 | - |
| 0.8516 | 7590 | 0.0005 | - |
| 0.8527 | 7600 | 0.0002 | - |
| 0.8538 | 7610 | 0.0014 | - |
| 0.8549 | 7620 | 0.0001 | - |
| 0.8561 | 7630 | 0.002 | - |
| 0.8572 | 7640 | 0.003 | - |
| 0.8583 | 7650 | 0.0 | - |
| 0.8594 | 7660 | 0.0011 | - |
| 0.8605 | 7670 | 0.0018 | - |
| 0.8617 | 7680 | 0.0039 | - |
| 0.8628 | 7690 | 0.0036 | - |
| 0.8639 | 7700 | 0.0015 | - |
| 0.8650 | 7710 | 0.0011 | - |
| 0.8662 | 7720 | 0.0015 | - |
| 0.8673 | 7730 | 0.0007 | - |
| 0.8684 | 7740 | 0.3175 | - |
| 0.8695 | 7750 | 0.0001 | - |
| 0.8706 | 7760 | 0.0003 | - |
| 0.8718 | 7770 | 0.0002 | - |
| 0.8729 | 7780 | 0.0003 | - |
| 0.8740 | 7790 | 0.0029 | - |
| 0.8751 | 7800 | 0.0005 | - |
| 0.8762 | 7810 | 0.0004 | - |
| 0.8774 | 7820 | 0.0004 | - |
| 0.8785 | 7830 | 0.0002 | - |
| 0.8796 | 7840 | 0.0003 | - |
| 0.8807 | 7850 | 0.0002 | - |
| 0.8819 | 7860 | 0.0006 | - |
| 0.8830 | 7870 | 0.0006 | - |
| 0.8841 | 7880 | 0.0003 | - |
| 0.8852 | 7890 | 0.0009 | - |
| 0.8863 | 7900 | 0.0003 | - |
| 0.8875 | 7910 | 0.0003 | - |
| 0.8886 | 7920 | 0.0012 | - |
| 0.8897 | 7930 | 0.0009 | - |
| 0.8908 | 7940 | 0.0025 | - |
| 0.8920 | 7950 | 0.0002 | - |
| 0.8931 | 7960 | 0.0007 | - |
| 0.8942 | 7970 | 0.0005 | - |
| 0.8953 | 7980 | 0.0004 | - |
| 0.8964 | 7990 | 0.0001 | - |
| 0.8976 | 8000 | 0.0001 | - |
| 0.8987 | 8010 | 0.0004 | - |
| 0.8998 | 8020 | 0.0001 | - |
| 0.9009 | 8030 | 0.0011 | - |
| 0.9021 | 8040 | 0.0007 | - |
| 0.9032 | 8050 | 0.0003 | - |
| 0.9043 | 8060 | 0.0001 | - |
| 0.9054 | 8070 | 0.0017 | - |
| 0.9065 | 8080 | 0.0008 | - |
| 0.9077 | 8090 | 0.0022 | - |
| 0.9088 | 8100 | 0.0002 | - |
| 0.9099 | 8110 | 0.0008 | - |
| 0.9110 | 8120 | 0.0002 | - |
| 0.9122 | 8130 | 0.0007 | - |
| 0.9133 | 8140 | 0.0001 | - |
| 0.9144 | 8150 | 0.0008 | - |
| 0.9155 | 8160 | 0.0003 | - |
| 0.9166 | 8170 | 0.001 | - |
| 0.9178 | 8180 | 0.0005 | - |
| 0.9189 | 8190 | 0.0006 | - |
| 0.9200 | 8200 | 0.0003 | - |
| 0.9211 | 8210 | 0.0026 | - |
| 0.9222 | 8220 | 0.0014 | - |
| 0.9234 | 8230 | 0.0001 | - |
| 0.9245 | 8240 | 0.0018 | - |
| 0.9256 | 8250 | 0.0002 | - |
| 0.9267 | 8260 | 0.0003 | - |
| 0.9279 | 8270 | 0.0002 | - |
| 0.9290 | 8280 | 0.0011 | - |
| 0.9301 | 8290 | 0.0002 | - |
| 0.9312 | 8300 | 0.0004 | - |
| 0.9323 | 8310 | 0.0004 | - |
| 0.9335 | 8320 | 0.0007 | - |
| 0.9346 | 8330 | 0.0016 | - |
| 0.9357 | 8340 | 0.0006 | - |
| 0.9368 | 8350 | 0.0004 | - |
| 0.9380 | 8360 | 0.0008 | - |
| 0.9391 | 8370 | 0.0017 | - |
| 0.9402 | 8380 | 0.0004 | - |
| 0.9413 | 8390 | 0.0004 | - |
| 0.9424 | 8400 | 0.0001 | - |
| 0.9436 | 8410 | 0.0019 | - |
| 0.9447 | 8420 | 0.0019 | - |
| 0.9458 | 8430 | 0.0003 | - |
| 0.9469 | 8440 | 0.0005 | - |
| 0.9481 | 8450 | 0.0003 | - |
| 0.9492 | 8460 | 0.0002 | - |
| 0.9503 | 8470 | 0.3168 | - |
| 0.9514 | 8480 | 0.0008 | - |
| 0.9525 | 8490 | 0.0003 | - |
| 0.9537 | 8500 | 0.0001 | - |
| 0.9548 | 8510 | 0.0009 | - |
| 0.9559 | 8520 | 0.0001 | - |
| 0.9570 | 8530 | 0.0002 | - |
| 0.9582 | 8540 | 0.0007 | - |
| 0.9593 | 8550 | 0.0004 | - |
| 0.9604 | 8560 | 0.0005 | - |
| 0.9615 | 8570 | 0.0 | - |
| 0.9626 | 8580 | 0.0001 | - |
| 0.9638 | 8590 | 0.0009 | - |
| 0.9649 | 8600 | 0.0005 | - |
| 0.9660 | 8610 | 0.0015 | - |
| 0.9671 | 8620 | 0.0001 | - |
| 0.9682 | 8630 | 0.0009 | - |
| 0.9694 | 8640 | 0.0011 | - |
| 0.9705 | 8650 | 0.0006 | - |
| 0.9716 | 8660 | 0.0003 | - |
| 0.9727 | 8670 | 0.0001 | - |
| 0.9739 | 8680 | 0.0097 | - |
| 0.9750 | 8690 | 0.0007 | - |
| 0.9761 | 8700 | 0.0004 | - |
| 0.9772 | 8710 | 0.0015 | - |
| 0.9783 | 8720 | 0.0002 | - |
| 0.9795 | 8730 | 0.0007 | - |
| 0.9806 | 8740 | 0.0009 | - |
| 0.9817 | 8750 | 0.0002 | - |
| 0.9828 | 8760 | 0.0003 | - |
| 0.9840 | 8770 | 0.0001 | - |
| 0.9851 | 8780 | 0.0022 | - |
| 0.9862 | 8790 | 0.0001 | - |
| 0.9873 | 8800 | 0.0027 | - |
| 0.9884 | 8810 | 0.0021 | - |
| 0.9896 | 8820 | 0.0001 | - |
| 0.9907 | 8830 | 0.0008 | - |
| 0.9918 | 8840 | 0.0001 | - |
| 0.9929 | 8850 | 0.0001 | - |
| 0.9941 | 8860 | 0.0002 | - |
| 0.9952 | 8870 | 0.0004 | - |
| 0.9963 | 8880 | 0.0008 | - |
| 0.9974 | 8890 | 0.0006 | - |
| 0.9985 | 8900 | 0.0002 | - |
| 0.9997 | 8910 | 0.0006 | - |
| **1.0** | **8913** | **-** | **0.0244** |
| 1.0008 | 8920 | 0.0012 | - |
| 1.0019 | 8930 | 0.0002 | - |
| 1.0030 | 8940 | 0.0004 | - |
| 1.0042 | 8950 | 0.0004 | - |
| 1.0053 | 8960 | 0.0001 | - |
| 1.0064 | 8970 | 0.0004 | - |
| 1.0075 | 8980 | 0.0003 | - |
| 1.0086 | 8990 | 0.0002 | - |
| 1.0098 | 9000 | 0.0004 | - |
| 1.0109 | 9010 | 0.0003 | - |
| 1.0120 | 9020 | 0.0001 | - |
| 1.0131 | 9030 | 0.001 | - |
| 1.0142 | 9040 | 0.0002 | - |
| 1.0154 | 9050 | 0.0003 | - |
| 1.0165 | 9060 | 0.0001 | - |
| 1.0176 | 9070 | 0.0002 | - |
| 1.0187 | 9080 | 0.0 | - |
| 1.0199 | 9090 | 0.0001 | - |
| 1.0210 | 9100 | 0.0009 | - |
| 1.0221 | 9110 | 0.0004 | - |
| 1.0232 | 9120 | 0.0016 | - |
| 1.0243 | 9130 | 0.0026 | - |
| 1.0255 | 9140 | 0.0002 | - |
| 1.0266 | 9150 | 0.0007 | - |
| 1.0277 | 9160 | 0.0001 | - |
| 1.0288 | 9170 | 0.0002 | - |
| 1.0300 | 9180 | 0.0 | - |
| 1.0311 | 9190 | 0.0002 | - |
| 1.0322 | 9200 | 0.0003 | - |
| 1.0333 | 9210 | 0.0004 | - |
| 1.0344 | 9220 | 0.0009 | - |
| 1.0356 | 9230 | 0.0001 | - |
| 1.0367 | 9240 | 0.0024 | - |
| 1.0378 | 9250 | 0.0023 | - |
| 1.0389 | 9260 | 0.0002 | - |
| 1.0401 | 9270 | 0.0003 | - |
| 1.0412 | 9280 | 0.0001 | - |
| 1.0423 | 9290 | 0.0003 | - |
| 1.0434 | 9300 | 0.0 | - |
| 1.0445 | 9310 | 0.0003 | - |
| 1.0457 | 9320 | 0.0001 | - |
| 1.0468 | 9330 | 0.0001 | - |
| 1.0479 | 9340 | 0.0004 | - |
| 1.0490 | 9350 | 0.0028 | - |
| 1.0502 | 9360 | 0.0003 | - |
| 1.0513 | 9370 | 0.0007 | - |
| 1.0524 | 9380 | 0.0001 | - |
| 1.0535 | 9390 | 0.0001 | - |
| 1.0546 | 9400 | 0.0001 | - |
| 1.0558 | 9410 | 0.0002 | - |
| 1.0569 | 9420 | 0.0 | - |
| 1.0580 | 9430 | 0.0002 | - |
| 1.0591 | 9440 | 0.0001 | - |
| 1.0602 | 9450 | 0.0076 | - |
| 1.0614 | 9460 | 0.0 | - |
| 1.0625 | 9470 | 0.0007 | - |
| 1.0636 | 9480 | 0.0002 | - |
| 1.0647 | 9490 | 0.0001 | - |
| 1.0659 | 9500 | 0.0005 | - |
| 1.0670 | 9510 | 0.0032 | - |
| 1.0681 | 9520 | 0.0008 | - |
| 1.0692 | 9530 | 0.0001 | - |
| 1.0703 | 9540 | 0.0003 | - |
| 1.0715 | 9550 | 0.0001 | - |
| 1.0726 | 9560 | 0.0002 | - |
| 1.0737 | 9570 | 0.0005 | - |
| 1.0748 | 9580 | 0.0002 | - |
| 1.0760 | 9590 | 0.0001 | - |
| 1.0771 | 9600 | 0.0003 | - |
| 1.0782 | 9610 | 0.0001 | - |
| 1.0793 | 9620 | 0.0007 | - |
| 1.0804 | 9630 | 0.0001 | - |
| 1.0816 | 9640 | 0.0001 | - |
| 1.0827 | 9650 | 0.0016 | - |
| 1.0838 | 9660 | 0.0012 | - |
| 1.0849 | 9670 | 0.0001 | - |
| 1.0861 | 9680 | 0.0002 | - |
| 1.0872 | 9690 | 0.0002 | - |
| 1.0883 | 9700 | 0.0001 | - |
| 1.0894 | 9710 | 0.0002 | - |
| 1.0905 | 9720 | 0.0001 | - |
| 1.0917 | 9730 | 0.0001 | - |
| 1.0928 | 9740 | 0.0003 | - |
| 1.0939 | 9750 | 0.0004 | - |
| 1.0950 | 9760 | 0.0002 | - |
| 1.0962 | 9770 | 0.0001 | - |
| 1.0973 | 9780 | 0.0001 | - |
| 1.0984 | 9790 | 0.0003 | - |
| 1.0995 | 9800 | 0.0008 | - |
| 1.1006 | 9810 | 0.0016 | - |
| 1.1018 | 9820 | 0.0001 | - |
| 1.1029 | 9830 | 0.0029 | - |
| 1.1040 | 9840 | 0.0005 | - |
| 1.1051 | 9850 | 0.0006 | - |
| 1.1062 | 9860 | 0.0018 | - |
| 1.1074 | 9870 | 0.0005 | - |
| 1.1085 | 9880 | 0.0004 | - |
| 1.1096 | 9890 | 0.0001 | - |
| 1.1107 | 9900 | 0.0002 | - |
| 1.1119 | 9910 | 0.0003 | - |
| 1.1130 | 9920 | 0.0007 | - |
| 1.1141 | 9930 | 0.0006 | - |
| 1.1152 | 9940 | 0.0004 | - |
| 1.1163 | 9950 | 0.0001 | - |
| 1.1175 | 9960 | 0.0001 | - |
| 1.1186 | 9970 | 0.0008 | - |
| 1.1197 | 9980 | 0.0 | - |
| 1.1208 | 9990 | 0.0004 | - |
| 1.1220 | 10000 | 0.0002 | - |
| 1.1231 | 10010 | 0.0002 | - |
| 1.1242 | 10020 | 0.0 | - |
| 1.1253 | 10030 | 0.0003 | - |
| 1.1264 | 10040 | 0.0008 | - |
| 1.1276 | 10050 | 0.0005 | - |
| 1.1287 | 10060 | 0.0069 | - |
| 1.1298 | 10070 | 0.0004 | - |
| 1.1309 | 10080 | 0.0038 | - |
| 1.1321 | 10090 | 0.0004 | - |
| 1.1332 | 10100 | 0.0001 | - |
| 1.1343 | 10110 | 0.0001 | - |
| 1.1354 | 10120 | 0.0001 | - |
| 1.1365 | 10130 | 0.0027 | - |
| 1.1377 | 10140 | 0.0009 | - |
| 1.1388 | 10150 | 0.0002 | - |
| 1.1399 | 10160 | 0.0004 | - |
| 1.1410 | 10170 | 0.0005 | - |
| 1.1422 | 10180 | 0.0012 | - |
| 1.1433 | 10190 | 0.0002 | - |
| 1.1444 | 10200 | 0.0003 | - |
| 1.1455 | 10210 | 0.0074 | - |
| 1.1466 | 10220 | 0.0001 | - |
| 1.1478 | 10230 | 0.001 | - |
| 1.1489 | 10240 | 0.0001 | - |
| 1.1500 | 10250 | 0.0002 | - |
| 1.1511 | 10260 | 0.0001 | - |
| 1.1522 | 10270 | 0.0001 | - |
| 1.1534 | 10280 | 0.0003 | - |
| 1.1545 | 10290 | 0.0001 | - |
| 1.1556 | 10300 | 0.0001 | - |
| 1.1567 | 10310 | 0.0182 | - |
| 1.1579 | 10320 | 0.0002 | - |
| 1.1590 | 10330 | 0.0023 | - |
| 1.1601 | 10340 | 0.0002 | - |
| 1.1612 | 10350 | 0.0005 | - |
| 1.1623 | 10360 | 0.0005 | - |
| 1.1635 | 10370 | 0.0001 | - |
| 1.1646 | 10380 | 0.0014 | - |
| 1.1657 | 10390 | 0.0001 | - |
| 1.1668 | 10400 | 0.0014 | - |
| 1.1680 | 10410 | 0.0001 | - |
| 1.1691 | 10420 | 0.0004 | - |
| 1.1702 | 10430 | 0.0001 | - |
| 1.1713 | 10440 | 0.0008 | - |
| 1.1724 | 10450 | 0.3055 | - |
| 1.1736 | 10460 | 0.0 | - |
| 1.1747 | 10470 | 0.0001 | - |
| 1.1758 | 10480 | 0.0007 | - |
| 1.1769 | 10490 | 0.0001 | - |
| 1.1781 | 10500 | 0.0001 | - |
| 1.1792 | 10510 | 0.0001 | - |
| 1.1803 | 10520 | 0.0 | - |
| 1.1814 | 10530 | 0.0001 | - |
| 1.1825 | 10540 | 0.0 | - |
| 1.1837 | 10550 | 0.0009 | - |
| 1.1848 | 10560 | 0.0001 | - |
| 1.1859 | 10570 | 0.0002 | - |
| 1.1870 | 10580 | 0.0001 | - |
| 1.1882 | 10590 | 0.0003 | - |
| 1.1893 | 10600 | 0.0002 | - |
| 1.1904 | 10610 | 0.0004 | - |
| 1.1915 | 10620 | 0.0002 | - |
| 1.1926 | 10630 | 0.0001 | - |
| 1.1938 | 10640 | 0.0002 | - |
| 1.1949 | 10650 | 0.0006 | - |
| 1.1960 | 10660 | 0.0032 | - |
| 1.1971 | 10670 | 0.0002 | - |
| 1.1982 | 10680 | 0.0004 | - |
| 1.1994 | 10690 | 0.0001 | - |
| 1.2005 | 10700 | 0.0004 | - |
| 1.2016 | 10710 | 0.0001 | - |
| 1.2027 | 10720 | 0.0001 | - |
| 1.2039 | 10730 | 0.0002 | - |
| 1.2050 | 10740 | 0.0007 | - |
| 1.2061 | 10750 | 0.0031 | - |
| 1.2072 | 10760 | 0.0002 | - |
| 1.2083 | 10770 | 0.0002 | - |
| 1.2095 | 10780 | 0.0001 | - |
| 1.2106 | 10790 | 0.0007 | - |
| 1.2117 | 10800 | 0.0001 | - |
| 1.2128 | 10810 | 0.0005 | - |
| 1.2140 | 10820 | 0.0 | - |
| 1.2151 | 10830 | 0.0011 | - |
| 1.2162 | 10840 | 0.0001 | - |
| 1.2173 | 10850 | 0.0004 | - |
| 1.2184 | 10860 | 0.0002 | - |
| 1.2196 | 10870 | 0.0001 | - |
| 1.2207 | 10880 | 0.0001 | - |
| 1.2218 | 10890 | 0.0005 | - |
| 1.2229 | 10900 | 0.0001 | - |
| 1.2241 | 10910 | 0.0001 | - |
| 1.2252 | 10920 | 0.001 | - |
| 1.2263 | 10930 | 0.0003 | - |
| 1.2274 | 10940 | 0.0004 | - |
| 1.2285 | 10950 | 0.0005 | - |
| 1.2297 | 10960 | 0.0001 | - |
| 1.2308 | 10970 | 0.0003 | - |
| 1.2319 | 10980 | 0.0001 | - |
| 1.2330 | 10990 | 0.0001 | - |
| 1.2342 | 11000 | 0.0001 | - |
| 1.2353 | 11010 | 0.0001 | - |
| 1.2364 | 11020 | 0.0005 | - |
| 1.2375 | 11030 | 0.0001 | - |
| 1.2386 | 11040 | 0.0001 | - |
| 1.2398 | 11050 | 0.0002 | - |
| 1.2409 | 11060 | 0.0002 | - |
| 1.2420 | 11070 | 0.0005 | - |
| 1.2431 | 11080 | 0.0001 | - |
| 1.2442 | 11090 | 0.0004 | - |
| 1.2454 | 11100 | 0.0001 | - |
| 1.2465 | 11110 | 0.0004 | - |
| 1.2476 | 11120 | 0.0007 | - |
| 1.2487 | 11130 | 0.0008 | - |
| 1.2499 | 11140 | 0.0001 | - |
| 1.2510 | 11150 | 0.0008 | - |
| 1.2521 | 11160 | 0.0002 | - |
| 1.2532 | 11170 | 0.0002 | - |
| 1.2543 | 11180 | 0.0003 | - |
| 1.2555 | 11190 | 0.0009 | - |
| 1.2566 | 11200 | 0.0008 | - |
| 1.2577 | 11210 | 0.0007 | - |
| 1.2588 | 11220 | 0.0002 | - |
| 1.2600 | 11230 | 0.0 | - |
| 1.2611 | 11240 | 0.3249 | - |
| 1.2622 | 11250 | 0.0 | - |
| 1.2633 | 11260 | 0.0001 | - |
| 1.2644 | 11270 | 0.0002 | - |
| 1.2656 | 11280 | 0.0019 | - |
| 1.2667 | 11290 | 0.0001 | - |
| 1.2678 | 11300 | 0.0 | - |
| 1.2689 | 11310 | 0.0 | - |
| 1.2701 | 11320 | 0.0001 | - |
| 1.2712 | 11330 | 0.0011 | - |
| 1.2723 | 11340 | 0.0001 | - |
| 1.2734 | 11350 | 0.0001 | - |
| 1.2745 | 11360 | 0.0002 | - |
| 1.2757 | 11370 | 0.0001 | - |
| 1.2768 | 11380 | 0.0002 | - |
| 1.2779 | 11390 | 0.0004 | - |
| 1.2790 | 11400 | 0.0 | - |
| 1.2802 | 11410 | 0.0 | - |
| 1.2813 | 11420 | 0.0001 | - |
| 1.2824 | 11430 | 0.0004 | - |
| 1.2835 | 11440 | 0.0001 | - |
| 1.2846 | 11450 | 0.0006 | - |
| 1.2858 | 11460 | 0.0002 | - |
| 1.2869 | 11470 | 0.0004 | - |
| 1.2880 | 11480 | 0.0001 | - |
| 1.2891 | 11490 | 0.0 | - |
| 1.2903 | 11500 | 0.0001 | - |
| 1.2914 | 11510 | 0.0003 | - |
| 1.2925 | 11520 | 0.0001 | - |
| 1.2936 | 11530 | 0.0002 | - |
| 1.2947 | 11540 | 0.0003 | - |
| 1.2959 | 11550 | 0.0001 | - |
| 1.2970 | 11560 | 0.0001 | - |
| 1.2981 | 11570 | 0.0001 | - |
| 1.2992 | 11580 | 0.0003 | - |
| 1.3003 | 11590 | 0.0 | - |
| 1.3015 | 11600 | 0.0002 | - |
| 1.3026 | 11610 | 0.0001 | - |
| 1.3037 | 11620 | 0.0012 | - |
| 1.3048 | 11630 | 0.0002 | - |
| 1.3060 | 11640 | 0.0001 | - |
| 1.3071 | 11650 | 0.0001 | - |
| 1.3082 | 11660 | 0.0002 | - |
| 1.3093 | 11670 | 0.0001 | - |
| 1.3104 | 11680 | 0.0012 | - |
| 1.3116 | 11690 | 0.0002 | - |
| 1.3127 | 11700 | 0.0 | - |
| 1.3138 | 11710 | 0.0001 | - |
| 1.3149 | 11720 | 0.0002 | - |
| 1.3161 | 11730 | 0.0008 | - |
| 1.3172 | 11740 | 0.0001 | - |
| 1.3183 | 11750 | 0.0003 | - |
| 1.3194 | 11760 | 0.0001 | - |
| 1.3205 | 11770 | 0.0001 | - |
| 1.3217 | 11780 | 0.0004 | - |
| 1.3228 | 11790 | 0.0002 | - |
| 1.3239 | 11800 | 0.0005 | - |
| 1.3250 | 11810 | 0.0001 | - |
| 1.3262 | 11820 | 0.0001 | - |
| 1.3273 | 11830 | 0.0001 | - |
| 1.3284 | 11840 | 0.0001 | - |
| 1.3295 | 11850 | 0.0001 | - |
| 1.3306 | 11860 | 0.0001 | - |
| 1.3318 | 11870 | 0.0001 | - |
| 1.3329 | 11880 | 0.0002 | - |
| 1.3340 | 11890 | 0.0001 | - |
| 1.3351 | 11900 | 0.0 | - |
| 1.3363 | 11910 | 0.0006 | - |
| 1.3374 | 11920 | 0.0001 | - |
| 1.3385 | 11930 | 0.0001 | - |
| 1.3396 | 11940 | 0.0002 | - |
| 1.3407 | 11950 | 0.0003 | - |
| 1.3419 | 11960 | 0.0002 | - |
| 1.3430 | 11970 | 0.0001 | - |
| 1.3441 | 11980 | 0.0002 | - |
| 1.3452 | 11990 | 0.0011 | - |
| 1.3463 | 12000 | 0.0008 | - |
| 1.3475 | 12010 | 0.0 | - |
| 1.3486 | 12020 | 0.0003 | - |
| 1.3497 | 12030 | 0.0001 | - |
| 1.3508 | 12040 | 0.0003 | - |
| 1.3520 | 12050 | 0.0005 | - |
| 1.3531 | 12060 | 0.0001 | - |
| 1.3542 | 12070 | 0.0001 | - |
| 1.3553 | 12080 | 0.0001 | - |
| 1.3564 | 12090 | 0.0042 | - |
| 1.3576 | 12100 | 0.0008 | - |
| 1.3587 | 12110 | 0.0001 | - |
| 1.3598 | 12120 | 0.0001 | - |
| 1.3609 | 12130 | 0.0011 | - |
| 1.3621 | 12140 | 0.0002 | - |
| 1.3632 | 12150 | 0.0001 | - |
| 1.3643 | 12160 | 0.0015 | - |
| 1.3654 | 12170 | 0.0004 | - |
| 1.3665 | 12180 | 0.0001 | - |
| 1.3677 | 12190 | 0.0001 | - |
| 1.3688 | 12200 | 0.0001 | - |
| 1.3699 | 12210 | 0.0001 | - |
| 1.3710 | 12220 | 0.0 | - |
| 1.3722 | 12230 | 0.0001 | - |
| 1.3733 | 12240 | 0.0005 | - |
| 1.3744 | 12250 | 0.0002 | - |
| 1.3755 | 12260 | 0.0001 | - |
| 1.3766 | 12270 | 0.0003 | - |
| 1.3778 | 12280 | 0.0006 | - |
| 1.3789 | 12290 | 0.0002 | - |
| 1.3800 | 12300 | 0.0002 | - |
| 1.3811 | 12310 | 0.0002 | - |
| 1.3823 | 12320 | 0.0001 | - |
| 1.3834 | 12330 | 0.0001 | - |
| 1.3845 | 12340 | 0.0 | - |
| 1.3856 | 12350 | 0.0003 | - |
| 1.3867 | 12360 | 0.0001 | - |
| 1.3879 | 12370 | 0.0001 | - |
| 1.3890 | 12380 | 0.0001 | - |
| 1.3901 | 12390 | 0.0011 | - |
| 1.3912 | 12400 | 0.0001 | - |
| 1.3923 | 12410 | 0.0001 | - |
| 1.3935 | 12420 | 0.0001 | - |
| 1.3946 | 12430 | 0.0001 | - |
| 1.3957 | 12440 | 0.0001 | - |
| 1.3968 | 12450 | 0.0001 | - |
| 1.3980 | 12460 | 0.0001 | - |
| 1.3991 | 12470 | 0.0002 | - |
| 1.4002 | 12480 | 0.0002 | - |
| 1.4013 | 12490 | 0.0002 | - |
| 1.4024 | 12500 | 0.0001 | - |
| 1.4036 | 12510 | 0.0027 | - |
| 1.4047 | 12520 | 0.0001 | - |
| 1.4058 | 12530 | 0.0001 | - |
| 1.4069 | 12540 | 0.0003 | - |
| 1.4081 | 12550 | 0.0025 | - |
| 1.4092 | 12560 | 0.0001 | - |
| 1.4103 | 12570 | 0.0005 | - |
| 1.4114 | 12580 | 0.0002 | - |
| 1.4125 | 12590 | 0.0007 | - |
| 1.4137 | 12600 | 0.0001 | - |
| 1.4148 | 12610 | 0.0001 | - |
| 1.4159 | 12620 | 0.0001 | - |
| 1.4170 | 12630 | 0.0001 | - |
| 1.4182 | 12640 | 0.0011 | - |
| 1.4193 | 12650 | 0.0026 | - |
| 1.4204 | 12660 | 0.0002 | - |
| 1.4215 | 12670 | 0.0003 | - |
| 1.4226 | 12680 | 0.0 | - |
| 1.4238 | 12690 | 0.0001 | - |
| 1.4249 | 12700 | 0.0004 | - |
| 1.4260 | 12710 | 0.0002 | - |
| 1.4271 | 12720 | 0.0001 | - |
| 1.4283 | 12730 | 0.0002 | - |
| 1.4294 | 12740 | 0.0003 | - |
| 1.4305 | 12750 | 0.0012 | - |
| 1.4316 | 12760 | 0.0011 | - |
| 1.4327 | 12770 | 0.0006 | - |
| 1.4339 | 12780 | 0.0001 | - |
| 1.4350 | 12790 | 0.0 | - |
| 1.4361 | 12800 | 0.0001 | - |
| 1.4372 | 12810 | 0.0002 | - |
| 1.4383 | 12820 | 0.0003 | - |
| 1.4395 | 12830 | 0.0001 | - |
| 1.4406 | 12840 | 0.0001 | - |
| 1.4417 | 12850 | 0.0 | - |
| 1.4428 | 12860 | 0.0002 | - |
| 1.4440 | 12870 | 0.0 | - |
| 1.4451 | 12880 | 0.0004 | - |
| 1.4462 | 12890 | 0.0001 | - |
| 1.4473 | 12900 | 0.0015 | - |
| 1.4484 | 12910 | 0.0004 | - |
| 1.4496 | 12920 | 0.0001 | - |
| 1.4507 | 12930 | 0.0001 | - |
| 1.4518 | 12940 | 0.0001 | - |
| 1.4529 | 12950 | 0.0 | - |
| 1.4541 | 12960 | 0.0001 | - |
| 1.4552 | 12970 | 0.0002 | - |
| 1.4563 | 12980 | 0.0001 | - |
| 1.4574 | 12990 | 0.0001 | - |
| 1.4585 | 13000 | 0.0001 | - |
| 1.4597 | 13010 | 0.0007 | - |
| 1.4608 | 13020 | 0.0002 | - |
| 1.4619 | 13030 | 0.0003 | - |
| 1.4630 | 13040 | 0.0003 | - |
| 1.4642 | 13050 | 0.0001 | - |
| 1.4653 | 13060 | 0.0006 | - |
| 1.4664 | 13070 | 0.0025 | - |
| 1.4675 | 13080 | 0.0002 | - |
| 1.4686 | 13090 | 0.0005 | - |
| 1.4698 | 13100 | 0.0001 | - |
| 1.4709 | 13110 | 0.0 | - |
| 1.4720 | 13120 | 0.0002 | - |
| 1.4731 | 13130 | 0.0 | - |
| 1.4743 | 13140 | 0.0001 | - |
| 1.4754 | 13150 | 0.0001 | - |
| 1.4765 | 13160 | 0.0002 | - |
| 1.4776 | 13170 | 0.0 | - |
| 1.4787 | 13180 | 0.0001 | - |
| 1.4799 | 13190 | 0.0004 | - |
| 1.4810 | 13200 | 0.0 | - |
| 1.4821 | 13210 | 0.0008 | - |
| 1.4832 | 13220 | 0.0001 | - |
| 1.4843 | 13230 | 0.0001 | - |
| 1.4855 | 13240 | 0.0002 | - |
| 1.4866 | 13250 | 0.0018 | - |
| 1.4877 | 13260 | 0.0001 | - |
| 1.4888 | 13270 | 0.0003 | - |
| 1.4900 | 13280 | 0.0198 | - |
| 1.4911 | 13290 | 0.0007 | - |
| 1.4922 | 13300 | 0.0001 | - |
| 1.4933 | 13310 | 0.0008 | - |
| 1.4944 | 13320 | 0.0002 | - |
| 1.4956 | 13330 | 0.0006 | - |
| 1.4967 | 13340 | 0.0001 | - |
| 1.4978 | 13350 | 0.0001 | - |
| 1.4989 | 13360 | 0.0004 | - |
| 1.5001 | 13370 | 0.0001 | - |
| 1.5012 | 13380 | 0.0007 | - |
| 1.5023 | 13390 | 0.0001 | - |
| 1.5034 | 13400 | 0.0001 | - |
| 1.5045 | 13410 | 0.0 | - |
| 1.5057 | 13420 | 0.0005 | - |
| 1.5068 | 13430 | 0.0001 | - |
| 1.5079 | 13440 | 0.0002 | - |
| 1.5090 | 13450 | 0.0005 | - |
| 1.5102 | 13460 | 0.0011 | - |
| 1.5113 | 13470 | 0.0004 | - |
| 1.5124 | 13480 | 0.0001 | - |
| 1.5135 | 13490 | 0.0009 | - |
| 1.5146 | 13500 | 0.0002 | - |
| 1.5158 | 13510 | 0.0003 | - |
| 1.5169 | 13520 | 0.0001 | - |
| 1.5180 | 13530 | 0.0001 | - |
| 1.5191 | 13540 | 0.0002 | - |
| 1.5203 | 13550 | 0.0001 | - |
| 1.5214 | 13560 | 0.0 | - |
| 1.5225 | 13570 | 0.0 | - |
| 1.5236 | 13580 | 0.0 | - |
| 1.5247 | 13590 | 0.0019 | - |
| 1.5259 | 13600 | 0.004 | - |
| 1.5270 | 13610 | 0.0004 | - |
| 1.5281 | 13620 | 0.0001 | - |
| 1.5292 | 13630 | 0.0004 | - |
| 1.5303 | 13640 | 0.0001 | - |
| 1.5315 | 13650 | 0.0003 | - |
| 1.5326 | 13660 | 0.0001 | - |
| 1.5337 | 13670 | 0.0001 | - |
| 1.5348 | 13680 | 0.0002 | - |
| 1.5360 | 13690 | 0.0002 | - |
| 1.5371 | 13700 | 0.0001 | - |
| 1.5382 | 13710 | 0.0001 | - |
| 1.5393 | 13720 | 0.0001 | - |
| 1.5404 | 13730 | 0.0001 | - |
| 1.5416 | 13740 | 0.0001 | - |
| 1.5427 | 13750 | 0.0001 | - |
| 1.5438 | 13760 | 0.0003 | - |
| 1.5449 | 13770 | 0.0001 | - |
| 1.5461 | 13780 | 0.0001 | - |
| 1.5472 | 13790 | 0.0002 | - |
| 1.5483 | 13800 | 0.0 | - |
| 1.5494 | 13810 | 0.0 | - |
| 1.5505 | 13820 | 0.0001 | - |
| 1.5517 | 13830 | 0.0002 | - |
| 1.5528 | 13840 | 0.0001 | - |
| 1.5539 | 13850 | 0.0002 | - |
| 1.5550 | 13860 | 0.0004 | - |
| 1.5562 | 13870 | 0.0001 | - |
| 1.5573 | 13880 | 0.0 | - |
| 1.5584 | 13890 | 0.0002 | - |
| 1.5595 | 13900 | 0.0003 | - |
| 1.5606 | 13910 | 0.0001 | - |
| 1.5618 | 13920 | 0.0001 | - |
| 1.5629 | 13930 | 0.0001 | - |
| 1.5640 | 13940 | 0.0 | - |
| 1.5651 | 13950 | 0.0003 | - |
| 1.5663 | 13960 | 0.0001 | - |
| 1.5674 | 13970 | 0.0007 | - |
| 1.5685 | 13980 | 0.0003 | - |
| 1.5696 | 13990 | 0.0006 | - |
| 1.5707 | 14000 | 0.0001 | - |
| 1.5719 | 14010 | 0.0009 | - |
| 1.5730 | 14020 | 0.0001 | - |
| 1.5741 | 14030 | 0.3245 | - |
| 1.5752 | 14040 | 0.0003 | - |
| 1.5763 | 14050 | 0.0003 | - |
| 1.5775 | 14060 | 0.0001 | - |
| 1.5786 | 14070 | 0.0001 | - |
| 1.5797 | 14080 | 0.0001 | - |
| 1.5808 | 14090 | 0.0001 | - |
| 1.5820 | 14100 | 0.0 | - |
| 1.5831 | 14110 | 0.0001 | - |
| 1.5842 | 14120 | 0.0017 | - |
| 1.5853 | 14130 | 0.0001 | - |
| 1.5864 | 14140 | 0.0001 | - |
| 1.5876 | 14150 | 0.0002 | - |
| 1.5887 | 14160 | 0.0004 | - |
| 1.5898 | 14170 | 0.0001 | - |
| 1.5909 | 14180 | 0.0005 | - |
| 1.5921 | 14190 | 0.0002 | - |
| 1.5932 | 14200 | 0.0002 | - |
| 1.5943 | 14210 | 0.0134 | - |
| 1.5954 | 14220 | 0.0001 | - |
| 1.5965 | 14230 | 0.0005 | - |
| 1.5977 | 14240 | 0.0001 | - |
| 1.5988 | 14250 | 0.0 | - |
| 1.5999 | 14260 | 0.0001 | - |
| 1.6010 | 14270 | 0.0004 | - |
| 1.6022 | 14280 | 0.0004 | - |
| 1.6033 | 14290 | 0.0001 | - |
| 1.6044 | 14300 | 0.0001 | - |
| 1.6055 | 14310 | 0.0012 | - |
| 1.6066 | 14320 | 0.0003 | - |
| 1.6078 | 14330 | 0.0001 | - |
| 1.6089 | 14340 | 0.0001 | - |
| 1.6100 | 14350 | 0.0004 | - |
| 1.6111 | 14360 | 0.0012 | - |
| 1.6123 | 14370 | 0.0002 | - |
| 1.6134 | 14380 | 0.001 | - |
| 1.6145 | 14390 | 0.0002 | - |
| 1.6156 | 14400 | 0.0004 | - |
| 1.6167 | 14410 | 0.0001 | - |
| 1.6179 | 14420 | 0.0001 | - |
| 1.6190 | 14430 | 0.0024 | - |
| 1.6201 | 14440 | 0.0002 | - |
| 1.6212 | 14450 | 0.0001 | - |
| 1.6223 | 14460 | 0.0001 | - |
| 1.6235 | 14470 | 0.0002 | - |
| 1.6246 | 14480 | 0.0001 | - |
| 1.6257 | 14490 | 0.0 | - |
| 1.6268 | 14500 | 0.0 | - |
| 1.6280 | 14510 | 0.0 | - |
| 1.6291 | 14520 | 0.0002 | - |
| 1.6302 | 14530 | 0.0001 | - |
| 1.6313 | 14540 | 0.0005 | - |
| 1.6324 | 14550 | 0.0 | - |
| 1.6336 | 14560 | 0.0001 | - |
| 1.6347 | 14570 | 0.0001 | - |
| 1.6358 | 14580 | 0.0001 | - |
| 1.6369 | 14590 | 0.0 | - |
| 1.6381 | 14600 | 0.0002 | - |
| 1.6392 | 14610 | 0.0004 | - |
| 1.6403 | 14620 | 0.0001 | - |
| 1.6414 | 14630 | 0.0009 | - |
| 1.6425 | 14640 | 0.0001 | - |
| 1.6437 | 14650 | 0.0001 | - |
| 1.6448 | 14660 | 0.0001 | - |
| 1.6459 | 14670 | 0.0003 | - |
| 1.6470 | 14680 | 0.0003 | - |
| 1.6482 | 14690 | 0.0 | - |
| 1.6493 | 14700 | 0.0049 | - |
| 1.6504 | 14710 | 0.0001 | - |
| 1.6515 | 14720 | 0.0 | - |
| 1.6526 | 14730 | 0.0002 | - |
| 1.6538 | 14740 | 0.0001 | - |
| 1.6549 | 14750 | 0.0001 | - |
| 1.6560 | 14760 | 0.0001 | - |
| 1.6571 | 14770 | 0.0002 | - |
| 1.6583 | 14780 | 0.0001 | - |
| 1.6594 | 14790 | 0.0002 | - |
| 1.6605 | 14800 | 0.0 | - |
| 1.6616 | 14810 | 0.0001 | - |
| 1.6627 | 14820 | 0.0 | - |
| 1.6639 | 14830 | 0.0001 | - |
| 1.6650 | 14840 | 0.0001 | - |
| 1.6661 | 14850 | 0.0001 | - |
| 1.6672 | 14860 | 0.0001 | - |
| 1.6683 | 14870 | 0.0 | - |
| 1.6695 | 14880 | 0.0001 | - |
| 1.6706 | 14890 | 0.0002 | - |
| 1.6717 | 14900 | 0.0001 | - |
| 1.6728 | 14910 | 0.0002 | - |
| 1.6740 | 14920 | 0.0002 | - |
| 1.6751 | 14930 | 0.0 | - |
| 1.6762 | 14940 | 0.0 | - |
| 1.6773 | 14950 | 0.0004 | - |
| 1.6784 | 14960 | 0.0001 | - |
| 1.6796 | 14970 | 0.0003 | - |
| 1.6807 | 14980 | 0.0 | - |
| 1.6818 | 14990 | 0.0001 | - |
| 1.6829 | 15000 | 0.0001 | - |
| 1.6841 | 15010 | 0.0001 | - |
| 1.6852 | 15020 | 0.0001 | - |
| 1.6863 | 15030 | 0.0007 | - |
| 1.6874 | 15040 | 0.0005 | - |
| 1.6885 | 15050 | 0.0001 | - |
| 1.6897 | 15060 | 0.0001 | - |
| 1.6908 | 15070 | 0.0003 | - |
| 1.6919 | 15080 | 0.0001 | - |
| 1.6930 | 15090 | 0.0 | - |
| 1.6942 | 15100 | 0.0004 | - |
| 1.6953 | 15110 | 0.0042 | - |
| 1.6964 | 15120 | 0.0001 | - |
| 1.6975 | 15130 | 0.0005 | - |
| 1.6986 | 15140 | 0.0002 | - |
| 1.6998 | 15150 | 0.0003 | - |
| 1.7009 | 15160 | 0.0001 | - |
| 1.7020 | 15170 | 0.0001 | - |
| 1.7031 | 15180 | 0.0001 | - |
| 1.7043 | 15190 | 0.0004 | - |
| 1.7054 | 15200 | 0.0001 | - |
| 1.7065 | 15210 | 0.0003 | - |
| 1.7076 | 15220 | 0.0 | - |
| 1.7087 | 15230 | 0.0001 | - |
| 1.7099 | 15240 | 0.0001 | - |
| 1.7110 | 15250 | 0.0074 | - |
| 1.7121 | 15260 | 0.0012 | - |
| 1.7132 | 15270 | 0.3243 | - |
| 1.7143 | 15280 | 0.001 | - |
| 1.7155 | 15290 | 0.0 | - |
| 1.7166 | 15300 | 0.0 | - |
| 1.7177 | 15310 | 0.0001 | - |
| 1.7188 | 15320 | 0.0005 | - |
| 1.7200 | 15330 | 0.0001 | - |
| 1.7211 | 15340 | 0.0001 | - |
| 1.7222 | 15350 | 0.0002 | - |
| 1.7233 | 15360 | 0.0003 | - |
| 1.7244 | 15370 | 0.0 | - |
| 1.7256 | 15380 | 0.0 | - |
| 1.7267 | 15390 | 0.0002 | - |
| 1.7278 | 15400 | 0.0 | - |
| 1.7289 | 15410 | 0.0002 | - |
| 1.7301 | 15420 | 0.0001 | - |
| 1.7312 | 15430 | 0.0003 | - |
| 1.7323 | 15440 | 0.0001 | - |
| 1.7334 | 15450 | 0.0001 | - |
| 1.7345 | 15460 | 0.0 | - |
| 1.7357 | 15470 | 0.0001 | - |
| 1.7368 | 15480 | 0.0001 | - |
| 1.7379 | 15490 | 0.0001 | - |
| 1.7390 | 15500 | 0.0 | - |
| 1.7402 | 15510 | 0.0001 | - |
| 1.7413 | 15520 | 0.0002 | - |
| 1.7424 | 15530 | 0.3123 | - |
| 1.7435 | 15540 | 0.0001 | - |
| 1.7446 | 15550 | 0.0004 | - |
| 1.7458 | 15560 | 0.0002 | - |
| 1.7469 | 15570 | 0.0002 | - |
| 1.7480 | 15580 | 0.0003 | - |
| 1.7491 | 15590 | 0.0001 | - |
| 1.7503 | 15600 | 0.0001 | - |
| 1.7514 | 15610 | 0.0001 | - |
| 1.7525 | 15620 | 0.0002 | - |
| 1.7536 | 15630 | 0.0002 | - |
| 1.7547 | 15640 | 0.0001 | - |
| 1.7559 | 15650 | 0.0001 | - |
| 1.7570 | 15660 | 0.001 | - |
| 1.7581 | 15670 | 0.0001 | - |
| 1.7592 | 15680 | 0.0002 | - |
| 1.7604 | 15690 | 0.0002 | - |
| 1.7615 | 15700 | 0.0 | - |
| 1.7626 | 15710 | 0.0001 | - |
| 1.7637 | 15720 | 0.0003 | - |
| 1.7648 | 15730 | 0.0003 | - |
| 1.7660 | 15740 | 0.0001 | - |
| 1.7671 | 15750 | 0.0 | - |
| 1.7682 | 15760 | 0.0 | - |
| 1.7693 | 15770 | 0.0004 | - |
| 1.7704 | 15780 | 0.001 | - |
| 1.7716 | 15790 | 0.0002 | - |
| 1.7727 | 15800 | 0.0001 | - |
| 1.7738 | 15810 | 0.0003 | - |
| 1.7749 | 15820 | 0.0006 | - |
| 1.7761 | 15830 | 0.0002 | - |
| 1.7772 | 15840 | 0.0007 | - |
| 1.7783 | 15850 | 0.0013 | - |
| 1.7794 | 15860 | 0.0001 | - |
| 1.7805 | 15870 | 0.0001 | - |
| 1.7817 | 15880 | 0.0 | - |
| 1.7828 | 15890 | 0.0001 | - |
| 1.7839 | 15900 | 0.0005 | - |
| 1.7850 | 15910 | 0.0002 | - |
| 1.7862 | 15920 | 0.001 | - |
| 1.7873 | 15930 | 0.0002 | - |
| 1.7884 | 15940 | 0.0005 | - |
| 1.7895 | 15950 | 0.0002 | - |
| 1.7906 | 15960 | 0.0 | - |
| 1.7918 | 15970 | 0.0001 | - |
| 1.7929 | 15980 | 0.0002 | - |
| 1.7940 | 15990 | 0.0002 | - |
| 1.7951 | 16000 | 0.0001 | - |
| 1.7963 | 16010 | 0.0001 | - |
| 1.7974 | 16020 | 0.0025 | - |
| 1.7985 | 16030 | 0.0001 | - |
| 1.7996 | 16040 | 0.0001 | - |
| 1.8007 | 16050 | 0.0 | - |
| 1.8019 | 16060 | 0.0001 | - |
| 1.8030 | 16070 | 0.0001 | - |
| 1.8041 | 16080 | 0.0 | - |
| 1.8052 | 16090 | 0.0 | - |
| 1.8064 | 16100 | 0.0001 | - |
| 1.8075 | 16110 | 0.0 | - |
| 1.8086 | 16120 | 0.0001 | - |
| 1.8097 | 16130 | 0.0 | - |
| 1.8108 | 16140 | 0.0077 | - |
| 1.8120 | 16150 | 0.0002 | - |
| 1.8131 | 16160 | 0.0007 | - |
| 1.8142 | 16170 | 0.0003 | - |
| 1.8153 | 16180 | 0.0003 | - |
| 1.8164 | 16190 | 0.0 | - |
| 1.8176 | 16200 | 0.0002 | - |
| 1.8187 | 16210 | 0.0 | - |
| 1.8198 | 16220 | 0.0002 | - |
| 1.8209 | 16230 | 0.0001 | - |
| 1.8221 | 16240 | 0.0001 | - |
| 1.8232 | 16250 | 0.0001 | - |
| 1.8243 | 16260 | 0.0002 | - |
| 1.8254 | 16270 | 0.0 | - |
| 1.8265 | 16280 | 0.0 | - |
| 1.8277 | 16290 | 0.0 | - |
| 1.8288 | 16300 | 0.0003 | - |
| 1.8299 | 16310 | 0.0001 | - |
| 1.8310 | 16320 | 0.0 | - |
| 1.8322 | 16330 | 0.0002 | - |
| 1.8333 | 16340 | 0.0001 | - |
| 1.8344 | 16350 | 0.0001 | - |
| 1.8355 | 16360 | 0.0001 | - |
| 1.8366 | 16370 | 0.0002 | - |
| 1.8378 | 16380 | 0.0001 | - |
| 1.8389 | 16390 | 0.0002 | - |
| 1.8400 | 16400 | 0.0002 | - |
| 1.8411 | 16410 | 0.0 | - |
| 1.8423 | 16420 | 0.0001 | - |
| 1.8434 | 16430 | 0.0001 | - |
| 1.8445 | 16440 | 0.0001 | - |
| 1.8456 | 16450 | 0.0 | - |
| 1.8467 | 16460 | 0.0001 | - |
| 1.8479 | 16470 | 0.0001 | - |
| 1.8490 | 16480 | 0.0036 | - |
| 1.8501 | 16490 | 0.0001 | - |
| 1.8512 | 16500 | 0.0002 | - |
| 1.8524 | 16510 | 0.0003 | - |
| 1.8535 | 16520 | 0.0 | - |
| 1.8546 | 16530 | 0.0 | - |
| 1.8557 | 16540 | 0.0001 | - |
| 1.8568 | 16550 | 0.0001 | - |
| 1.8580 | 16560 | 0.0003 | - |
| 1.8591 | 16570 | 0.0001 | - |
| 1.8602 | 16580 | 0.0002 | - |
| 1.8613 | 16590 | 0.0008 | - |
| 1.8624 | 16600 | 0.0001 | - |
| 1.8636 | 16610 | 0.0001 | - |
| 1.8647 | 16620 | 0.0002 | - |
| 1.8658 | 16630 | 0.0 | - |
| 1.8669 | 16640 | 0.0001 | - |
| 1.8681 | 16650 | 0.0 | - |
| 1.8692 | 16660 | 0.0 | - |
| 1.8703 | 16670 | 0.0001 | - |
| 1.8714 | 16680 | 0.0001 | - |
| 1.8725 | 16690 | 0.0 | - |
| 1.8737 | 16700 | 0.0 | - |
| 1.8748 | 16710 | 0.0 | - |
| 1.8759 | 16720 | 0.0 | - |
| 1.8770 | 16730 | 0.0001 | - |
| 1.8782 | 16740 | 0.0001 | - |
| 1.8793 | 16750 | 0.0 | - |
| 1.8804 | 16760 | 0.0012 | - |
| 1.8815 | 16770 | 0.0001 | - |
| 1.8826 | 16780 | 0.0001 | - |
| 1.8838 | 16790 | 0.0001 | - |
| 1.8849 | 16800 | 0.0005 | - |
| 1.8860 | 16810 | 0.0 | - |
| 1.8871 | 16820 | 0.0002 | - |
| 1.8883 | 16830 | 0.0001 | - |
| 1.8894 | 16840 | 0.0 | - |
| 1.8905 | 16850 | 0.0001 | - |
| 1.8916 | 16860 | 0.0005 | - |
| 1.8927 | 16870 | 0.0 | - |
| 1.8939 | 16880 | 0.0002 | - |
| 1.8950 | 16890 | 0.0001 | - |
| 1.8961 | 16900 | 0.0001 | - |
| 1.8972 | 16910 | 0.0001 | - |
| 1.8984 | 16920 | 0.0001 | - |
| 1.8995 | 16930 | 0.0 | - |
| 1.9006 | 16940 | 0.0001 | - |
| 1.9017 | 16950 | 0.0001 | - |
| 1.9028 | 16960 | 0.0001 | - |
| 1.9040 | 16970 | 0.0001 | - |
| 1.9051 | 16980 | 0.0 | - |
| 1.9062 | 16990 | 0.0001 | - |
| 1.9073 | 17000 | 0.0001 | - |
| 1.9084 | 17010 | 0.0 | - |
| 1.9096 | 17020 | 0.0 | - |
| 1.9107 | 17030 | 0.0002 | - |
| 1.9118 | 17040 | 0.0025 | - |
| 1.9129 | 17050 | 0.0 | - |
| 1.9141 | 17060 | 0.0001 | - |
| 1.9152 | 17070 | 0.0001 | - |
| 1.9163 | 17080 | 0.0 | - |
| 1.9174 | 17090 | 0.0001 | - |
| 1.9185 | 17100 | 0.0 | - |
| 1.9197 | 17110 | 0.0006 | - |
| 1.9208 | 17120 | 0.0009 | - |
| 1.9219 | 17130 | 0.0007 | - |
| 1.9230 | 17140 | 0.0 | - |
| 1.9242 | 17150 | 0.0001 | - |
| 1.9253 | 17160 | 0.0 | - |
| 1.9264 | 17170 | 0.0 | - |
| 1.9275 | 17180 | 0.0001 | - |
| 1.9286 | 17190 | 0.0001 | - |
| 1.9298 | 17200 | 0.0001 | - |
| 1.9309 | 17210 | 0.0002 | - |
| 1.9320 | 17220 | 0.0 | - |
| 1.9331 | 17230 | 0.0011 | - |
| 1.9343 | 17240 | 0.0002 | - |
| 1.9354 | 17250 | 0.0001 | - |
| 1.9365 | 17260 | 0.0 | - |
| 1.9376 | 17270 | 0.0002 | - |
| 1.9387 | 17280 | 0.0002 | - |
| 1.9399 | 17290 | 0.0 | - |
| 1.9410 | 17300 | 0.0001 | - |
| 1.9421 | 17310 | 0.0001 | - |
| 1.9432 | 17320 | 0.0005 | - |
| 1.9444 | 17330 | 0.001 | - |
| 1.9455 | 17340 | 0.0 | - |
| 1.9466 | 17350 | 0.0 | - |
| 1.9477 | 17360 | 0.0 | - |
| 1.9488 | 17370 | 0.0 | - |
| 1.9500 | 17380 | 0.0 | - |
| 1.9511 | 17390 | 0.0004 | - |
| 1.9522 | 17400 | 0.0 | - |
| 1.9533 | 17410 | 0.0001 | - |
| 1.9544 | 17420 | 0.0003 | - |
| 1.9556 | 17430 | 0.0 | - |
| 1.9567 | 17440 | 0.0001 | - |
| 1.9578 | 17450 | 0.0001 | - |
| 1.9589 | 17460 | 0.0001 | - |
| 1.9601 | 17470 | 0.0001 | - |
| 1.9612 | 17480 | 0.0012 | - |
| 1.9623 | 17490 | 0.0009 | - |
| 1.9634 | 17500 | 0.0001 | - |
| 1.9645 | 17510 | 0.0002 | - |
| 1.9657 | 17520 | 0.0 | - |
| 1.9668 | 17530 | 0.0001 | - |
| 1.9679 | 17540 | 0.0001 | - |
| 1.9690 | 17550 | 0.0004 | - |
| 1.9702 | 17560 | 0.0002 | - |
| 1.9713 | 17570 | 0.0003 | - |
| 1.9724 | 17580 | 0.0001 | - |
| 1.9735 | 17590 | 0.0 | - |
| 1.9746 | 17600 | 0.0003 | - |
| 1.9758 | 17610 | 0.0002 | - |
| 1.9769 | 17620 | 0.0001 | - |
| 1.9780 | 17630 | 0.0002 | - |
| 1.9791 | 17640 | 0.0001 | - |
| 1.9803 | 17650 | 0.0 | - |
| 1.9814 | 17660 | 0.0001 | - |
| 1.9825 | 17670 | 0.0002 | - |
| 1.9836 | 17680 | 0.0 | - |
| 1.9847 | 17690 | 0.0 | - |
| 1.9859 | 17700 | 0.0002 | - |
| 1.9870 | 17710 | 0.0 | - |
| 1.9881 | 17720 | 0.0 | - |
| 1.9892 | 17730 | 0.0001 | - |
| 1.9904 | 17740 | 0.0003 | - |
| 1.9915 | 17750 | 0.0 | - |
| 1.9926 | 17760 | 0.0006 | - |
| 1.9937 | 17770 | 0.0001 | - |
| 1.9948 | 17780 | 0.0 | - |
| 1.9960 | 17790 | 0.0005 | - |
| 1.9971 | 17800 | 0.0009 | - |
| 1.9982 | 17810 | 0.0008 | - |
| 1.9993 | 17820 | 0.0001 | - |
| 2.0 | 17826 | - | 0.0282 |
| 2.0004 | 17830 | 0.0002 | - |
| 2.0016 | 17840 | 0.0001 | - |
| 2.0027 | 17850 | 0.0001 | - |
| 2.0038 | 17860 | 0.0 | - |
| 2.0049 | 17870 | 0.0001 | - |
| 2.0061 | 17880 | 0.0003 | - |
| 2.0072 | 17890 | 0.0002 | - |
| 2.0083 | 17900 | 0.0002 | - |
| 2.0094 | 17910 | 0.0001 | - |
| 2.0105 | 17920 | 0.0 | - |
| 2.0117 | 17930 | 0.0001 | - |
| 2.0128 | 17940 | 0.0005 | - |
| 2.0139 | 17950 | 0.0003 | - |
| 2.0150 | 17960 | 0.0 | - |
| 2.0162 | 17970 | 0.0002 | - |
| 2.0173 | 17980 | 0.0005 | - |
| 2.0184 | 17990 | 0.0002 | - |
| 2.0195 | 18000 | 0.0012 | - |
| 2.0206 | 18010 | 0.0011 | - |
| 2.0218 | 18020 | 0.0006 | - |
| 2.0229 | 18030 | 0.0001 | - |
| 2.0240 | 18040 | 0.0 | - |
| 2.0251 | 18050 | 0.0002 | - |
| 2.0263 | 18060 | 0.0003 | - |
| 2.0274 | 18070 | 0.0 | - |
| 2.0285 | 18080 | 0.0002 | - |
| 2.0296 | 18090 | 0.0 | - |
| 2.0307 | 18100 | 0.0001 | - |
| 2.0319 | 18110 | 0.0003 | - |
| 2.0330 | 18120 | 0.0 | - |
| 2.0341 | 18130 | 0.0001 | - |
| 2.0352 | 18140 | 0.0 | - |
| 2.0364 | 18150 | 0.0001 | - |
| 2.0375 | 18160 | 0.0002 | - |
| 2.0386 | 18170 | 0.0002 | - |
| 2.0397 | 18180 | 0.0003 | - |
| 2.0408 | 18190 | 0.0001 | - |
| 2.0420 | 18200 | 0.0004 | - |
| 2.0431 | 18210 | 0.0005 | - |
| 2.0442 | 18220 | 0.0 | - |
| 2.0453 | 18230 | 0.0003 | - |
| 2.0464 | 18240 | 0.0003 | - |
| 2.0476 | 18250 | 0.0 | - |
| 2.0487 | 18260 | 0.0002 | - |
| 2.0498 | 18270 | 0.0001 | - |
| 2.0509 | 18280 | 0.0002 | - |
| 2.0521 | 18290 | 0.0006 | - |
| 2.0532 | 18300 | 0.0002 | - |
| 2.0543 | 18310 | 0.0005 | - |
| 2.0554 | 18320 | 0.0001 | - |
| 2.0565 | 18330 | 0.0004 | - |
| 2.0577 | 18340 | 0.0 | - |
| 2.0588 | 18350 | 0.001 | - |
| 2.0599 | 18360 | 0.002 | - |
| 2.0610 | 18370 | 0.0003 | - |
| 2.0622 | 18380 | 0.0009 | - |
| 2.0633 | 18390 | 0.0001 | - |
| 2.0644 | 18400 | 0.0001 | - |
| 2.0655 | 18410 | 0.0003 | - |
| 2.0666 | 18420 | 0.0012 | - |
| 2.0678 | 18430 | 0.0009 | - |
| 2.0689 | 18440 | 0.0001 | - |
| 2.0700 | 18450 | 0.0001 | - |
| 2.0711 | 18460 | 0.0 | - |
| 2.0723 | 18470 | 0.0002 | - |
| 2.0734 | 18480 | 0.0006 | - |
| 2.0745 | 18490 | 0.0001 | - |
| 2.0756 | 18500 | 0.0001 | - |
| 2.0767 | 18510 | 0.0 | - |
| 2.0779 | 18520 | 0.0001 | - |
| 2.0790 | 18530 | 0.0002 | - |
| 2.0801 | 18540 | 0.0001 | - |
| 2.0812 | 18550 | 0.0 | - |
| 2.0824 | 18560 | 0.0004 | - |
| 2.0835 | 18570 | 0.0001 | - |
| 2.0846 | 18580 | 0.0034 | - |
| 2.0857 | 18590 | 0.0001 | - |
| 2.0868 | 18600 | 0.0002 | - |
| 2.0880 | 18610 | 0.0001 | - |
| 2.0891 | 18620 | 0.0002 | - |
| 2.0902 | 18630 | 0.0001 | - |
| 2.0913 | 18640 | 0.0001 | - |
| 2.0924 | 18650 | 0.0 | - |
| 2.0936 | 18660 | 0.0001 | - |
| 2.0947 | 18670 | 0.0001 | - |
| 2.0958 | 18680 | 0.0002 | - |
| 2.0969 | 18690 | 0.0001 | - |
| 2.0981 | 18700 | 0.002 | - |
| 2.0992 | 18710 | 0.0001 | - |
| 2.1003 | 18720 | 0.0001 | - |
| 2.1014 | 18730 | 0.0004 | - |
| 2.1025 | 18740 | 0.0004 | - |
| 2.1037 | 18750 | 0.0001 | - |
| 2.1048 | 18760 | 0.0003 | - |
| 2.1059 | 18770 | 0.0006 | - |
| 2.1070 | 18780 | 0.0001 | - |
| 2.1082 | 18790 | 0.0001 | - |
| 2.1093 | 18800 | 0.0003 | - |
| 2.1104 | 18810 | 0.0 | - |
| 2.1115 | 18820 | 0.0 | - |
| 2.1126 | 18830 | 0.0 | - |
| 2.1138 | 18840 | 0.0001 | - |
| 2.1149 | 18850 | 0.0003 | - |
| 2.1160 | 18860 | 0.0012 | - |
| 2.1171 | 18870 | 0.0001 | - |
| 2.1183 | 18880 | 0.0 | - |
| 2.1194 | 18890 | 0.0001 | - |
| 2.1205 | 18900 | 0.0001 | - |
| 2.1216 | 18910 | 0.0001 | - |
| 2.1227 | 18920 | 0.0 | - |
| 2.1239 | 18930 | 0.0 | - |
| 2.1250 | 18940 | 0.0002 | - |
| 2.1261 | 18950 | 0.0001 | - |
| 2.1272 | 18960 | 0.0 | - |
| 2.1284 | 18970 | 0.0 | - |
| 2.1295 | 18980 | 0.0012 | - |
| 2.1306 | 18990 | 0.0 | - |
| 2.1317 | 19000 | 0.0002 | - |
| 2.1328 | 19010 | 0.0002 | - |
| 2.1340 | 19020 | 0.0002 | - |
| 2.1351 | 19030 | 0.0001 | - |
| 2.1362 | 19040 | 0.0002 | - |
| 2.1373 | 19050 | 0.0001 | - |
| 2.1384 | 19060 | 0.0004 | - |
| 2.1396 | 19070 | 0.0003 | - |
| 2.1407 | 19080 | 0.0 | - |
| 2.1418 | 19090 | 0.0003 | - |
| 2.1429 | 19100 | 0.0001 | - |
| 2.1441 | 19110 | 0.0001 | - |
| 2.1452 | 19120 | 0.0002 | - |
| 2.1463 | 19130 | 0.0007 | - |
| 2.1474 | 19140 | 0.0002 | - |
| 2.1485 | 19150 | 0.0002 | - |
| 2.1497 | 19160 | 0.0 | - |
| 2.1508 | 19170 | 0.0001 | - |
| 2.1519 | 19180 | 0.0 | - |
| 2.1530 | 19190 | 0.0 | - |
| 2.1542 | 19200 | 0.0005 | - |
| 2.1553 | 19210 | 0.0001 | - |
| 2.1564 | 19220 | 0.0001 | - |
| 2.1575 | 19230 | 0.0001 | - |
| 2.1586 | 19240 | 0.0004 | - |
| 2.1598 | 19250 | 0.0 | - |
| 2.1609 | 19260 | 0.0001 | - |
| 2.1620 | 19270 | 0.0003 | - |
| 2.1631 | 19280 | 0.0 | - |
| 2.1643 | 19290 | 0.0003 | - |
| 2.1654 | 19300 | 0.0 | - |
| 2.1665 | 19310 | 0.0027 | - |
| 2.1676 | 19320 | 0.0001 | - |
| 2.1687 | 19330 | 0.0 | - |
| 2.1699 | 19340 | 0.0003 | - |
| 2.1710 | 19350 | 0.0 | - |
| 2.1721 | 19360 | 0.0 | - |
| 2.1732 | 19370 | 0.0001 | - |
| 2.1744 | 19380 | 0.0001 | - |
| 2.1755 | 19390 | 0.0 | - |
| 2.1766 | 19400 | 0.0001 | - |
| 2.1777 | 19410 | 0.0001 | - |
| 2.1788 | 19420 | 0.0001 | - |
| 2.1800 | 19430 | 0.0 | - |
| 2.1811 | 19440 | 0.0001 | - |
| 2.1822 | 19450 | 0.0004 | - |
| 2.1833 | 19460 | 0.0001 | - |
| 2.1844 | 19470 | 0.0 | - |
| 2.1856 | 19480 | 0.0001 | - |
| 2.1867 | 19490 | 0.0001 | - |
| 2.1878 | 19500 | 0.0001 | - |
| 2.1889 | 19510 | 0.0002 | - |
| 2.1901 | 19520 | 0.0003 | - |
| 2.1912 | 19530 | 0.0002 | - |
| 2.1923 | 19540 | 0.0001 | - |
| 2.1934 | 19550 | 0.0001 | - |
| 2.1945 | 19560 | 0.0005 | - |
| 2.1957 | 19570 | 0.0001 | - |
| 2.1968 | 19580 | 0.0001 | - |
| 2.1979 | 19590 | 0.0002 | - |
| 2.1990 | 19600 | 0.0001 | - |
| 2.2002 | 19610 | 0.0 | - |
| 2.2013 | 19620 | 0.0 | - |
| 2.2024 | 19630 | 0.0001 | - |
| 2.2035 | 19640 | 0.0003 | - |
| 2.2046 | 19650 | 0.0005 | - |
| 2.2058 | 19660 | 0.0 | - |
| 2.2069 | 19670 | 0.0001 | - |
| 2.2080 | 19680 | 0.0 | - |
| 2.2091 | 19690 | 0.0 | - |
| 2.2103 | 19700 | 0.0006 | - |
| 2.2114 | 19710 | 0.0025 | - |
| 2.2125 | 19720 | 0.0001 | - |
| 2.2136 | 19730 | 0.0006 | - |
| 2.2147 | 19740 | 0.0001 | - |
| 2.2159 | 19750 | 0.0001 | - |
| 2.2170 | 19760 | 0.0001 | - |
| 2.2181 | 19770 | 0.0002 | - |
| 2.2192 | 19780 | 0.0001 | - |
| 2.2204 | 19790 | 0.0003 | - |
| 2.2215 | 19800 | 0.0001 | - |
| 2.2226 | 19810 | 0.0001 | - |
| 2.2237 | 19820 | 0.0003 | - |
| 2.2248 | 19830 | 0.0001 | - |
| 2.2260 | 19840 | 0.0001 | - |
| 2.2271 | 19850 | 0.0003 | - |
| 2.2282 | 19860 | 0.0107 | - |
| 2.2293 | 19870 | 0.0066 | - |
| 2.2304 | 19880 | 0.0001 | - |
| 2.2316 | 19890 | 0.0 | - |
| 2.2327 | 19900 | 0.0 | - |
| 2.2338 | 19910 | 0.0001 | - |
| 2.2349 | 19920 | 0.0008 | - |
| 2.2361 | 19930 | 0.0 | - |
| 2.2372 | 19940 | 0.0001 | - |
| 2.2383 | 19950 | 0.0005 | - |
| 2.2394 | 19960 | 0.0 | - |
| 2.2405 | 19970 | 0.0028 | - |
| 2.2417 | 19980 | 0.0001 | - |
| 2.2428 | 19990 | 0.0 | - |
| 2.2439 | 20000 | 0.0001 | - |
| 2.2450 | 20010 | 0.0 | - |
| 2.2462 | 20020 | 0.0001 | - |
| 2.2473 | 20030 | 0.0002 | - |
| 2.2484 | 20040 | 0.0001 | - |
| 2.2495 | 20050 | 0.0003 | - |
| 2.2506 | 20060 | 0.0003 | - |
| 2.2518 | 20070 | 0.0001 | - |
| 2.2529 | 20080 | 0.0001 | - |
| 2.2540 | 20090 | 0.0001 | - |
| 2.2551 | 20100 | 0.0001 | - |
| 2.2563 | 20110 | 0.0002 | - |
| 2.2574 | 20120 | 0.0 | - |
| 2.2585 | 20130 | 0.0 | - |
| 2.2596 | 20140 | 0.0 | - |
| 2.2607 | 20150 | 0.0001 | - |
| 2.2619 | 20160 | 0.0 | - |
| 2.2630 | 20170 | 0.0 | - |
| 2.2641 | 20180 | 0.0 | - |
| 2.2652 | 20190 | 0.0 | - |
| 2.2664 | 20200 | 0.0001 | - |
| 2.2675 | 20210 | 0.0003 | - |
| 2.2686 | 20220 | 0.0001 | - |
| 2.2697 | 20230 | 0.0 | - |
| 2.2708 | 20240 | 0.0001 | - |
| 2.2720 | 20250 | 0.0 | - |
| 2.2731 | 20260 | 0.0001 | - |
| 2.2742 | 20270 | 0.0001 | - |
| 2.2753 | 20280 | 0.0001 | - |
| 2.2765 | 20290 | 0.0007 | - |
| 2.2776 | 20300 | 0.0 | - |
| 2.2787 | 20310 | 0.0 | - |
| 2.2798 | 20320 | 0.0006 | - |
| 2.2809 | 20330 | 0.0 | - |
| 2.2821 | 20340 | 0.0001 | - |
| 2.2832 | 20350 | 0.0002 | - |
| 2.2843 | 20360 | 0.0001 | - |
| 2.2854 | 20370 | 0.0001 | - |
| 2.2865 | 20380 | 0.0001 | - |
| 2.2877 | 20390 | 0.0002 | - |
| 2.2888 | 20400 | 0.0002 | - |
| 2.2899 | 20410 | 0.0001 | - |
| 2.2910 | 20420 | 0.0001 | - |
| 2.2922 | 20430 | 0.0 | - |
| 2.2933 | 20440 | 0.0001 | - |
| 2.2944 | 20450 | 0.0001 | - |
| 2.2955 | 20460 | 0.0003 | - |
| 2.2966 | 20470 | 0.0001 | - |
| 2.2978 | 20480 | 0.0 | - |
| 2.2989 | 20490 | 0.0003 | - |
| 2.3000 | 20500 | 0.0002 | - |
| 2.3011 | 20510 | 0.0 | - |
| 2.3023 | 20520 | 0.0002 | - |
| 2.3034 | 20530 | 0.0006 | - |
| 2.3045 | 20540 | 0.0003 | - |
| 2.3056 | 20550 | 0.0002 | - |
| 2.3067 | 20560 | 0.0 | - |
| 2.3079 | 20570 | 0.0003 | - |
| 2.3090 | 20580 | 0.0 | - |
| 2.3101 | 20590 | 0.0001 | - |
| 2.3112 | 20600 | 0.0003 | - |
| 2.3124 | 20610 | 0.0001 | - |
| 2.3135 | 20620 | 0.0002 | - |
| 2.3146 | 20630 | 0.0001 | - |
| 2.3157 | 20640 | 0.001 | - |
| 2.3168 | 20650 | 0.0001 | - |
| 2.3180 | 20660 | 0.0001 | - |
| 2.3191 | 20670 | 0.0012 | - |
| 2.3202 | 20680 | 0.0 | - |
| 2.3213 | 20690 | 0.0001 | - |
| 2.3225 | 20700 | 0.0 | - |
| 2.3236 | 20710 | 0.0 | - |
| 2.3247 | 20720 | 0.0001 | - |
| 2.3258 | 20730 | 0.0008 | - |
| 2.3269 | 20740 | 0.0003 | - |
| 2.3281 | 20750 | 0.0011 | - |
| 2.3292 | 20760 | 0.0002 | - |
| 2.3303 | 20770 | 0.0001 | - |
| 2.3314 | 20780 | 0.0003 | - |
| 2.3325 | 20790 | 0.0004 | - |
| 2.3337 | 20800 | 0.0003 | - |
| 2.3348 | 20810 | 0.0001 | - |
| 2.3359 | 20820 | 0.0 | - |
| 2.3370 | 20830 | 0.0 | - |
| 2.3382 | 20840 | 0.0001 | - |
| 2.3393 | 20850 | 0.0004 | - |
| 2.3404 | 20860 | 0.0003 | - |
| 2.3415 | 20870 | 0.0002 | - |
| 2.3426 | 20880 | 0.0 | - |
| 2.3438 | 20890 | 0.0001 | - |
| 2.3449 | 20900 | 0.0 | - |
| 2.3460 | 20910 | 0.0 | - |
| 2.3471 | 20920 | 0.0001 | - |
| 2.3483 | 20930 | 0.0 | - |
| 2.3494 | 20940 | 0.0 | - |
| 2.3505 | 20950 | 0.0001 | - |
| 2.3516 | 20960 | 0.0 | - |
| 2.3527 | 20970 | 0.0 | - |
| 2.3539 | 20980 | 0.0 | - |
| 2.3550 | 20990 | 0.0001 | - |
| 2.3561 | 21000 | 0.0007 | - |
| 2.3572 | 21010 | 0.0001 | - |
| 2.3584 | 21020 | 0.0002 | - |
| 2.3595 | 21030 | 0.0 | - |
| 2.3606 | 21040 | 0.0002 | - |
| 2.3617 | 21050 | 0.0001 | - |
| 2.3628 | 21060 | 0.0001 | - |
| 2.3640 | 21070 | 0.0002 | - |
| 2.3651 | 21080 | 0.0002 | - |
| 2.3662 | 21090 | 0.0002 | - |
| 2.3673 | 21100 | 0.0 | - |
| 2.3685 | 21110 | 0.0001 | - |
| 2.3696 | 21120 | 0.0 | - |
| 2.3707 | 21130 | 0.0003 | - |
| 2.3718 | 21140 | 0.0001 | - |
| 2.3729 | 21150 | 0.0001 | - |
| 2.3741 | 21160 | 0.0002 | - |
| 2.3752 | 21170 | 0.0 | - |
| 2.3763 | 21180 | 0.0 | - |
| 2.3774 | 21190 | 0.0001 | - |
| 2.3785 | 21200 | 0.0 | - |
| 2.3797 | 21210 | 0.0003 | - |
| 2.3808 | 21220 | 0.0 | - |
| 2.3819 | 21230 | 0.0 | - |
| 2.3830 | 21240 | 0.0 | - |
| 2.3842 | 21250 | 0.0 | - |
| 2.3853 | 21260 | 0.0003 | - |
| 2.3864 | 21270 | 0.0003 | - |
| 2.3875 | 21280 | 0.0 | - |
| 2.3886 | 21290 | 0.0002 | - |
| 2.3898 | 21300 | 0.0001 | - |
| 2.3909 | 21310 | 0.0 | - |
| 2.3920 | 21320 | 0.0 | - |
| 2.3931 | 21330 | 0.0005 | - |
| 2.3943 | 21340 | 0.0001 | - |
| 2.3954 | 21350 | 0.0002 | - |
| 2.3965 | 21360 | 0.0006 | - |
| 2.3976 | 21370 | 0.0001 | - |
| 2.3987 | 21380 | 0.0001 | - |
| 2.3999 | 21390 | 0.0 | - |
| 2.4010 | 21400 | 0.0001 | - |
| 2.4021 | 21410 | 0.0001 | - |
| 2.4032 | 21420 | 0.0005 | - |
| 2.4044 | 21430 | 0.0004 | - |
| 2.4055 | 21440 | 0.0001 | - |
| 2.4066 | 21450 | 0.0001 | - |
| 2.4077 | 21460 | 0.0 | - |
| 2.4088 | 21470 | 0.0 | - |
| 2.4100 | 21480 | 0.0004 | - |
| 2.4111 | 21490 | 0.0002 | - |
| 2.4122 | 21500 | 0.0003 | - |
| 2.4133 | 21510 | 0.0001 | - |
| 2.4145 | 21520 | 0.0001 | - |
| 2.4156 | 21530 | 0.0002 | - |
| 2.4167 | 21540 | 0.0001 | - |
| 2.4178 | 21550 | 0.0002 | - |
| 2.4189 | 21560 | 0.0001 | - |
| 2.4201 | 21570 | 0.0001 | - |
| 2.4212 | 21580 | 0.0001 | - |
| 2.4223 | 21590 | 0.0002 | - |
| 2.4234 | 21600 | 0.0 | - |
| 2.4245 | 21610 | 0.0001 | - |
| 2.4257 | 21620 | 0.0004 | - |
| 2.4268 | 21630 | 0.0009 | - |
| 2.4279 | 21640 | 0.0001 | - |
| 2.4290 | 21650 | 0.0001 | - |
| 2.4302 | 21660 | 0.0001 | - |
| 2.4313 | 21670 | 0.0001 | - |
| 2.4324 | 21680 | 0.0 | - |
| 2.4335 | 21690 | 0.0002 | - |
| 2.4346 | 21700 | 0.0001 | - |
| 2.4358 | 21710 | 0.0001 | - |
| 2.4369 | 21720 | 0.0 | - |
| 2.4380 | 21730 | 0.0001 | - |
| 2.4391 | 21740 | 0.0003 | - |
| 2.4403 | 21750 | 0.0001 | - |
| 2.4414 | 21760 | 0.0001 | - |
| 2.4425 | 21770 | 0.0 | - |
| 2.4436 | 21780 | 0.0 | - |
| 2.4447 | 21790 | 0.0 | - |
| 2.4459 | 21800 | 0.0001 | - |
| 2.4470 | 21810 | 0.0 | - |
| 2.4481 | 21820 | 0.0 | - |
| 2.4492 | 21830 | 0.0001 | - |
| 2.4504 | 21840 | 0.0005 | - |
| 2.4515 | 21850 | 0.0 | - |
| 2.4526 | 21860 | 0.0001 | - |
| 2.4537 | 21870 | 0.0001 | - |
| 2.4548 | 21880 | 0.0001 | - |
| 2.4560 | 21890 | 0.0001 | - |
| 2.4571 | 21900 | 0.0 | - |
| 2.4582 | 21910 | 0.0002 | - |
| 2.4593 | 21920 | 0.0001 | - |
| 2.4605 | 21930 | 0.0004 | - |
| 2.4616 | 21940 | 0.0026 | - |
| 2.4627 | 21950 | 0.0001 | - |
| 2.4638 | 21960 | 0.0004 | - |
| 2.4649 | 21970 | 0.0001 | - |
| 2.4661 | 21980 | 0.0002 | - |
| 2.4672 | 21990 | 0.0 | - |
| 2.4683 | 22000 | 0.0001 | - |
| 2.4694 | 22010 | 0.0001 | - |
| 2.4705 | 22020 | 0.0 | - |
| 2.4717 | 22030 | 0.0001 | - |
| 2.4728 | 22040 | 0.0001 | - |
| 2.4739 | 22050 | 0.0014 | - |
| 2.4750 | 22060 | 0.0001 | - |
| 2.4762 | 22070 | 0.0006 | - |
| 2.4773 | 22080 | 0.0 | - |
| 2.4784 | 22090 | 0.0002 | - |
| 2.4795 | 22100 | 0.0 | - |
| 2.4806 | 22110 | 0.0001 | - |
| 2.4818 | 22120 | 0.0 | - |
| 2.4829 | 22130 | 0.0001 | - |
| 2.4840 | 22140 | 0.0001 | - |
| 2.4851 | 22150 | 0.0001 | - |
| 2.4863 | 22160 | 0.0 | - |
| 2.4874 | 22170 | 0.0003 | - |
| 2.4885 | 22180 | 0.0 | - |
| 2.4896 | 22190 | 0.0002 | - |
| 2.4907 | 22200 | 0.0 | - |
| 2.4919 | 22210 | 0.0012 | - |
| 2.4930 | 22220 | 0.0 | - |
| 2.4941 | 22230 | 0.0001 | - |
| 2.4952 | 22240 | 0.0 | - |
| 2.4964 | 22250 | 0.0001 | - |
| 2.4975 | 22260 | 0.0002 | - |
| 2.4986 | 22270 | 0.0 | - |
| 2.4997 | 22280 | 0.0001 | - |
| 2.5008 | 22290 | 0.0002 | - |
| 2.5020 | 22300 | 0.0 | - |
| 2.5031 | 22310 | 0.0001 | - |
| 2.5042 | 22320 | 0.0 | - |
| 2.5053 | 22330 | 0.0001 | - |
| 2.5065 | 22340 | 0.0003 | - |
| 2.5076 | 22350 | 0.0 | - |
| 2.5087 | 22360 | 0.0001 | - |
| 2.5098 | 22370 | 0.0001 | - |
| 2.5109 | 22380 | 0.0001 | - |
| 2.5121 | 22390 | 0.0 | - |
| 2.5132 | 22400 | 0.0001 | - |
| 2.5143 | 22410 | 0.0001 | - |
| 2.5154 | 22420 | 0.0 | - |
| 2.5165 | 22430 | 0.0001 | - |
| 2.5177 | 22440 | 0.0 | - |
| 2.5188 | 22450 | 0.0001 | - |
| 2.5199 | 22460 | 0.0 | - |
| 2.5210 | 22470 | 0.0001 | - |
| 2.5222 | 22480 | 0.0001 | - |
| 2.5233 | 22490 | 0.0003 | - |
| 2.5244 | 22500 | 0.0001 | - |
| 2.5255 | 22510 | 0.0 | - |
| 2.5266 | 22520 | 0.0 | - |
| 2.5278 | 22530 | 0.0004 | - |
| 2.5289 | 22540 | 0.0001 | - |
| 2.5300 | 22550 | 0.0002 | - |
| 2.5311 | 22560 | 0.0 | - |
| 2.5323 | 22570 | 0.0 | - |
| 2.5334 | 22580 | 0.0001 | - |
| 2.5345 | 22590 | 0.0002 | - |
| 2.5356 | 22600 | 0.0 | - |
| 2.5367 | 22610 | 0.0002 | - |
| 2.5379 | 22620 | 0.0002 | - |
| 2.5390 | 22630 | 0.0 | - |
| 2.5401 | 22640 | 0.0001 | - |
| 2.5412 | 22650 | 0.0 | - |
| 2.5424 | 22660 | 0.0001 | - |
| 2.5435 | 22670 | 0.0 | - |
| 2.5446 | 22680 | 0.0 | - |
| 2.5457 | 22690 | 0.0001 | - |
| 2.5468 | 22700 | 0.0002 | - |
| 2.5480 | 22710 | 0.0 | - |
| 2.5491 | 22720 | 0.0 | - |
| 2.5502 | 22730 | 0.0001 | - |
| 2.5513 | 22740 | 0.0005 | - |
| 2.5525 | 22750 | 0.0004 | - |
| 2.5536 | 22760 | 0.0002 | - |
| 2.5547 | 22770 | 0.0001 | - |
| 2.5558 | 22780 | 0.0001 | - |
| 2.5569 | 22790 | 0.0008 | - |
| 2.5581 | 22800 | 0.0002 | - |
| 2.5592 | 22810 | 0.0001 | - |
| 2.5603 | 22820 | 0.0 | - |
| 2.5614 | 22830 | 0.0 | - |
| 2.5625 | 22840 | 0.0002 | - |
| 2.5637 | 22850 | 0.0001 | - |
| 2.5648 | 22860 | 0.0003 | - |
| 2.5659 | 22870 | 0.0 | - |
| 2.5670 | 22880 | 0.0 | - |
| 2.5682 | 22890 | 0.0004 | - |
| 2.5693 | 22900 | 0.0001 | - |
| 2.5704 | 22910 | 0.0004 | - |
| 2.5715 | 22920 | 0.0001 | - |
| 2.5726 | 22930 | 0.0001 | - |
| 2.5738 | 22940 | 0.0001 | - |
| 2.5749 | 22950 | 0.0 | - |
| 2.5760 | 22960 | 0.0002 | - |
| 2.5771 | 22970 | 0.0 | - |
| 2.5783 | 22980 | 0.0001 | - |
| 2.5794 | 22990 | 0.0001 | - |
| 2.5805 | 23000 | 0.0002 | - |
| 2.5816 | 23010 | 0.0001 | - |
| 2.5827 | 23020 | 0.0001 | - |
| 2.5839 | 23030 | 0.0 | - |
| 2.5850 | 23040 | 0.0 | - |
| 2.5861 | 23050 | 0.0001 | - |
| 2.5872 | 23060 | 0.0 | - |
| 2.5884 | 23070 | 0.0001 | - |
| 2.5895 | 23080 | 0.0 | - |
| 2.5906 | 23090 | 0.0 | - |
| 2.5917 | 23100 | 0.0001 | - |
| 2.5928 | 23110 | 0.0001 | - |
| 2.5940 | 23120 | 0.0001 | - |
| 2.5951 | 23130 | 0.0001 | - |
| 2.5962 | 23140 | 0.0001 | - |
| 2.5973 | 23150 | 0.0001 | - |
| 2.5985 | 23160 | 0.0003 | - |
| 2.5996 | 23170 | 0.0 | - |
| 2.6007 | 23180 | 0.0001 | - |
| 2.6018 | 23190 | 0.0 | - |
| 2.6029 | 23200 | 0.0001 | - |
| 2.6041 | 23210 | 0.0001 | - |
| 2.6052 | 23220 | 0.0 | - |
| 2.6063 | 23230 | 0.0 | - |
| 2.6074 | 23240 | 0.0001 | - |
| 2.6085 | 23250 | 0.0001 | - |
| 2.6097 | 23260 | 0.0 | - |
| 2.6108 | 23270 | 0.0001 | - |
| 2.6119 | 23280 | 0.0001 | - |
| 2.6130 | 23290 | 0.0 | - |
| 2.6142 | 23300 | 0.0002 | - |
| 2.6153 | 23310 | 0.0003 | - |
| 2.6164 | 23320 | 0.0 | - |
| 2.6175 | 23330 | 0.0002 | - |
| 2.6186 | 23340 | 0.0001 | - |
| 2.6198 | 23350 | 0.0001 | - |
| 2.6209 | 23360 | 0.0 | - |
| 2.6220 | 23370 | 0.0 | - |
| 2.6231 | 23380 | 0.0002 | - |
| 2.6243 | 23390 | 0.0001 | - |
| 2.6254 | 23400 | 0.0001 | - |
| 2.6265 | 23410 | 0.0001 | - |
| 2.6276 | 23420 | 0.0001 | - |
| 2.6287 | 23430 | 0.0001 | - |
| 2.6299 | 23440 | 0.0002 | - |
| 2.6310 | 23450 | 0.0 | - |
| 2.6321 | 23460 | 0.0 | - |
| 2.6332 | 23470 | 0.0001 | - |
| 2.6344 | 23480 | 0.0 | - |
| 2.6355 | 23490 | 0.0 | - |
| 2.6366 | 23500 | 0.0003 | - |
| 2.6377 | 23510 | 0.0001 | - |
| 2.6388 | 23520 | 0.0001 | - |
| 2.6400 | 23530 | 0.0001 | - |
| 2.6411 | 23540 | 0.0 | - |
| 2.6422 | 23550 | 0.0 | - |
| 2.6433 | 23560 | 0.0001 | - |
| 2.6445 | 23570 | 0.0 | - |
| 2.6456 | 23580 | 0.0002 | - |
| 2.6467 | 23590 | 0.0 | - |
| 2.6478 | 23600 | 0.0001 | - |
| 2.6489 | 23610 | 0.0 | - |
| 2.6501 | 23620 | 0.0001 | - |
| 2.6512 | 23630 | 0.0001 | - |
| 2.6523 | 23640 | 0.0001 | - |
| 2.6534 | 23650 | 0.0 | - |
| 2.6545 | 23660 | 0.0003 | - |
| 2.6557 | 23670 | 0.0 | - |
| 2.6568 | 23680 | 0.0 | - |
| 2.6579 | 23690 | 0.0001 | - |
| 2.6590 | 23700 | 0.0 | - |
| 2.6602 | 23710 | 0.0001 | - |
| 2.6613 | 23720 | 0.0 | - |
| 2.6624 | 23730 | 0.0001 | - |
| 2.6635 | 23740 | 0.0001 | - |
| 2.6646 | 23750 | 0.0001 | - |
| 2.6658 | 23760 | 0.0 | - |
| 2.6669 | 23770 | 0.0 | - |
| 2.6680 | 23780 | 0.0004 | - |
| 2.6691 | 23790 | 0.0 | - |
| 2.6703 | 23800 | 0.0001 | - |
| 2.6714 | 23810 | 0.0 | - |
| 2.6725 | 23820 | 0.0001 | - |
| 2.6736 | 23830 | 0.0 | - |
| 2.6747 | 23840 | 0.0 | - |
| 2.6759 | 23850 | 0.0 | - |
| 2.6770 | 23860 | 0.0 | - |
| 2.6781 | 23870 | 0.0001 | - |
| 2.6792 | 23880 | 0.0004 | - |
| 2.6804 | 23890 | 0.0001 | - |
| 2.6815 | 23900 | 0.0001 | - |
| 2.6826 | 23910 | 0.0001 | - |
| 2.6837 | 23920 | 0.0001 | - |
| 2.6848 | 23930 | 0.0005 | - |
| 2.6860 | 23940 | 0.0003 | - |
| 2.6871 | 23950 | 0.0005 | - |
| 2.6882 | 23960 | 0.0 | - |
| 2.6893 | 23970 | 0.0 | - |
| 2.6905 | 23980 | 0.0003 | - |
| 2.6916 | 23990 | 0.0001 | - |
| 2.6927 | 24000 | 0.0 | - |
| 2.6938 | 24010 | 0.0 | - |
| 2.6949 | 24020 | 0.0001 | - |
| 2.6961 | 24030 | 0.0006 | - |
| 2.6972 | 24040 | 0.0002 | - |
| 2.6983 | 24050 | 0.0003 | - |
| 2.6994 | 24060 | 0.0001 | - |
| 2.7005 | 24070 | 0.0001 | - |
| 2.7017 | 24080 | 0.0 | - |
| 2.7028 | 24090 | 0.0 | - |
| 2.7039 | 24100 | 0.0001 | - |
| 2.7050 | 24110 | 0.0001 | - |
| 2.7062 | 24120 | 0.0002 | - |
| 2.7073 | 24130 | 0.0 | - |
| 2.7084 | 24140 | 0.0 | - |
| 2.7095 | 24150 | 0.0001 | - |
| 2.7106 | 24160 | 0.0 | - |
| 2.7118 | 24170 | 0.0001 | - |
| 2.7129 | 24180 | 0.0 | - |
| 2.7140 | 24190 | 0.0001 | - |
| 2.7151 | 24200 | 0.0 | - |
| 2.7163 | 24210 | 0.0002 | - |
| 2.7174 | 24220 | 0.0 | - |
| 2.7185 | 24230 | 0.0004 | - |
| 2.7196 | 24240 | 0.0003 | - |
| 2.7207 | 24250 | 0.0003 | - |
| 2.7219 | 24260 | 0.0002 | - |
| 2.7230 | 24270 | 0.0 | - |
| 2.7241 | 24280 | 0.0001 | - |
| 2.7252 | 24290 | 0.0 | - |
| 2.7264 | 24300 | 0.0 | - |
| 2.7275 | 24310 | 0.0002 | - |
| 2.7286 | 24320 | 0.0001 | - |
| 2.7297 | 24330 | 0.0 | - |
| 2.7308 | 24340 | 0.0001 | - |
| 2.7320 | 24350 | 0.0001 | - |
| 2.7331 | 24360 | 0.0002 | - |
| 2.7342 | 24370 | 0.0 | - |
| 2.7353 | 24380 | 0.0002 | - |
| 2.7365 | 24390 | 0.0001 | - |
| 2.7376 | 24400 | 0.0002 | - |
| 2.7387 | 24410 | 0.0002 | - |
| 2.7398 | 24420 | 0.0 | - |
| 2.7409 | 24430 | 0.0 | - |
| 2.7421 | 24440 | 0.0001 | - |
| 2.7432 | 24450 | 0.0002 | - |
| 2.7443 | 24460 | 0.0001 | - |
| 2.7454 | 24470 | 0.0002 | - |
| 2.7465 | 24480 | 0.0001 | - |
| 2.7477 | 24490 | 0.0 | - |
| 2.7488 | 24500 | 0.0001 | - |
| 2.7499 | 24510 | 0.0 | - |
| 2.7510 | 24520 | 0.0001 | - |
| 2.7522 | 24530 | 0.0 | - |
| 2.7533 | 24540 | 0.0 | - |
| 2.7544 | 24550 | 0.0001 | - |
| 2.7555 | 24560 | 0.0001 | - |
| 2.7566 | 24570 | 0.0002 | - |
| 2.7578 | 24580 | 0.0001 | - |
| 2.7589 | 24590 | 0.0002 | - |
| 2.7600 | 24600 | 0.0 | - |
| 2.7611 | 24610 | 0.0 | - |
| 2.7623 | 24620 | 0.0006 | - |
| 2.7634 | 24630 | 0.0003 | - |
| 2.7645 | 24640 | 0.0 | - |
| 2.7656 | 24650 | 0.0 | - |
| 2.7667 | 24660 | 0.0 | - |
| 2.7679 | 24670 | 0.0001 | - |
| 2.7690 | 24680 | 0.0001 | - |
| 2.7701 | 24690 | 0.0002 | - |
| 2.7712 | 24700 | 0.0004 | - |
| 2.7724 | 24710 | 0.0 | - |
| 2.7735 | 24720 | 0.0 | - |
| 2.7746 | 24730 | 0.0001 | - |
| 2.7757 | 24740 | 0.0001 | - |
| 2.7768 | 24750 | 0.0 | - |
| 2.7780 | 24760 | 0.0001 | - |
| 2.7791 | 24770 | 0.0 | - |
| 2.7802 | 24780 | 0.0002 | - |
| 2.7813 | 24790 | 0.0 | - |
| 2.7825 | 24800 | 0.0 | - |
| 2.7836 | 24810 | 0.0 | - |
| 2.7847 | 24820 | 0.0 | - |
| 2.7858 | 24830 | 0.0 | - |
| 2.7869 | 24840 | 0.0 | - |
| 2.7881 | 24850 | 0.0004 | - |
| 2.7892 | 24860 | 0.0 | - |
| 2.7903 | 24870 | 0.0 | - |
| 2.7914 | 24880 | 0.0 | - |
| 2.7926 | 24890 | 0.0 | - |
| 2.7937 | 24900 | 0.0002 | - |
| 2.7948 | 24910 | 0.0001 | - |
| 2.7959 | 24920 | 0.0 | - |
| 2.7970 | 24930 | 0.0 | - |
| 2.7982 | 24940 | 0.0001 | - |
| 2.7993 | 24950 | 0.0 | - |
| 2.8004 | 24960 | 0.0 | - |
| 2.8015 | 24970 | 0.0001 | - |
| 2.8026 | 24980 | 0.0 | - |
| 2.8038 | 24990 | 0.0001 | - |
| 2.8049 | 25000 | 0.0 | - |
| 2.8060 | 25010 | 0.0001 | - |
| 2.8071 | 25020 | 0.0001 | - |
| 2.8083 | 25030 | 0.0 | - |
| 2.8094 | 25040 | 0.0 | - |
| 2.8105 | 25050 | 0.0 | - |
| 2.8116 | 25060 | 0.0 | - |
| 2.8127 | 25070 | 0.0001 | - |
| 2.8139 | 25080 | 0.0 | - |
| 2.8150 | 25090 | 0.0001 | - |
| 2.8161 | 25100 | 0.0003 | - |
| 2.8172 | 25110 | 0.0002 | - |
| 2.8184 | 25120 | 0.0 | - |
| 2.8195 | 25130 | 0.0 | - |
| 2.8206 | 25140 | 0.0 | - |
| 2.8217 | 25150 | 0.0 | - |
| 2.8228 | 25160 | 0.0001 | - |
| 2.8240 | 25170 | 0.0001 | - |
| 2.8251 | 25180 | 0.0001 | - |
| 2.8262 | 25190 | 0.0 | - |
| 2.8273 | 25200 | 0.0002 | - |
| 2.8285 | 25210 | 0.0 | - |
| 2.8296 | 25220 | 0.0003 | - |
| 2.8307 | 25230 | 0.0 | - |
| 2.8318 | 25240 | 0.0 | - |
| 2.8329 | 25250 | 0.0 | - |
| 2.8341 | 25260 | 0.0002 | - |
| 2.8352 | 25270 | 0.0 | - |
| 2.8363 | 25280 | 0.0001 | - |
| 2.8374 | 25290 | 0.0001 | - |
| 2.8386 | 25300 | 0.0001 | - |
| 2.8397 | 25310 | 0.0002 | - |
| 2.8408 | 25320 | 0.0002 | - |
| 2.8419 | 25330 | 0.0 | - |
| 2.8430 | 25340 | 0.0 | - |
| 2.8442 | 25350 | 0.0001 | - |
| 2.8453 | 25360 | 0.0 | - |
| 2.8464 | 25370 | 0.0 | - |
| 2.8475 | 25380 | 0.0001 | - |
| 2.8486 | 25390 | 0.0 | - |
| 2.8498 | 25400 | 0.0 | - |
| 2.8509 | 25410 | 0.0001 | - |
| 2.8520 | 25420 | 0.0001 | - |
| 2.8531 | 25430 | 0.0 | - |
| 2.8543 | 25440 | 0.0002 | - |
| 2.8554 | 25450 | 0.0002 | - |
| 2.8565 | 25460 | 0.0001 | - |
| 2.8576 | 25470 | 0.0001 | - |
| 2.8587 | 25480 | 0.0001 | - |
| 2.8599 | 25490 | 0.0001 | - |
| 2.8610 | 25500 | 0.0001 | - |
| 2.8621 | 25510 | 0.0001 | - |
| 2.8632 | 25520 | 0.0 | - |
| 2.8644 | 25530 | 0.0001 | - |
| 2.8655 | 25540 | 0.0 | - |
| 2.8666 | 25550 | 0.0 | - |
| 2.8677 | 25560 | 0.0 | - |
| 2.8688 | 25570 | 0.0 | - |
| 2.8700 | 25580 | 0.0001 | - |
| 2.8711 | 25590 | 0.0001 | - |
| 2.8722 | 25600 | 0.0 | - |
| 2.8733 | 25610 | 0.0001 | - |
| 2.8745 | 25620 | 0.0 | - |
| 2.8756 | 25630 | 0.0001 | - |
| 2.8767 | 25640 | 0.0001 | - |
| 2.8778 | 25650 | 0.0 | - |
| 2.8789 | 25660 | 0.0 | - |
| 2.8801 | 25670 | 0.0001 | - |
| 2.8812 | 25680 | 0.0003 | - |
| 2.8823 | 25690 | 0.0001 | - |
| 2.8834 | 25700 | 0.0 | - |
| 2.8846 | 25710 | 0.0001 | - |
| 2.8857 | 25720 | 0.0 | - |
| 2.8868 | 25730 | 0.0 | - |
| 2.8879 | 25740 | 0.0001 | - |
| 2.8890 | 25750 | 0.0002 | - |
| 2.8902 | 25760 | 0.0001 | - |
| 2.8913 | 25770 | 0.0 | - |
| 2.8924 | 25780 | 0.0002 | - |
| 2.8935 | 25790 | 0.0 | - |
| 2.8946 | 25800 | 0.0 | - |
| 2.8958 | 25810 | 0.0 | - |
| 2.8969 | 25820 | 0.0 | - |
| 2.8980 | 25830 | 0.0001 | - |
| 2.8991 | 25840 | 0.0 | - |
| 2.9003 | 25850 | 0.0001 | - |
| 2.9014 | 25860 | 0.0 | - |
| 2.9025 | 25870 | 0.0002 | - |
| 2.9036 | 25880 | 0.0001 | - |
| 2.9047 | 25890 | 0.0 | - |
| 2.9059 | 25900 | 0.0001 | - |
| 2.9070 | 25910 | 0.0 | - |
| 2.9081 | 25920 | 0.0001 | - |
| 2.9092 | 25930 | 0.0 | - |
| 2.9104 | 25940 | 0.0001 | - |
| 2.9115 | 25950 | 0.0 | - |
| 2.9126 | 25960 | 0.0001 | - |
| 2.9137 | 25970 | 0.0001 | - |
| 2.9148 | 25980 | 0.0001 | - |
| 2.9160 | 25990 | 0.0001 | - |
| 2.9171 | 26000 | 0.0 | - |
| 2.9182 | 26010 | 0.0 | - |
| 2.9193 | 26020 | 0.0001 | - |
| 2.9205 | 26030 | 0.0 | - |
| 2.9216 | 26040 | 0.0001 | - |
| 2.9227 | 26050 | 0.0001 | - |
| 2.9238 | 26060 | 0.0 | - |
| 2.9249 | 26070 | 0.0 | - |
| 2.9261 | 26080 | 0.0002 | - |
| 2.9272 | 26090 | 0.0001 | - |
| 2.9283 | 26100 | 0.0 | - |
| 2.9294 | 26110 | 0.0001 | - |
| 2.9306 | 26120 | 0.0 | - |
| 2.9317 | 26130 | 0.0001 | - |
| 2.9328 | 26140 | 0.0001 | - |
| 2.9339 | 26150 | 0.0001 | - |
| 2.9350 | 26160 | 0.0 | - |
| 2.9362 | 26170 | 0.0 | - |
| 2.9373 | 26180 | 0.0001 | - |
| 2.9384 | 26190 | 0.0001 | - |
| 2.9395 | 26200 | 0.0 | - |
| 2.9406 | 26210 | 0.0001 | - |
| 2.9418 | 26220 | 0.0 | - |
| 2.9429 | 26230 | 0.0 | - |
| 2.9440 | 26240 | 0.0001 | - |
| 2.9451 | 26250 | 0.0 | - |
| 2.9463 | 26260 | 0.0 | - |
| 2.9474 | 26270 | 0.0001 | - |
| 2.9485 | 26280 | 0.0 | - |
| 2.9496 | 26290 | 0.0 | - |
| 2.9507 | 26300 | 0.0003 | - |
| 2.9519 | 26310 | 0.0 | - |
| 2.9530 | 26320 | 0.0 | - |
| 2.9541 | 26330 | 0.0001 | - |
| 2.9552 | 26340 | 0.0 | - |
| 2.9564 | 26350 | 0.0001 | - |
| 2.9575 | 26360 | 0.0001 | - |
| 2.9586 | 26370 | 0.0 | - |
| 2.9597 | 26380 | 0.0001 | - |
| 2.9608 | 26390 | 0.0 | - |
| 2.9620 | 26400 | 0.0001 | - |
| 2.9631 | 26410 | 0.0003 | - |
| 2.9642 | 26420 | 0.0 | - |
| 2.9653 | 26430 | 0.0 | - |
| 2.9665 | 26440 | 0.0 | - |
| 2.9676 | 26450 | 0.0001 | - |
| 2.9687 | 26460 | 0.0001 | - |
| 2.9698 | 26470 | 0.0 | - |
| 2.9709 | 26480 | 0.0 | - |
| 2.9721 | 26490 | 0.0001 | - |
| 2.9732 | 26500 | 0.0001 | - |
| 2.9743 | 26510 | 0.0002 | - |
| 2.9754 | 26520 | 0.0 | - |
| 2.9766 | 26530 | 0.0 | - |
| 2.9777 | 26540 | 0.0001 | - |
| 2.9788 | 26550 | 0.0003 | - |
| 2.9799 | 26560 | 0.0001 | - |
| 2.9810 | 26570 | 0.0 | - |
| 2.9822 | 26580 | 0.0 | - |
| 2.9833 | 26590 | 0.0 | - |
| 2.9844 | 26600 | 0.0001 | - |
| 2.9855 | 26610 | 0.0 | - |
| 2.9866 | 26620 | 0.0001 | - |
| 2.9878 | 26630 | 0.0 | - |
| 2.9889 | 26640 | 0.0001 | - |
| 2.9900 | 26650 | 0.0001 | - |
| 2.9911 | 26660 | 0.0001 | - |
| 2.9923 | 26670 | 0.0 | - |
| 2.9934 | 26680 | 0.0 | - |
| 2.9945 | 26690 | 0.0 | - |
| 2.9956 | 26700 | 0.0001 | - |
| 2.9967 | 26710 | 0.0 | - |
| 2.9979 | 26720 | 0.0 | - |
| 2.9990 | 26730 | 0.0 | - |
| 3.0 | 26739 | - | 0.0346 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 3.0.1
- Transformers: 4.37.0
- PyTorch: 2.4.1+cu121
- Datasets: 3.0.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Quelles sont les étapes pour transférer le générateur?"}, {"text": "Pouvez-vous fournir l'historique d'utilisation du générateur?"}, {"text": "L'ordinateur portable est-il libre à l'instant?"}, {"text": "Pouvez-vous fournir une mise à jour sur le statut de sécurité du chariot élévateur?"}, {"text": "What upgrades have been applied to the printer?"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8897058823529411, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,346 |
lmg-anon/vntl-7b-v0.3.1-hf
|
lmg-anon
|
translation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"translation",
"ja",
"en",
"dataset:lmg-anon/VNTL-v2.5-1k",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-01-27T23:44:36Z |
2024-01-28T00:05:21+00:00
| 8 | 4 |
---
datasets:
- lmg-anon/VNTL-v2.5-1k
language:
- ja
- en
license: llama2
pipeline_tag: translation
---
This is a merge of the [experimental VNTL v0.3.1 lora](https://huggingface.co/lmg-anon/vntl-7b-v0.3.1-lora) created using the [VNTL-v2.5-1k](https://huggingface.co/datasets/lmg-anon/VNTL-v2.5-1k) dataset.
This is an prompt example:
```
<<START>>
Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
<<JAPANESE>>
[桜乃]: 『……ごめん』
<<ENGLISH>> (fidelity = absolute)
[Sakuno]: 『... Sorry.』</s>
<<JAPANESE>>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>> (fidelity = high)
```
The generated translation for that prompt, with temperature 0, is:
```
[Shingo]: 「No, don't apologize. I'm just glad you're safe. You're so cute, Sakuno, I was worried sick.」
```
| null |
Non_BioNLP
|
This is a merge of the [experimental VNTL v0.3.1 lora](https://huggingface.co/lmg-anon/vntl-7b-v0.3.1-lora) created using the [VNTL-v2.5-1k](https://huggingface.co/datasets/lmg-anon/VNTL-v2.5-1k) dataset.
This is an prompt example:
```
<<START>>
Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
<<JAPANESE>>
[桜乃]: 『……ごめん』
<<ENGLISH>> (fidelity = absolute)
[Sakuno]: 『... Sorry.』</s>
<<JAPANESE>>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>> (fidelity = high)
```
The generated translation for that prompt, with temperature 0, is:
```
[Shingo]: 「No, don't apologize. I'm just glad you're safe. You're so cute, Sakuno, I was worried sick.」
```
|
{"datasets": ["lmg-anon/VNTL-v2.5-1k"], "language": ["ja", "en"], "license": "llama2", "pipeline_tag": "translation"}
|
task
|
[
"TRANSLATION"
] | 45,347 |
Andrewwwwww/Nous-Hermes-2-Mixtral-8x7B-DPO
|
Andrewwwwww
|
text-generation
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:finetune:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-01-20T02:55:44Z |
2024-01-20T09:15:08+00:00
| 14 | 1 |
---
base_model: mistralai/Mixtral-8x7B-v0.1
language:
- en
license: apache-2.0
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: Nous-Hermes-2-Mixtral-8x7B-DPO
results: []
---
# Nous Hermes 2 - Mixtral 8x7B - DPO

## Model description
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.
This is the SFT + DPO version of Mixtral Hermes 2, we have also released an SFT only version, for people to find which works best for them, which can be found here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO!
# Table of Contents
1. [Example Outputs](#example-outputs)
2. [Benchmark Results](#benchmark-results)
- GPT4All
- AGIEval
- BigBench
- Comparison to Mixtral-Instruct
3. [Prompt Format](#prompt-format)
4. [Inference Example Code](#inference-code)
5. [Quantized Models](#quantized-models)
## Example Outputs
### Writing Code for Data Visualization

### Writing Cyberpunk Psychedelic Poems

### Performing Backtranslation to Create Prompts from Input Text

## Benchmark Results
Nous-Hermes 2 on Mixtral 8x7B is a major improvement across the board on the benchmarks below compared to the base Mixtral model, and is the first model to beat the flagship Mixtral Finetune by MistralAI.
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5990|± |0.0143|
| | |acc_norm|0.6425|± |0.0140|
|arc_easy | 0|acc |0.8657|± |0.0070|
| | |acc_norm|0.8636|± |0.0070|
|boolq | 1|acc |0.8783|± |0.0057|
|hellaswag | 0|acc |0.6661|± |0.0047|
| | |acc_norm|0.8489|± |0.0036|
|openbookqa | 0|acc |0.3440|± |0.0213|
| | |acc_norm|0.4660|± |0.0223|
|piqa | 0|acc |0.8324|± |0.0087|
| | |acc_norm|0.8379|± |0.0086|
|winogrande | 0|acc |0.7616|± |0.0120|
```
Average: 75.70
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2402|± |0.0269|
| | |acc_norm|0.2520|± |0.0273|
|agieval_logiqa_en | 0|acc |0.4117|± |0.0193|
| | |acc_norm|0.4055|± |0.0193|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2087|± |0.0269|
|agieval_lsat_lr | 0|acc |0.5549|± |0.0220|
| | |acc_norm|0.5294|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.8010|± |0.0279|
| | |acc_norm|0.7913|± |0.0284|
|agieval_sat_en_without_passage| 0|acc |0.4806|± |0.0349|
| | |acc_norm|0.4612|± |0.0348|
|agieval_sat_math | 0|acc |0.4909|± |0.0338|
| | |acc_norm|0.4000|± |0.0331|
```
Average: 46.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6105|± |0.0355|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7182|± |0.0235|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5736|± |0.0308|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4596|± |0.0263|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3500|± |0.0214|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2500|± |0.0164|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5200|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3540|± |0.0214|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6900|± |0.0103|
|bigbench_ruin_names | 0|multiple_choice_grade|0.6317|± |0.0228|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2535|± |0.0138|
|bigbench_snarks | 0|multiple_choice_grade|0.7293|± |0.0331|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6744|± |0.0149|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.7400|± |0.0139|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2176|± |0.0117|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1543|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5200|± |0.0289|
```
Average: 49.70
# Benchmark Comparison Charts
## GPT4All

## AGI-Eval

## BigBench Reasoning Test

## Comparison to Mixtral Instruct:
Our benchmarks show gains in many benchmarks against Mixtral Instruct v0.1, on average, beating the flagship Mixtral model.

# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(message, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: even in 4bit, it will require more than 24GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MixtralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True)
model = MixtralForCausalLM.from_pretrained(
"NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
# Quantized Models:
## All sizes of GGUF Quantizations are available here:
### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
(Note: If you have issues with these GGUF's try TheBloke's)
## TheBloke has also quantized Hermes Mixtral in various forms:
### SFT+DPO GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
### SFT+DPO GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
### SFT GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GPTQ
### SFT+DPO AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ
### SFT AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-AWQ
## There is also an MLX version available:
### https://huggingface.co/mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit
## Exllama2 quants available here:
### https://huggingface.co/qeternity/Nous-Hermes-2-Mixtral-8x7B-SFT-4bpw-h6-exl2
(other sizes available in Qeternity's repos)
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
| null |
Non_BioNLP
|
# Nous Hermes 2 - Mixtral 8x7B - DPO

## Model description
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.
This is the SFT + DPO version of Mixtral Hermes 2, we have also released an SFT only version, for people to find which works best for them, which can be found here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO!
# Table of Contents
1. [Example Outputs](#example-outputs)
2. [Benchmark Results](#benchmark-results)
- GPT4All
- AGIEval
- BigBench
- Comparison to Mixtral-Instruct
3. [Prompt Format](#prompt-format)
4. [Inference Example Code](#inference-code)
5. [Quantized Models](#quantized-models)
## Example Outputs
### Writing Code for Data Visualization

### Writing Cyberpunk Psychedelic Poems

### Performing Backtranslation to Create Prompts from Input Text

## Benchmark Results
Nous-Hermes 2 on Mixtral 8x7B is a major improvement across the board on the benchmarks below compared to the base Mixtral model, and is the first model to beat the flagship Mixtral Finetune by MistralAI.
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5990|± |0.0143|
| | |acc_norm|0.6425|± |0.0140|
|arc_easy | 0|acc |0.8657|± |0.0070|
| | |acc_norm|0.8636|± |0.0070|
|boolq | 1|acc |0.8783|± |0.0057|
|hellaswag | 0|acc |0.6661|± |0.0047|
| | |acc_norm|0.8489|± |0.0036|
|openbookqa | 0|acc |0.3440|± |0.0213|
| | |acc_norm|0.4660|± |0.0223|
|piqa | 0|acc |0.8324|± |0.0087|
| | |acc_norm|0.8379|± |0.0086|
|winogrande | 0|acc |0.7616|± |0.0120|
```
Average: 75.70
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2402|± |0.0269|
| | |acc_norm|0.2520|± |0.0273|
|agieval_logiqa_en | 0|acc |0.4117|± |0.0193|
| | |acc_norm|0.4055|± |0.0193|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2087|± |0.0269|
|agieval_lsat_lr | 0|acc |0.5549|± |0.0220|
| | |acc_norm|0.5294|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.8010|± |0.0279|
| | |acc_norm|0.7913|± |0.0284|
|agieval_sat_en_without_passage| 0|acc |0.4806|± |0.0349|
| | |acc_norm|0.4612|± |0.0348|
|agieval_sat_math | 0|acc |0.4909|± |0.0338|
| | |acc_norm|0.4000|± |0.0331|
```
Average: 46.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6105|± |0.0355|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7182|± |0.0235|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5736|± |0.0308|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4596|± |0.0263|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3500|± |0.0214|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2500|± |0.0164|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5200|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3540|± |0.0214|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6900|± |0.0103|
|bigbench_ruin_names | 0|multiple_choice_grade|0.6317|± |0.0228|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2535|± |0.0138|
|bigbench_snarks | 0|multiple_choice_grade|0.7293|± |0.0331|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6744|± |0.0149|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.7400|± |0.0139|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2176|± |0.0117|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1543|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5200|± |0.0289|
```
Average: 49.70
# Benchmark Comparison Charts
## GPT4All

## AGI-Eval

## BigBench Reasoning Test

## Comparison to Mixtral Instruct:
Our benchmarks show gains in many benchmarks against Mixtral Instruct v0.1, on average, beating the flagship Mixtral model.

# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(message, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: even in 4bit, it will require more than 24GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MixtralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True)
model = MixtralForCausalLM.from_pretrained(
"NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
# Quantized Models:
## All sizes of GGUF Quantizations are available here:
### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
(Note: If you have issues with these GGUF's try TheBloke's)
## TheBloke has also quantized Hermes Mixtral in various forms:
### SFT+DPO GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
### SFT+DPO GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
### SFT GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GPTQ
### SFT+DPO AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ
### SFT AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-AWQ
## There is also an MLX version available:
### https://huggingface.co/mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit
## Exllama2 quants available here:
### https://huggingface.co/qeternity/Nous-Hermes-2-Mixtral-8x7B-SFT-4bpw-h6-exl2
(other sizes available in Qeternity's repos)
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
{"base_model": "mistralai/Mixtral-8x7B-v0.1", "language": ["en"], "license": "apache-2.0", "tags": ["Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation"], "model-index": [{"name": "Nous-Hermes-2-Mixtral-8x7B-DPO", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,348 |
sarwarbeing/wm-04-contrastive-learning
|
sarwarbeing
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"deberta-v2",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-09-07T11:49:23Z |
2023-09-07T20:52:21+00:00
| 10 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# sarwarbeing/wm-04-contrastive-learning
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("sarwarbeing/wm-04-contrastive-learning")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# sarwarbeing/wm-04-contrastive-learning
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("sarwarbeing/wm-04-contrastive-learning")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,349 |
rizwan-ai/distilbert-base-uncased-finetuned-emotion
|
rizwan-ai
|
text-classification
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-17T08:04:27Z |
2024-01-17T12:58:11+00:00
| 7 | 3 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.9285
name: Accuracy
- type: f1
value: 0.9279479445663346
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2157
- Accuracy: 0.9285
- F1: 0.9279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8287 | 1.0 | 250 | 0.3069 | 0.913 | 0.9119 |
| 0.2527 | 2.0 | 500 | 0.2157 | 0.9285 | 0.9279 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2157
- Accuracy: 0.9285
- F1: 0.9279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8287 | 1.0 | 250 | 0.3069 | 0.913 | 0.9119 |
| 0.2527 | 2.0 | 500 | 0.2157 | 0.9285 | 0.9279 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9285, "name": "Accuracy"}, {"type": "f1", "value": 0.9279479445663346, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,350 |
armaniii/bert-base-uncased-augmentation-indomain-bm25-sts
|
armaniii
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:20127",
"loss:CosineSimilarityLoss",
"en",
"dataset:sentence-transformers/stsb",
"arxiv:1908.10084",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-26T20:13:49Z |
2024-11-26T20:14:17+00:00
| 8 | 2 |
---
base_model: google-bert/bert-base-uncased
datasets:
- sentence-transformers/stsb
language:
- en
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:20127
- loss:CosineSimilarityLoss
widget:
- source_sentence: The man talked to a girl over the internet camera.
sentences:
- A group of elderly people pose around a dining table.
- A teenager talks to a girl over a webcam.
- There is no 'still' that is not relative to some other object.
- source_sentence: A woman is writing something.
sentences:
- Two eagles are perched on a branch.
- It refers to the maximum f-stop (which is defined as the ratio of focal length
to effective aperture diameter).
- A woman is chopping green onions.
- source_sentence: The player shoots the winning points.
sentences:
- Minimum wage laws hurt the least skilled, least productive the most.
- The basketball player is about to score points for his team.
- Sheep are grazing in the field in front of a line of trees.
- source_sentence: Stars form in star-formation regions, which itself develop from
molecular clouds.
sentences:
- Although I believe Searle is mistaken, I don't think you have found the problem.
- It may be possible for a solar system like ours to exist outside of a galaxy.
- A blond-haired child performing on the trumpet in front of a house while his younger
brother watches.
- source_sentence: While Queen may refer to both Queen regent (sovereign) or Queen
consort, the King has always been the sovereign.
sentences:
- At first, I thought this is a bit of a tricky question.
- A man sitting on the floor in a room is strumming a guitar.
- There is a very good reason not to refer to the Queen's spouse as "King" - because
they aren't the King.
model-index:
- name: SentenceTransformer based on google-bert/bert-base-uncased
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.8704036241540303
name: Pearson Cosine
- type: spearman_cosine
value: 0.8723063947160014
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8240304398880643
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8326280427400794
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.824332157368767
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8327621115149644
name: Spearman Euclidean
- type: pearson_dot
value: 0.7561120117358238
name: Pearson Dot
- type: spearman_dot
value: 0.7732899193523305
name: Spearman Dot
- type: pearson_max
value: 0.8704036241540303
name: Pearson Max
- type: spearman_max
value: 0.8723063947160014
name: Spearman Max
- type: pearson_cosine
value: 0.8341388917194029
name: Pearson Cosine
- type: spearman_cosine
value: 0.8312253997736475
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8121299512156789
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8102823785744042
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8124379587910084
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8106160221464417
name: Spearman Euclidean
- type: pearson_dot
value: 0.6947485972044003
name: Pearson Dot
- type: spearman_dot
value: 0.6858002756760537
name: Spearman Dot
- type: pearson_max
value: 0.8341388917194029
name: Pearson Max
- type: spearman_max
value: 0.8312253997736475
name: Spearman Max
---
# SentenceTransformer based on google-bert/bert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("armaniii/bert-base-uncased-augmentation-indomain-bm25-sts")
# Run inference
sentences = [
'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.',
'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.',
'A man sitting on the floor in a room is strumming a guitar.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8704 |
| **spearman_cosine** | **0.8723** |
| pearson_manhattan | 0.824 |
| spearman_manhattan | 0.8326 |
| pearson_euclidean | 0.8243 |
| spearman_euclidean | 0.8328 |
| pearson_dot | 0.7561 |
| spearman_dot | 0.7733 |
| pearson_max | 0.8704 |
| spearman_max | 0.8723 |
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8341 |
| **spearman_cosine** | **0.8312** |
| pearson_manhattan | 0.8121 |
| spearman_manhattan | 0.8103 |
| pearson_euclidean | 0.8124 |
| spearman_euclidean | 0.8106 |
| pearson_dot | 0.6947 |
| spearman_dot | 0.6858 |
| pearson_max | 0.8341 |
| spearman_max | 0.8312 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### sentence-transformers/stsb
* Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
* Size: 20,127 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
| <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
| <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
| <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### sentence-transformers/stsb
* Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 15.1 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.11 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------|:------------------------------------------------------|:------------------|
| <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> |
| <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> |
| <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:------:|:------------------------:|
| 0.0795 | 100 | 0.0526 | 0.0390 | 0.8215 |
| 0.1590 | 200 | 0.0218 | 0.0335 | 0.8415 |
| 0.2385 | 300 | 0.0186 | 0.0310 | 0.8561 |
| 0.3180 | 400 | 0.0166 | 0.0341 | 0.8479 |
| 0.3975 | 500 | 0.0176 | 0.0313 | 0.8503 |
| 0.4769 | 600 | 0.0155 | 0.0281 | 0.8652 |
| 0.5564 | 700 | 0.0148 | 0.0270 | 0.8656 |
| 0.6359 | 800 | 0.014 | 0.0277 | 0.8669 |
| 0.7154 | 900 | 0.0149 | 0.0286 | 0.8694 |
| 0.7949 | 1000 | 0.0125 | 0.0281 | 0.8724 |
| 0.8744 | 1100 | 0.013 | 0.0285 | 0.8694 |
| 0.9539 | 1200 | 0.0127 | 0.0269 | 0.8723 |
| 1.0 | 1258 | - | - | 0.8312 |
### Framework Versions
- Python: 3.9.2
- Sentence Transformers: 3.0.1
- Transformers: 4.43.1
- PyTorch: 2.3.1+cu121
- Accelerate: 0.34.2
- Datasets: 2.14.7
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on google-bert/bert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("armaniii/bert-base-uncased-augmentation-indomain-bm25-sts")
# Run inference
sentences = [
'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.',
'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.',
'A man sitting on the floor in a room is strumming a guitar.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8704 |
| **spearman_cosine** | **0.8723** |
| pearson_manhattan | 0.824 |
| spearman_manhattan | 0.8326 |
| pearson_euclidean | 0.8243 |
| spearman_euclidean | 0.8328 |
| pearson_dot | 0.7561 |
| spearman_dot | 0.7733 |
| pearson_max | 0.8704 |
| spearman_max | 0.8723 |
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8341 |
| **spearman_cosine** | **0.8312** |
| pearson_manhattan | 0.8121 |
| spearman_manhattan | 0.8103 |
| pearson_euclidean | 0.8124 |
| spearman_euclidean | 0.8106 |
| pearson_dot | 0.6947 |
| spearman_dot | 0.6858 |
| pearson_max | 0.8341 |
| spearman_max | 0.8312 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### sentence-transformers/stsb
* Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
* Size: 20,127 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
| <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
| <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
| <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### sentence-transformers/stsb
* Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 15.1 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.11 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------|:------------------------------------------------------|:------------------|
| <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> |
| <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> |
| <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:------:|:------------------------:|
| 0.0795 | 100 | 0.0526 | 0.0390 | 0.8215 |
| 0.1590 | 200 | 0.0218 | 0.0335 | 0.8415 |
| 0.2385 | 300 | 0.0186 | 0.0310 | 0.8561 |
| 0.3180 | 400 | 0.0166 | 0.0341 | 0.8479 |
| 0.3975 | 500 | 0.0176 | 0.0313 | 0.8503 |
| 0.4769 | 600 | 0.0155 | 0.0281 | 0.8652 |
| 0.5564 | 700 | 0.0148 | 0.0270 | 0.8656 |
| 0.6359 | 800 | 0.014 | 0.0277 | 0.8669 |
| 0.7154 | 900 | 0.0149 | 0.0286 | 0.8694 |
| 0.7949 | 1000 | 0.0125 | 0.0281 | 0.8724 |
| 0.8744 | 1100 | 0.013 | 0.0285 | 0.8694 |
| 0.9539 | 1200 | 0.0127 | 0.0269 | 0.8723 |
| 1.0 | 1258 | - | - | 0.8312 |
### Framework Versions
- Python: 3.9.2
- Sentence Transformers: 3.0.1
- Transformers: 4.43.1
- PyTorch: 2.3.1+cu121
- Accelerate: 0.34.2
- Datasets: 2.14.7
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "google-bert/bert-base-uncased", "datasets": ["sentence-transformers/stsb"], "language": ["en"], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:20127", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "The man talked to a girl over the internet camera.", "sentences": ["A group of elderly people pose around a dining table.", "A teenager talks to a girl over a webcam.", "There is no 'still' that is not relative to some other object."]}, {"source_sentence": "A woman is writing something.", "sentences": ["Two eagles are perched on a branch.", "It refers to the maximum f-stop (which is defined as the ratio of focal length to effective aperture diameter).", "A woman is chopping green onions."]}, {"source_sentence": "The player shoots the winning points.", "sentences": ["Minimum wage laws hurt the least skilled, least productive the most.", "The basketball player is about to score points for his team.", "Sheep are grazing in the field in front of a line of trees."]}, {"source_sentence": "Stars form in star-formation regions, which itself develop from molecular clouds.", "sentences": ["Although I believe Searle is mistaken, I don't think you have found the problem.", "It may be possible for a solar system like ours to exist outside of a galaxy.", "A blond-haired child performing on the trumpet in front of a house while his younger brother watches."]}, {"source_sentence": "While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.", "sentences": ["At first, I thought this is a bit of a tricky question.", "A man sitting on the floor in a room is strumming a guitar.", "There is a very good reason not to refer to the Queen's spouse as \"King\" - because they aren't the King."]}], "model-index": [{"name": "SentenceTransformer based on google-bert/bert-base-uncased", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts test", "type": "sts-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.8704036241540303, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8723063947160014, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.8240304398880643, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.8326280427400794, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.824332157368767, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.8327621115149644, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.7561120117358238, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.7732899193523305, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.8704036241540303, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.8723063947160014, "name": "Spearman Max"}, {"type": "pearson_cosine", "value": 0.8341388917194029, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8312253997736475, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.8121299512156789, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.8102823785744042, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.8124379587910084, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.8106160221464417, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.6947485972044003, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.6858002756760537, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.8341388917194029, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.8312253997736475, "name": "Spearman Max"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | 45,352 |
ArunIcfoss/nllb-200-1.3B-ICFOSS-Malayalam_English_Translation1.3b
|
ArunIcfoss
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:facebook/nllb-200-1.3B",
"base_model:adapter:facebook/nllb-200-1.3B",
"license:cc-by-nc-4.0",
"region:us"
] | 2024-05-20T06:56:38Z |
2024-05-22T09:32:35+00:00
| 2 | 1 |
---
base_model: facebook/nllb-200-1.3B
library_name: peft
license: cc-by-nc-4.0
metrics:
- bleu
- rouge
tags:
- generated_from_trainer
model-index:
- name: nllb-200-1.3B-ICFOSS-Malayalam_English_Translation1.3b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-200-1.3B-ICFOSS-Malayalam_English_Translation1.3b
This model is a fine-tuned version of [facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0536
- Bleu: 36.7256
- Rouge: {'rouge1': 0.6977825292445439, 'rouge2': 0.47317224666360513, 'rougeL': 0.6369586014923634, 'rougeLsum': 0.6367120144580565}
- Chrf: {'score': 63.88643397225133, 'char_order': 6, 'word_order': 0, 'beta': 2}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | Chrf |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:----------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------:|
| 1.1683 | 1.0 | 5750 | 1.0774 | 35.9761 | {'rouge1': 0.6937855960659589, 'rouge2': 0.466938063654629, 'rougeL': 0.6325990208208303, 'rougeLsum': 0.6323899971616622} | {'score': 63.363704282940446, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.1177 | 2.0 | 11500 | 1.0617 | 36.3486 | {'rouge1': 0.6957984629345982, 'rouge2': 0.47067647725021045, 'rougeL': 0.6351678391451753, 'rougeLsum': 0.6350175761315434} | {'score': 63.657728669261445, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.102 | 3.0 | 17250 | 1.0559 | 36.7216 | {'rouge1': 0.6970801919668868, 'rouge2': 0.47279660574601357, 'rougeL': 0.6364385448189633, 'rougeLsum': 0.6362592345657716} | {'score': 63.89202343434442, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.0967 | 4.0 | 23000 | 1.0545 | 36.7450 | {'rouge1': 0.6977900451765099, 'rouge2': 0.4734910607221403, 'rougeL': 0.6373405033951935, 'rougeLsum': 0.6371420919202282} | {'score': 63.918132836888965, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.0935 | 5.0 | 28750 | 1.0538 | 36.7038 | {'rouge1': 0.6978511315129863, 'rouge2': 0.4733012047244315, 'rougeL': 0.6371351829239855, 'rougeLsum': 0.6369801889854168} | {'score': 63.87115369473548, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.0928 | 6.0 | 34500 | 1.0536 | 36.7485 | {'rouge1': 0.6977169592049554, 'rouge2': 0.4734304167965041, 'rougeL': 0.636966108177003, 'rougeLsum': 0.6367749449397957} | {'score': 63.894445637643784, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.0918 | 7.0 | 40250 | 1.0536 | 36.7256 | {'rouge1': 0.6977825292445439, 'rouge2': 0.47317224666360513, 'rougeL': 0.6369586014923634, 'rougeLsum': 0.6367120144580565} | {'score': 63.88643397225133, 'char_order': 6, 'word_order': 0, 'beta': 2} |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-200-1.3B-ICFOSS-Malayalam_English_Translation1.3b
This model is a fine-tuned version of [facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0536
- Bleu: 36.7256
- Rouge: {'rouge1': 0.6977825292445439, 'rouge2': 0.47317224666360513, 'rougeL': 0.6369586014923634, 'rougeLsum': 0.6367120144580565}
- Chrf: {'score': 63.88643397225133, 'char_order': 6, 'word_order': 0, 'beta': 2}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | Chrf |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:----------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------:|
| 1.1683 | 1.0 | 5750 | 1.0774 | 35.9761 | {'rouge1': 0.6937855960659589, 'rouge2': 0.466938063654629, 'rougeL': 0.6325990208208303, 'rougeLsum': 0.6323899971616622} | {'score': 63.363704282940446, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.1177 | 2.0 | 11500 | 1.0617 | 36.3486 | {'rouge1': 0.6957984629345982, 'rouge2': 0.47067647725021045, 'rougeL': 0.6351678391451753, 'rougeLsum': 0.6350175761315434} | {'score': 63.657728669261445, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.102 | 3.0 | 17250 | 1.0559 | 36.7216 | {'rouge1': 0.6970801919668868, 'rouge2': 0.47279660574601357, 'rougeL': 0.6364385448189633, 'rougeLsum': 0.6362592345657716} | {'score': 63.89202343434442, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.0967 | 4.0 | 23000 | 1.0545 | 36.7450 | {'rouge1': 0.6977900451765099, 'rouge2': 0.4734910607221403, 'rougeL': 0.6373405033951935, 'rougeLsum': 0.6371420919202282} | {'score': 63.918132836888965, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.0935 | 5.0 | 28750 | 1.0538 | 36.7038 | {'rouge1': 0.6978511315129863, 'rouge2': 0.4733012047244315, 'rougeL': 0.6371351829239855, 'rougeLsum': 0.6369801889854168} | {'score': 63.87115369473548, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.0928 | 6.0 | 34500 | 1.0536 | 36.7485 | {'rouge1': 0.6977169592049554, 'rouge2': 0.4734304167965041, 'rougeL': 0.636966108177003, 'rougeLsum': 0.6367749449397957} | {'score': 63.894445637643784, 'char_order': 6, 'word_order': 0, 'beta': 2} |
| 1.0918 | 7.0 | 40250 | 1.0536 | 36.7256 | {'rouge1': 0.6977825292445439, 'rouge2': 0.47317224666360513, 'rougeL': 0.6369586014923634, 'rougeLsum': 0.6367120144580565} | {'score': 63.88643397225133, 'char_order': 6, 'word_order': 0, 'beta': 2} |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"base_model": "facebook/nllb-200-1.3B", "library_name": "peft", "license": "cc-by-nc-4.0", "metrics": ["bleu", "rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "nllb-200-1.3B-ICFOSS-Malayalam_English_Translation1.3b", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,353 |
Suechun/test-translation-ja2en
|
Suechun
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-18T16:53:37Z |
2023-07-19T04:45:42+00:00
| 25 | 0 |
---
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: test-translation-ja2en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-translation-ja2en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0557
- Bleu: 18.4999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.3.2
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-translation-ja2en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0557
- Bleu: 18.4999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.3.2
- Tokenizers 0.13.3
|
{"license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "test-translation-ja2en", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,354 |
HAriGa/my_awesome_model
|
HAriGa
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:gnad10",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-26T23:50:52Z |
2023-01-27T02:28:29+00:00
| 128 | 0 |
---
datasets:
- gnad10
license: mit
metrics:
- f1
tags:
- generated_from_trainer
model-index:
- name: my_awesome_model
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: gnad10
type: gnad10
config: default
split: test
args: default
metrics:
- type: f1
value: 0.9001230044993416
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the gnad10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3414
- F1: 0.9001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5884 | 1.0 | 578 | 0.3510 | 0.8940 |
| 0.2389 | 2.0 | 1156 | 0.3414 | 0.9001 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.0.dev20230126+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the gnad10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3414
- F1: 0.9001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5884 | 1.0 | 578 | 0.3510 | 0.8940 |
| 0.2389 | 2.0 | 1156 | 0.3414 | 0.9001 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.0.dev20230126+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
{"datasets": ["gnad10"], "license": "mit", "metrics": ["f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "my_awesome_model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "gnad10", "type": "gnad10", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "f1", "value": 0.9001230044993416, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,355 |
hongyin/chat-awareness-0.8b
|
hongyin
|
text-generation
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"en",
"zh",
"arxiv:2302.13173",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-07-07T12:00:00Z |
2023-09-20T09:24:23+00:00
| 38 | 1 |
---
language:
- en
- zh
pipeline_tag: text-generation
---
## hongyin/chat-awareness-0.8b
This is a lightweight English-Chinese bilingual chat assistant with a parameter size of 0.8b, dedicated to reducing the computational cost of inference. The training process consists of two parts: (1) NTP task. (2) Instruction tuning.
```python
### Human: Paraphrasing the text I love you.
### Assistant: I have a crash on you.
```
## Bibtex entry and citation info
Please cite if you find it helpful.
```
@article{zhu2023metaaid,
title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models},
author={Zhu, Hongyin},
journal={arXiv preprint arXiv:2302.13173},
year={2023}
}
```
---
license: other
---
| null |
TBD
|
## hongyin/chat-awareness-0.8b
This is a lightweight English-Chinese bilingual chat assistant with a parameter size of 0.8b, dedicated to reducing the computational cost of inference. The training process consists of two parts: (1) NTP task. (2) Instruction tuning.
```python
### Human: Paraphrasing the text I love you.
### Assistant: I have a crash on you.
```
## Bibtex entry and citation info
Please cite if you find it helpful.
```
@article{zhu2023metaaid,
title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models},
author={Zhu, Hongyin},
journal={arXiv preprint arXiv:2302.13173},
year={2023}
}
```
---
license: other
---
|
{"language": ["en", "zh"], "pipeline_tag": "text-generation"}
|
task
|
[
"PARAPHRASING"
] | 45,356 |
basic-go/FRED-T5-large-habr-summarizer
|
basic-go
|
summarization
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"ru",
"base_model:ai-forever/FRED-T5-large",
"base_model:finetune:ai-forever/FRED-T5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-12-17T20:40:02Z |
2023-12-18T10:39:58+00:00
| 650 | 9 |
---
base_model: ai-forever/FRED-T5-large
language:
- ru
license: apache-2.0
pipeline_tag: summarization
---
Это абстрактивный суммаризатор для статей с Habr, обученный на данных тестового задания в рамках [GPT Week](https://shad.yandex.ru/gptweek).
В качестве базовой модели выступает [ai-forever/FRED-T5-large](https://huggingface.co/ai-forever/FRED-T5-large).
## Использование
[<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>](https://colab.research.google.com/drive/1bREOX9RFwRZH0A3b2UsWwUBdEvBczteW?usp=sharing)
Использование подразумевает пред- и постобработку текста, как в этом примере:
```python
import re
from transformers import pipeline
clean_expr = re.compile(r"[\xa0\x1a\x16\x1b\x17\x15\u2004]")
spaces_expr = re.compile(r"\s{2,}")
def process_text(text: str) -> str:
"""Осуществляет пред- и постобработку текста."""
text = clean_expr.sub(" ", text)
text = spaces_expr.sub(" ", text)
if "." in text:
index = text.rindex(".")
text = text[:index + 1]
return text
summarizer = pipeline("summarization", model="basic-go/FRED-T5-large-habr-summarizer", device=0)
ARTICLE = """
Трогательный инженерный бизиборд, Новый Год и волонтеры / Хабр
31-12-2019
Зачем нужен бизиборд, когда у папы много интересных инструментов? Опасные убраны, а пассатижами пусть ребенок играет, шуруповерт нажимает, ручки осциллографа крутит. Но наступил момент и сделан инженерный бизиборд. Таким каким он должен быть, в противоположность фабричному изделию – красивому, глянцевому, красочному, со всеми закругленными углами и краями. Разноцветному для глаз, но однородному и скучному на ощупь. С чего все началось Началось с объявления чуть больше года назад в социальной сети, что московскому психоневралогическому интернату (ПНИ) требуются бизиборды: Что это такое Что такое ПНИ трудно представить, но можно посмотреть в youtube. И если волонтеры пишут, что в ПНИ проблема дефицита сенсорных впечатлений, значит это ОЧЕНЬ большая проблема. Мы, взрослые, ходим по строительному гипермаркету и крутим в руках железки, щупаем трубки, проводим рукой по оплетке и т.д. Женщины в магазине трогают вообще всё. Я инженер, регулярно покупаю какие-то детали, чтобы на столе лежали, и я достаточное количество раз их потрогал и постепенно понял – смогу ли применить как задумал. Каждый в детстве лежал на кровати и бесконечно долго разглядывал и щупал ковер. Или, провинившись, стоял в углу и ковырял обои. Щупать – это способность и потребность человека. Когда мы с товарищем увидели бизиборды, которые планируется купить для ПНИ, решили сделать максимально трогательный бизиборд сами. Трогательный в прямом смысле. Куплены парные раскручивающиеся фитинги из пластика, никелированной стали, бронзы, чугуна оцинкованного и черного чугуна. Медные трубки, гофрированная трубка из нержавеющей стали, по которой со звоном долго спускается шайба, если ее поднять и отпустить. Самый красивый материал, конечно, никелированная сталь. Но красота не главное, главное контраст. Поэтому рядом с никелированной деталью висит деталь из черного чугуна, и трудно сказать – какой материал даст больший чувственный опыт. «Чугунные игрушки» из анекдота – не насмешка, а необходимость; ребенку важно почувствовать тяжесть и грубую фактуру материала. Все фитинги по-разному закручиваются, интересней других крутить сложные составные фитинги для труб ПНД. Гофра для унитаза удивляет диаметром, и ее можно растягивать. Пеньковая веревка впечатляюще колюча. Отрезной диск красивый, брутальный, и на ощупь грубый. Образцы керамики вдали выглядят обычно, но красивые, если рассматривать их многослойное покрытие вблизи, проводя по рельефу пальцами. Как это сделано За основу взят лист фанеры 1500×1000×10 мм. В строительном гипермаркете есть услуга пила в размер, к основному листу дополнительно нарезали ребер жесткости. С размером листа ошибся, лист чуть-чуть не влез в автомобиль, а гипермаркет уже закрылся и пришлось в ночь идти с этим листом домой пешком – не надо так. Ребра жесткости прикручены шурупами 16 мм – стянут оба листа, но не выйдут с другой стороны. Всюду, где можно, использован детский труд. Детский труд экономит деньги – не нужно покупать очередной конструктор. Вон папе купили новое кресло, иди распаковывай и собирай, нужна будет помощь — позовешь. И детский труд – это педагогично, ребенок вырастет умеющим не только работать руками, но и делегировать задачи. Дети учатся не тому, чему их учат взрослые, а тому что взрослые при детях делают. Покрасили на лестничной площадке, постелив полиэтилен. Закрепили детали тросами в ПВХ-оплетке. Стальные тросы в ПВХ-оплетке — прекрасный материал, только пришлось отдельно ехать в магазин за кусачками, которые этот трос смогут перекусить. На обратной стороне тросы затянули узлами, а узлы залили термоклеем. Термоклей имеет хорошую адгезию к ПВХ, залитый узел сам не распустится и выдернуть его с лицевой стороны стенда невозможно. Чем все закончилось В «наш» ПНИ уже успели закупить готовые бизиборды, меня переправили к волонтерам другого ПНИ. Там фотографии посмотрели и сказали – а давайте отправим ваш стенд в Азовский интернат для умственно отсталых детей. Сказали – и отправили. В новогоднюю ночь приехала еще одна бригада волонтеров, погрузили стенд в огромный прицеп к АЗЛК 2141 уже забитый коробками с подарками интернату, и в метель уехали. Очень романтично. Дальше я долго пытался получить какую-то обратную связь от волонтеров, но смог лишь узнать телефон интерната. Там звонку удивились, обрадовались, сказали, что не поняли, что это им привезли и отдали в уголок труда. И сказали, что примут любую помощь, и чтобы я приходил и помогал. Я посмотрел фотографии интерната, посмотрел на детей и увидел, что стенд отправили совершенно не туда. Ситуация в интернате по сравнению с ПНИ роскошная, проблемы запертости в своем теле и на своей кровати у детей нет. Жил бы рядом с интернатом – приходил бы по субботам, учил бы этих детей программировать Ардуино. Опыт преподавания робототехники в школе есть, справился бы и в интернате. Но между нами 1100 км. А стенд все же нужен был в ПНИ. Выводы Все сделано правильно, только нужно самому говорить с тем, для кого что-то делаешь. Самому изучать потребность, самому получать обратную связь и самому делать следующий шаг. Стенд делается относительно легко, технология «фанера + тросы в пвх + термоклей» рабочая, предлагаю использовать наш опыт.
"""
ARTICLE = process_text(ARTICLE)
response = summarizer(ARTICLE, max_new_tokens=360, num_beams=2, do_sample=True, top_k=100,
repetition_penalty=2.5, length_penalty=1.0)
summary = process_text(response[0]["summary_text"])
print(summary)
# Трогательный инженерный бизиборд, сделанный для московского психоневралогического интерната, был сделан самим автором. Он использовал парные раскручивающиеся фитинги из пластика, никелированной стали, бронзы, чугуна оцинкованного и черного чугуна, а также детские труд и инструменты. В новогоднюю ночь стенд был отправлен в Азовский интернат для умственно отсталых детей. Автор выражает благодарность волонтерам, которые помогли сделать этот стенд, и предлагает использовать их опыт для улучшения ситуации в интернате.
```
## История подходов
Метрики на тестовом датасете в зависимости от подхода представлены в таблице ниже.
| \# | extra | SFT | DPO | post-processing | Метрика |
|-|-|-|-|-|-|
| 1| | ✓ | | | 0.5168 |
| 2| | ✓ | | ✓ | 0.5525 |
| 3| | ✓ | ✓| ✓ | 0.5664 |
| 4| ✓ | ✓ | | ✓ | 0.6075 |
На начальном этапе у модели выявлены следующие проблемы, негативно влияющие на метрику:
- *Генерация байт-последовательностей*.
Проблема решена пред- и постобработкой.
- *Модель часто не завершала последнее предложение.*
Решено через постобработку отрезанием незавершённой части. Для итоговой модели постобработка даёт небольшой вклад в метрику.
- *Повторение и перефразирование фрагментом текста.*
В итоговой модели проблема значительно нивелирована, однако повторы могут встречаться.
### 1. SFT
- Делаем срез из обучающего набора данных по правилу `metric > 0.65`.
- Разбиваем полученный срез на обучающую и валидационную часть в соотношении 4 : 1.
- Обучаемся 4 эпохи с уменьшающимся шагом, начиная с 1e-4.
### 2. SFT + post-processing
- Применяем функцию `process_text` из примера к текстовому ответу модели.
### 3. SFT + DPO + post-processing
- Инференс на обучающем срезе SFT-моделью.
- Одна эпоха DPO на полученных кандидатах с шагом 1e-7.
### 4. Extra + SFT + post-processing
Extra означает обучаться больше на большем датасете.
- Инференс полученной на предыдущем шаге моделью на половине *исходного* обучающего датасета.
- Объединение с предыдущими генерациями и отбор лучшего кандидата для каждой статьи.
- Делаем новый срез кандидатов по правилу `metric > 0.65`, получаем увеличение набора данных на 30%.
- Обучаемся с нуля 8 эпох с уменьшающимся шагом, начиная с 1e-4.
## Что не получилось
Не удалось завести конфигурацию `extra + SFT + DPO [+ post-processing]`: нестабильное обучение, с некоторого момента модель начинала деградировать.
## Возможные улучшения
В четвёртом подходе (extra and etc.) получилось использовать примерно 20% от всего исходного обучающего датасета. Меняя только параметры генерации и запуская снова инференс, можно добиться большего числа кандидатов, проходящих порог `metric > 0.65`. Предполагается, что обучение модели на датасете с большим числом качественных кандидатов (за счёт выросшего разнообразия) должно приводить к увеличению целевой метрики.
| null |
Non_BioNLP
|
Это абстрактивный суммаризатор для статей с Habr, обученный на данных тестового задания в рамках [GPT Week](https://shad.yandex.ru/gptweek).
В качестве базовой модели выступает [ai-forever/FRED-T5-large](https://huggingface.co/ai-forever/FRED-T5-large).
## Использование
[<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>](https://colab.research.google.com/drive/1bREOX9RFwRZH0A3b2UsWwUBdEvBczteW?usp=sharing)
Использование подразумевает пред- и постобработку текста, как в этом примере:
```python
import re
from transformers import pipeline
clean_expr = re.compile(r"[\xa0\x1a\x16\x1b\x17\x15\u2004]")
spaces_expr = re.compile(r"\s{2,}")
def process_text(text: str) -> str:
"""Осуществляет пред- и постобработку текста."""
text = clean_expr.sub(" ", text)
text = spaces_expr.sub(" ", text)
if "." in text:
index = text.rindex(".")
text = text[:index + 1]
return text
summarizer = pipeline("summarization", model="basic-go/FRED-T5-large-habr-summarizer", device=0)
ARTICLE = """
Трогательный инженерный бизиборд, Новый Год и волонтеры / Хабр
31-12-2019
Зачем нужен бизиборд, когда у папы много интересных инструментов? Опасные убраны, а пассатижами пусть ребенок играет, шуруповерт нажимает, ручки осциллографа крутит. Но наступил момент и сделан инженерный бизиборд. Таким каким он должен быть, в противоположность фабричному изделию – красивому, глянцевому, красочному, со всеми закругленными углами и краями. Разноцветному для глаз, но однородному и скучному на ощупь. С чего все началось Началось с объявления чуть больше года назад в социальной сети, что московскому психоневралогическому интернату (ПНИ) требуются бизиборды: Что это такое Что такое ПНИ трудно представить, но можно посмотреть в youtube. И если волонтеры пишут, что в ПНИ проблема дефицита сенсорных впечатлений, значит это ОЧЕНЬ большая проблема. Мы, взрослые, ходим по строительному гипермаркету и крутим в руках железки, щупаем трубки, проводим рукой по оплетке и т.д. Женщины в магазине трогают вообще всё. Я инженер, регулярно покупаю какие-то детали, чтобы на столе лежали, и я достаточное количество раз их потрогал и постепенно понял – смогу ли применить как задумал. Каждый в детстве лежал на кровати и бесконечно долго разглядывал и щупал ковер. Или, провинившись, стоял в углу и ковырял обои. Щупать – это способность и потребность человека. Когда мы с товарищем увидели бизиборды, которые планируется купить для ПНИ, решили сделать максимально трогательный бизиборд сами. Трогательный в прямом смысле. Куплены парные раскручивающиеся фитинги из пластика, никелированной стали, бронзы, чугуна оцинкованного и черного чугуна. Медные трубки, гофрированная трубка из нержавеющей стали, по которой со звоном долго спускается шайба, если ее поднять и отпустить. Самый красивый материал, конечно, никелированная сталь. Но красота не главное, главное контраст. Поэтому рядом с никелированной деталью висит деталь из черного чугуна, и трудно сказать – какой материал даст больший чувственный опыт. «Чугунные игрушки» из анекдота – не насмешка, а необходимость; ребенку важно почувствовать тяжесть и грубую фактуру материала. Все фитинги по-разному закручиваются, интересней других крутить сложные составные фитинги для труб ПНД. Гофра для унитаза удивляет диаметром, и ее можно растягивать. Пеньковая веревка впечатляюще колюча. Отрезной диск красивый, брутальный, и на ощупь грубый. Образцы керамики вдали выглядят обычно, но красивые, если рассматривать их многослойное покрытие вблизи, проводя по рельефу пальцами. Как это сделано За основу взят лист фанеры 1500×1000×10 мм. В строительном гипермаркете есть услуга пила в размер, к основному листу дополнительно нарезали ребер жесткости. С размером листа ошибся, лист чуть-чуть не влез в автомобиль, а гипермаркет уже закрылся и пришлось в ночь идти с этим листом домой пешком – не надо так. Ребра жесткости прикручены шурупами 16 мм – стянут оба листа, но не выйдут с другой стороны. Всюду, где можно, использован детский труд. Детский труд экономит деньги – не нужно покупать очередной конструктор. Вон папе купили новое кресло, иди распаковывай и собирай, нужна будет помощь — позовешь. И детский труд – это педагогично, ребенок вырастет умеющим не только работать руками, но и делегировать задачи. Дети учатся не тому, чему их учат взрослые, а тому что взрослые при детях делают. Покрасили на лестничной площадке, постелив полиэтилен. Закрепили детали тросами в ПВХ-оплетке. Стальные тросы в ПВХ-оплетке — прекрасный материал, только пришлось отдельно ехать в магазин за кусачками, которые этот трос смогут перекусить. На обратной стороне тросы затянули узлами, а узлы залили термоклеем. Термоклей имеет хорошую адгезию к ПВХ, залитый узел сам не распустится и выдернуть его с лицевой стороны стенда невозможно. Чем все закончилось В «наш» ПНИ уже успели закупить готовые бизиборды, меня переправили к волонтерам другого ПНИ. Там фотографии посмотрели и сказали – а давайте отправим ваш стенд в Азовский интернат для умственно отсталых детей. Сказали – и отправили. В новогоднюю ночь приехала еще одна бригада волонтеров, погрузили стенд в огромный прицеп к АЗЛК 2141 уже забитый коробками с подарками интернату, и в метель уехали. Очень романтично. Дальше я долго пытался получить какую-то обратную связь от волонтеров, но смог лишь узнать телефон интерната. Там звонку удивились, обрадовались, сказали, что не поняли, что это им привезли и отдали в уголок труда. И сказали, что примут любую помощь, и чтобы я приходил и помогал. Я посмотрел фотографии интерната, посмотрел на детей и увидел, что стенд отправили совершенно не туда. Ситуация в интернате по сравнению с ПНИ роскошная, проблемы запертости в своем теле и на своей кровати у детей нет. Жил бы рядом с интернатом – приходил бы по субботам, учил бы этих детей программировать Ардуино. Опыт преподавания робототехники в школе есть, справился бы и в интернате. Но между нами 1100 км. А стенд все же нужен был в ПНИ. Выводы Все сделано правильно, только нужно самому говорить с тем, для кого что-то делаешь. Самому изучать потребность, самому получать обратную связь и самому делать следующий шаг. Стенд делается относительно легко, технология «фанера + тросы в пвх + термоклей» рабочая, предлагаю использовать наш опыт.
"""
ARTICLE = process_text(ARTICLE)
response = summarizer(ARTICLE, max_new_tokens=360, num_beams=2, do_sample=True, top_k=100,
repetition_penalty=2.5, length_penalty=1.0)
summary = process_text(response[0]["summary_text"])
print(summary)
# Трогательный инженерный бизиборд, сделанный для московского психоневралогического интерната, был сделан самим автором. Он использовал парные раскручивающиеся фитинги из пластика, никелированной стали, бронзы, чугуна оцинкованного и черного чугуна, а также детские труд и инструменты. В новогоднюю ночь стенд был отправлен в Азовский интернат для умственно отсталых детей. Автор выражает благодарность волонтерам, которые помогли сделать этот стенд, и предлагает использовать их опыт для улучшения ситуации в интернате.
```
## История подходов
Метрики на тестовом датасете в зависимости от подхода представлены в таблице ниже.
| \# | extra | SFT | DPO | post-processing | Метрика |
|-|-|-|-|-|-|
| 1| | ✓ | | | 0.5168 |
| 2| | ✓ | | ✓ | 0.5525 |
| 3| | ✓ | ✓| ✓ | 0.5664 |
| 4| ✓ | ✓ | | ✓ | 0.6075 |
На начальном этапе у модели выявлены следующие проблемы, негативно влияющие на метрику:
- *Генерация байт-последовательностей*.
Проблема решена пред- и постобработкой.
- *Модель часто не завершала последнее предложение.*
Решено через постобработку отрезанием незавершённой части. Для итоговой модели постобработка даёт небольшой вклад в метрику.
- *Повторение и перефразирование фрагментом текста.*
В итоговой модели проблема значительно нивелирована, однако повторы могут встречаться.
### 1. SFT
- Делаем срез из обучающего набора данных по правилу `metric > 0.65`.
- Разбиваем полученный срез на обучающую и валидационную часть в соотношении 4 : 1.
- Обучаемся 4 эпохи с уменьшающимся шагом, начиная с 1e-4.
### 2. SFT + post-processing
- Применяем функцию `process_text` из примера к текстовому ответу модели.
### 3. SFT + DPO + post-processing
- Инференс на обучающем срезе SFT-моделью.
- Одна эпоха DPO на полученных кандидатах с шагом 1e-7.
### 4. Extra + SFT + post-processing
Extra означает обучаться больше на большем датасете.
- Инференс полученной на предыдущем шаге моделью на половине *исходного* обучающего датасета.
- Объединение с предыдущими генерациями и отбор лучшего кандидата для каждой статьи.
- Делаем новый срез кандидатов по правилу `metric > 0.65`, получаем увеличение набора данных на 30%.
- Обучаемся с нуля 8 эпох с уменьшающимся шагом, начиная с 1e-4.
## Что не получилось
Не удалось завести конфигурацию `extra + SFT + DPO [+ post-processing]`: нестабильное обучение, с некоторого момента модель начинала деградировать.
## Возможные улучшения
В четвёртом подходе (extra and etc.) получилось использовать примерно 20% от всего исходного обучающего датасета. Меняя только параметры генерации и запуская снова инференс, можно добиться большего числа кандидатов, проходящих порог `metric > 0.65`. Предполагается, что обучение модели на датасете с большим числом качественных кандидатов (за счёт выросшего разнообразия) должно приводить к увеличению целевой метрики.
|
{"base_model": "ai-forever/FRED-T5-large", "language": ["ru"], "license": "apache-2.0", "pipeline_tag": "summarization"}
|
task
|
[
"SUMMARIZATION"
] | 45,357 |
GEETHANAYAGI/model_trans_indictrans2
|
GEETHANAYAGI
|
translation
|
[
"transformers",
"machine-translation",
"indic-languages",
"english-to-hindi",
"text-translation",
"translation",
"dataset:Helsinki-NLP/opus-100",
"dataset:satpalsr/indicCorpv2",
"license:unknown",
"endpoints_compatible",
"region:us"
] | 2024-12-07T16:18:41Z |
2024-12-07T16:51:23+00:00
| 0 | 0 |
---
datasets:
- Helsinki-NLP/opus-100
- satpalsr/indicCorpv2
library_name: transformers
license: unknown
metrics:
- bleu
- rouge
- ter
pipeline_tag: translation
tags:
- machine-translation
- indic-languages
- english-to-hindi
- text-translation
---
| null |
Non_BioNLP
|
{"datasets": ["Helsinki-NLP/opus-100", "satpalsr/indicCorpv2"], "library_name": "transformers", "license": "unknown", "metrics": ["bleu", "rouge", "ter"], "pipeline_tag": "translation", "tags": ["machine-translation", "indic-languages", "english-to-hindi", "text-translation"]}
|
task
|
[
"TRANSLATION"
] | 45,358 |
|
RichardErkhov/matsant01_-_STEMerald-2b-gguf
|
RichardErkhov
| null |
[
"gguf",
"arxiv:2403.08295",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-10-03T17:39:42Z |
2024-10-03T22:00:25+00:00
| 592 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
STEMerald-2b - GGUF
- Model creator: https://huggingface.co/matsant01/
- Original model: https://huggingface.co/matsant01/STEMerald-2b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [STEMerald-2b.Q2_K.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q2_K.gguf) | Q2_K | 1.08GB |
| [STEMerald-2b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.IQ3_XS.gguf) | IQ3_XS | 1.16GB |
| [STEMerald-2b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.IQ3_S.gguf) | IQ3_S | 1.2GB |
| [STEMerald-2b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [STEMerald-2b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.IQ3_M.gguf) | IQ3_M | 1.22GB |
| [STEMerald-2b.Q3_K.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q3_K.gguf) | Q3_K | 1.29GB |
| [STEMerald-2b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [STEMerald-2b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [STEMerald-2b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [STEMerald-2b.Q4_0.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q4_0.gguf) | Q4_0 | 1.44GB |
| [STEMerald-2b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [STEMerald-2b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [STEMerald-2b.Q4_K.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q4_K.gguf) | Q4_K | 1.52GB |
| [STEMerald-2b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [STEMerald-2b.Q4_1.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q4_1.gguf) | Q4_1 | 1.56GB |
| [STEMerald-2b.Q5_0.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q5_0.gguf) | Q5_0 | 1.68GB |
| [STEMerald-2b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [STEMerald-2b.Q5_K.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q5_K.gguf) | Q5_K | 1.71GB |
| [STEMerald-2b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [STEMerald-2b.Q5_1.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q5_1.gguf) | Q5_1 | 1.79GB |
| [STEMerald-2b.Q6_K.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q6_K.gguf) | Q6_K | 1.92GB |
| [STEMerald-2b.Q8_0.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q8_0.gguf) | Q8_0 | 2.49GB |
Original model description:
---
language: en
license: apache-2.0
datasets:
- derek-thomas/ScienceQA
- allenai/ai2_arc
tags:
- education
- stem
- computer science
- data science
- engineering
- biology
- chemistry
---
# STEMerald-2b
**Model name:** STEMerald-2b
**Model description:**
STEMerald-2b is a fine-tuned version of the Gemma-2b model, designed specifically for answering university-level STEM multiple-choice questions. This model leverages advanced fine-tuning techniques, including Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO), to enhance its accuracy and reliability in providing educational support.
<p align="center">
<img src="STEMerald_pic.jpeg" alt="STEMerald picture" width="400"/>
</p>
## Model Details
**Base Model:** [Gemma-2b](https://arxiv.org/abs/2403.08295)
**Architecture:** Decoder-only Language Model (Causal)
**Parameters:** 2.51 billion
**Quantized Version:** STEMerald-2b-4bit (with 4-bit NormalFloat)
**Training Framework:** PyTorch with Hugging Face Transformers
## Datasets
The model was fine-tuned on a variety of datasets tailored for STEM education, including:
- **EPFL Preference Pairs Dataset:** 1522 university-level STEM questions with 26k preference pairs, annotated by students using ChatGPT-3.5 with Chain-of-Thought (CoT).
- **Stack Exchange Dataset:** Questions and answers from various topics such as math, computer science, and engineering.
- **Orca-Math:** 200k grade-school math word problems to enhance reasoning capabilities.
- **EPFL MCQA Dataset**: Dataset of multiple-choice questions with explanation (for CoT) extracted from the winning pairs of EPFL preference pairs.
- **ScienceQA:** Multiple-choice questions on biology, physics, chemistry, economics, earth science, and engineering practices.
- **AI2 Reasoning Challenge (ARC):** Grade-school level multiple-choice science questions.
## Training Process
The training process for STEMerald-2b involved multiple steps:
1. **Supervised Fine-Tuning (SFT):** Initial training on datasets like Orca-Math to improve reasoning abilities.
2. **Direct Preference Optimization (DPO):** Training on preference pairs from EPFL and Stack Exchange datasets to align model outputs with preferred answers.
3. **MCQA Fine-Tuning:** Specialization for multiple-choice question answering using datasets like ScienceQA and ARC.
## Performance
The performance of STEMerald-2b was evaluated using various metrics:
- **Accuracy:** The model achieved high accuracy across multiple test sets, demonstrating its effectiveness in answering STEM questions.
- **Qualitative Evaluation:** The model's answers were evaluated for logical consistency, truthfulness, clarity, and coherence with the final answer.
### Results
| Model Version | Accuracy (Non-Quantized) | Accuracy (Quantized) |
|-----------------------------------|--------------------------|----------------------|
| it-ORCA-DPO-MCQA _(STEMerald-2b)_ | 0.750 | 0.720 |
| it-DPO-MCQA | 0.744 | 0.720 |
| it-MCQA | 0.736 | 0.700 |
| it-ORCA-MCQA | 0.722 | 0.714 |
| MCQA | 0.702 | 0.654 |
| DPO-MCQA | 0.694 | 0.674 |
| Gemma-it-OneShot | 0.546 | 0.520 |
| Gemma-it | 0.518 | 0.518 |
Micro-averaged accuracy over three MCQA test sets(EPFL MCQA, ScienceQA and ARC).
## Use Cases
STEMerald-2b can be utilized as a STEM course assistant, providing support in areas such as:
- Answering university-level multiple-choice STEM questions.
- Offering detailed explanations and reasoning for answers.
- Enhancing student engagement and learning efficiency during independent studies.
## Ethical Considerations
While STEMerald-2b aims to provide accurate and helpful responses, it is important to consider potential ethical implications:
- **Over-Reliance:** Students might become overly dependent on the model for answers, potentially affecting their independent learning and problem-solving skills.
- **Accuracy:** Although efforts were made to ensure the truthfulness of responses, there is still a possibility of incorrect answers. Teacher supervision is crucial.
## Limitations
- The model's performance may vary based on the specific context and nature of the questions.
- Quantization reduces memory footprint but may slightly affect accuracy.
## Conclusion
STEMerald-2b offers a promising solution for enhancing STEM education through advanced language model capabilities. By leveraging fine-tuning techniques and comprehensive datasets, it aims to provide accurate and accessible learning support for students.
## How to Use
You can use the model directly with the `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("matsant01/STEMerald-2b")
model = AutoModelForCausalLM.from_pretrained("matsant01/STEMerald-2b")
input_text = "Question: What is the derivative of x^2? \nOptions: A. 4x B. 2*x^2 C. 2x D. 2\nAnswer:"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For the quantized version, use:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_type="nf4"
)
tokenizer = AutoTokenizer.from_pretrained("matsant01/STEMerald-2b-4bit")
model = AutoModelForCausalLM.from_pretrained("matsant01/STEMerald-2b-4bit", quantization_config=quantization_config)
```
## Acknowledgements
We acknowledge the contributions of the EPFL and Stack Exchange communities for their invaluable datasets, and the Hugging Face team for their support and tools that made this project possible.
## Contact
For any questions or feedback, please contact:
- [Antonio Mari](https://github.com/antoniomari) ([email protected])
- [Matteo Santelmo](https://github.com/matsant01) ([email protected])
- [Stefano Viel](https://github.com/stefanoviel) ([email protected])
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
STEMerald-2b - GGUF
- Model creator: https://huggingface.co/matsant01/
- Original model: https://huggingface.co/matsant01/STEMerald-2b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [STEMerald-2b.Q2_K.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q2_K.gguf) | Q2_K | 1.08GB |
| [STEMerald-2b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.IQ3_XS.gguf) | IQ3_XS | 1.16GB |
| [STEMerald-2b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.IQ3_S.gguf) | IQ3_S | 1.2GB |
| [STEMerald-2b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [STEMerald-2b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.IQ3_M.gguf) | IQ3_M | 1.22GB |
| [STEMerald-2b.Q3_K.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q3_K.gguf) | Q3_K | 1.29GB |
| [STEMerald-2b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [STEMerald-2b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [STEMerald-2b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [STEMerald-2b.Q4_0.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q4_0.gguf) | Q4_0 | 1.44GB |
| [STEMerald-2b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [STEMerald-2b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [STEMerald-2b.Q4_K.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q4_K.gguf) | Q4_K | 1.52GB |
| [STEMerald-2b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [STEMerald-2b.Q4_1.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q4_1.gguf) | Q4_1 | 1.56GB |
| [STEMerald-2b.Q5_0.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q5_0.gguf) | Q5_0 | 1.68GB |
| [STEMerald-2b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [STEMerald-2b.Q5_K.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q5_K.gguf) | Q5_K | 1.71GB |
| [STEMerald-2b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [STEMerald-2b.Q5_1.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q5_1.gguf) | Q5_1 | 1.79GB |
| [STEMerald-2b.Q6_K.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q6_K.gguf) | Q6_K | 1.92GB |
| [STEMerald-2b.Q8_0.gguf](https://huggingface.co/RichardErkhov/matsant01_-_STEMerald-2b-gguf/blob/main/STEMerald-2b.Q8_0.gguf) | Q8_0 | 2.49GB |
Original model description:
---
language: en
license: apache-2.0
datasets:
- derek-thomas/ScienceQA
- allenai/ai2_arc
tags:
- education
- stem
- computer science
- data science
- engineering
- biology
- chemistry
---
# STEMerald-2b
**Model name:** STEMerald-2b
**Model description:**
STEMerald-2b is a fine-tuned version of the Gemma-2b model, designed specifically for answering university-level STEM multiple-choice questions. This model leverages advanced fine-tuning techniques, including Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO), to enhance its accuracy and reliability in providing educational support.
<p align="center">
<img src="STEMerald_pic.jpeg" alt="STEMerald picture" width="400"/>
</p>
## Model Details
**Base Model:** [Gemma-2b](https://arxiv.org/abs/2403.08295)
**Architecture:** Decoder-only Language Model (Causal)
**Parameters:** 2.51 billion
**Quantized Version:** STEMerald-2b-4bit (with 4-bit NormalFloat)
**Training Framework:** PyTorch with Hugging Face Transformers
## Datasets
The model was fine-tuned on a variety of datasets tailored for STEM education, including:
- **EPFL Preference Pairs Dataset:** 1522 university-level STEM questions with 26k preference pairs, annotated by students using ChatGPT-3.5 with Chain-of-Thought (CoT).
- **Stack Exchange Dataset:** Questions and answers from various topics such as math, computer science, and engineering.
- **Orca-Math:** 200k grade-school math word problems to enhance reasoning capabilities.
- **EPFL MCQA Dataset**: Dataset of multiple-choice questions with explanation (for CoT) extracted from the winning pairs of EPFL preference pairs.
- **ScienceQA:** Multiple-choice questions on biology, physics, chemistry, economics, earth science, and engineering practices.
- **AI2 Reasoning Challenge (ARC):** Grade-school level multiple-choice science questions.
## Training Process
The training process for STEMerald-2b involved multiple steps:
1. **Supervised Fine-Tuning (SFT):** Initial training on datasets like Orca-Math to improve reasoning abilities.
2. **Direct Preference Optimization (DPO):** Training on preference pairs from EPFL and Stack Exchange datasets to align model outputs with preferred answers.
3. **MCQA Fine-Tuning:** Specialization for multiple-choice question answering using datasets like ScienceQA and ARC.
## Performance
The performance of STEMerald-2b was evaluated using various metrics:
- **Accuracy:** The model achieved high accuracy across multiple test sets, demonstrating its effectiveness in answering STEM questions.
- **Qualitative Evaluation:** The model's answers were evaluated for logical consistency, truthfulness, clarity, and coherence with the final answer.
### Results
| Model Version | Accuracy (Non-Quantized) | Accuracy (Quantized) |
|-----------------------------------|--------------------------|----------------------|
| it-ORCA-DPO-MCQA _(STEMerald-2b)_ | 0.750 | 0.720 |
| it-DPO-MCQA | 0.744 | 0.720 |
| it-MCQA | 0.736 | 0.700 |
| it-ORCA-MCQA | 0.722 | 0.714 |
| MCQA | 0.702 | 0.654 |
| DPO-MCQA | 0.694 | 0.674 |
| Gemma-it-OneShot | 0.546 | 0.520 |
| Gemma-it | 0.518 | 0.518 |
Micro-averaged accuracy over three MCQA test sets(EPFL MCQA, ScienceQA and ARC).
## Use Cases
STEMerald-2b can be utilized as a STEM course assistant, providing support in areas such as:
- Answering university-level multiple-choice STEM questions.
- Offering detailed explanations and reasoning for answers.
- Enhancing student engagement and learning efficiency during independent studies.
## Ethical Considerations
While STEMerald-2b aims to provide accurate and helpful responses, it is important to consider potential ethical implications:
- **Over-Reliance:** Students might become overly dependent on the model for answers, potentially affecting their independent learning and problem-solving skills.
- **Accuracy:** Although efforts were made to ensure the truthfulness of responses, there is still a possibility of incorrect answers. Teacher supervision is crucial.
## Limitations
- The model's performance may vary based on the specific context and nature of the questions.
- Quantization reduces memory footprint but may slightly affect accuracy.
## Conclusion
STEMerald-2b offers a promising solution for enhancing STEM education through advanced language model capabilities. By leveraging fine-tuning techniques and comprehensive datasets, it aims to provide accurate and accessible learning support for students.
## How to Use
You can use the model directly with the `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("matsant01/STEMerald-2b")
model = AutoModelForCausalLM.from_pretrained("matsant01/STEMerald-2b")
input_text = "Question: What is the derivative of x^2? \nOptions: A. 4x B. 2*x^2 C. 2x D. 2\nAnswer:"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For the quantized version, use:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_type="nf4"
)
tokenizer = AutoTokenizer.from_pretrained("matsant01/STEMerald-2b-4bit")
model = AutoModelForCausalLM.from_pretrained("matsant01/STEMerald-2b-4bit", quantization_config=quantization_config)
```
## Acknowledgements
We acknowledge the contributions of the EPFL and Stack Exchange communities for their invaluable datasets, and the Hugging Face team for their support and tools that made this project possible.
## Contact
For any questions or feedback, please contact:
- [Antonio Mari](https://github.com/antoniomari) ([email protected])
- [Matteo Santelmo](https://github.com/matsant01) ([email protected])
- [Stefano Viel](https://github.com/stefanoviel) ([email protected])
|
{}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,359 |
XSY/albert-base-v2-imdb-calssification
|
XSY
|
text-classification
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-11-13T09:10:38+00:00
| 638 | 0 |
---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: albert-base-v2-imdb-calssification
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- type: accuracy
value: 0.93612
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-imdb-calssification
label_0: negative
label_1: positive
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1983
- Accuracy: 0.9361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.26 | 1.0 | 1563 | 0.1983 | 0.9361 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-imdb-calssification
label_0: negative
label_1: positive
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1983
- Accuracy: 0.9361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.26 | 1.0 | 1563 | 0.1983 | 0.9361 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "albert-base-v2-imdb-calssification", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.93612, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,360 |
dnzblgn/Customer-Reviews-Classification
|
dnzblgn
|
text-classification
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"en",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-04T18:59:00Z |
2024-12-31T16:04:13+00:00
| 32 | 0 |
---
base_model: distilbert/distilbert-base-uncased
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-classification
widget:
- text: The product arrived on time and was exactly as described.
safetensors: true
---
### Categories:
### label_mapping = {
"shipping_and_delivery": 0,
"customer_service": 1,
"price_and_value": 2,
"quality_and_performance": 3,
"use_and_design": 4,
"other": 5
}
### Model Description
This fine-tuned DistilBERT model is specifically designed for document classification. It classifies customer feedback into six predefined categories: Shipping and Delivery, Customer Service, Price and Value, Quality and Performance, Use and Design, and Other. By leveraging the transformer-based architecture of DistilBERT, the model efficiently handles the syntactic patterns of text, providing accurate document classification based on content, style, and structure.
- **Model type:** DistilBERT (fine-tuned for text classification)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** distilbert/distilbert-base-uncased
## Bias, Risks, and Limitations
While the model achieves high accuracy across the six categories, it has limitations when dealing with overlapping categories or multiple labels within a single document. The model is designed for single-label classification, meaning it can only detect one label per document. If a document contains features of multiple categories (e.g., both 'Quality and Performance' and 'Price and Value'), the model may struggle to correctly identify both and will predict only one category, potentially leading to misclassification.
### Recommendations
Users (both direct and downstream) should be aware of the model's single-label prediction limitation. In cases where a document contains features of multiple categories, additional models or multi-label classification techniques should be considered.
### Training Data
A custom synthetic dataset was created for this task, focusing on the structural features of text. The dataset provides examples from six categories, helping the model learn from both the syntactic organization and the meaning of the text.
### Training Hyperparameters
Model: distilbert/distilbert-base-uncased
Learning Rate: 3e-5
Epochs: 7
Train Batch Size: 16
Gradient Accumulation Steps: 2
Weight Decay: 0.015
Warm-up Ratio: 0.1
### Evaluation
The model was evaluated using a custom dataset representing the same six document categories. Performance was measured based on accuracy, precision, recall, and F1-score across the categories.
### Metrics
Accuracy: 0.947
Precision: 0.948
Recall: 0.948
F1-Score: 0.948
### For access to the synthetic dataset used, please contact: [[email protected]].
## How to Use:
Here is an example of how to use this model for inference:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="dnzblgn/Customer-Reviews-Classification")
result = classifier("The product arrived on time and was exactly as described.")
print(result)
| null |
Non_BioNLP
|
### Categories:
### label_mapping = {
"shipping_and_delivery": 0,
"customer_service": 1,
"price_and_value": 2,
"quality_and_performance": 3,
"use_and_design": 4,
"other": 5
}
### Model Description
This fine-tuned DistilBERT model is specifically designed for document classification. It classifies customer feedback into six predefined categories: Shipping and Delivery, Customer Service, Price and Value, Quality and Performance, Use and Design, and Other. By leveraging the transformer-based architecture of DistilBERT, the model efficiently handles the syntactic patterns of text, providing accurate document classification based on content, style, and structure.
- **Model type:** DistilBERT (fine-tuned for text classification)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** distilbert/distilbert-base-uncased
## Bias, Risks, and Limitations
While the model achieves high accuracy across the six categories, it has limitations when dealing with overlapping categories or multiple labels within a single document. The model is designed for single-label classification, meaning it can only detect one label per document. If a document contains features of multiple categories (e.g., both 'Quality and Performance' and 'Price and Value'), the model may struggle to correctly identify both and will predict only one category, potentially leading to misclassification.
### Recommendations
Users (both direct and downstream) should be aware of the model's single-label prediction limitation. In cases where a document contains features of multiple categories, additional models or multi-label classification techniques should be considered.
### Training Data
A custom synthetic dataset was created for this task, focusing on the structural features of text. The dataset provides examples from six categories, helping the model learn from both the syntactic organization and the meaning of the text.
### Training Hyperparameters
Model: distilbert/distilbert-base-uncased
Learning Rate: 3e-5
Epochs: 7
Train Batch Size: 16
Gradient Accumulation Steps: 2
Weight Decay: 0.015
Warm-up Ratio: 0.1
### Evaluation
The model was evaluated using a custom dataset representing the same six document categories. Performance was measured based on accuracy, precision, recall, and F1-score across the categories.
### Metrics
Accuracy: 0.947
Precision: 0.948
Recall: 0.948
F1-Score: 0.948
### For access to the synthetic dataset used, please contact: [[email protected]].
## How to Use:
Here is an example of how to use this model for inference:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="dnzblgn/Customer-Reviews-Classification")
result = classifier("The product arrived on time and was exactly as described.")
print(result)
|
{"base_model": "distilbert/distilbert-base-uncased", "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "widget": [{"text": "The product arrived on time and was exactly as described."}], "safetensors": true}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,361 |
SEBIS/code_trans_t5_base_api_generation_multitask
|
SEBIS
|
summarization
|
[
"transformers",
"pytorch",
"jax",
"t5",
"feature-extraction",
"summarization",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-06-23T03:59:20+00:00
| 127 | 1 |
---
tags:
- summarization
widget:
- text: parse the uses licence node of this package , if any , and returns the license
definition if theres
---
# CodeTrans model for api recommendation generation
Pretrained model for api recommendation generation using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate api usage for the java programming tasks.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/api%20generation/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 480,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
| null |
Non_BioNLP
|
# CodeTrans model for api recommendation generation
Pretrained model for api recommendation generation using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate api usage for the java programming tasks.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_api_generation_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/api%20generation/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 480,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
{"tags": ["summarization"], "widget": [{"text": "parse the uses licence node of this package , if any , and returns the license definition if theres"}]}
|
task
|
[
"SUMMARIZATION"
] | 45,362 |
enteramine/mt5-base-finetuned-v1
|
enteramine
|
summarization
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-10T18:14:51Z |
2022-08-17T01:01:51+00:00
| 39 | 0 |
---
language:
- fa
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
widget:
- text: 'در روزهای گذشته انتشار تصاویر کودکان و نوجوانانی که از والدینشان جدا شده
و در اردوگاههای موقت در ایالتهای مرزی آمریکا نگهداری میشوند، انتقادات گستردهای
را در داخل و خارج آمریکا از سیاست ضد مهاجرتی ترامپ، برانگیخته است. به گزارش این
اعتراضات به حدی است که حتی "ملانیا ترامپ" بانوی اول آمریکا نیز نتوانست از آن دفاع
کند و این اقدام را محکوم کرد. ماجرا از این قرار است که در یک ماه گذشته دولت آمریکا
با ارایه تفسیر موسعی از قانون مهاجرت به آمریکا بیش از 2200 فرزند را از والدین
مهاجر آنها جدا کرد. بر اساس این تفسیر از قانون ورود غیرقانونی به خاک ایالات متحده
آمریکا جرم محسوب میشود و به همین خاطر افرادی که به صورت غیرقانونی وارد خاک آمریکا
شدهاند برای محاکمه دستگیر میشوند و فرزندانشان از آنها جدا میشوند. این جداسازی
و انتشار تصاویری از صدها کودک و نوجوان و حتی فرزندان خردسال زیر 2 سال که از والدین
خود جدا شده اند صدای بسیاری را در آمریکا و جهان درآورده است. گفتنی است جداسازی
والدین و فرزندان بر مبنای قانون جدیدی انجام نمیشود بلکه دولت ترامپ تلاش دارد
قانونی را که در دورههای گذشته نسبت به آن اغماض میشد، "سفت و سخت" به مورد اجرا
بگذارد؛ تنها تغییری که دولت ترامپ نسبت به دولت اوباما درباره قانون دارد، "تفسیر
موسع" آن از "وقوع جرم" از سوی مهاجران غیرقانونی است، بدین گونه که دولت ترامپ نفس
ورود غیرقانونی به خاک آمریکا را جرم انگاشته و مهاجران را برای محاکمه و اخراج دستگیر
میکند اما در دولتهای گذشته نسبت به این ورود با اغماض بیشتری برخورد میشد و تنها
در صورتی که مهاجرغیرقانونی اقدامی مجرمانه را در خاک آمریکا مرتکب میشد، نسبت به
دستگیری و اخراج فرد مزبور اقدام میشد. دموکراتها این اقدام دولت ترامپ را غیراخلاقی
و "شیطانی" توصیف کردهاند و حتی "لورا بوش" همسر "جورج دبلیو بوش" رییس جمهور اسبق
آمریکا با اعلام انزجار از این اقدام، گفته طاقت دیدن صحنه ضجه و گریه کودکان خردسال
پس از جدایی آنها از والدینشان را ندارد. این اعتراضات در حالی است که ترامپ از
این اقدام دفاع کرده و گفته راهی جز این نیست. او دیروز بار دیگر با دفاع از سیاست
جدید دولت آمریکا برضد مهاجران گفت که او اجازه نخواهد داد آمریکا نیز مثل اروپا
به "اردوگاه پناهجویان" تبدیل شود. در روزهای گذشته در برخی شهرهای آمریکا تظاهراتهایی
بر ضد جداسازی فرزندان و والدین مهاجر برگزار شده است و فعالان اجتماعی و حقوق بشر
در آمریکا به این اقبدام به شدت اعتراض کرده و خوستار توقف اجرای این طرح شدهاند.
"جف سشنز" وزیر دادگستری کابینه ترامپ هم در واکنش به مقایسه این طرح با اقدامات
دوره "آلمان نازی" - در جداسازی والدین از فرزندان در اردوگاههای مرگ یا کار اجباری-
گفته است این طرح به هیچ وجه قابل مقایسه با اقدامات دوره آلمان نازی نیست. پس از
اینکه "مایکل هایدن" رییس سابق سازمان اطلاعات مرکزی آمریکا (سیا) در توییتر خود
این اقدام را با اردوگاههای آلمان نازی مقایسه کرد و به شدت آن را محکوم کرد وزیر
دادگستری کابینه ترامپ دیروز در مصاحبهای با فاکسنیوز با دفاع از اجرای سختگیرانه
قانون ضد مهاجرت غیرقانونی به خاک آمریکا این مقایسه را "بزرگنمایی" دانست چون
به گفته او: در آلمان نازی، جلوی خروج یهودیان از کشور را میگرفتند." کنگره آمریکا
قرار است در هفته جاری درباره یک قانون جدید مهاجرتی به تصمیمگیری برسد.'
- text: 'وزرای خارجه اسراییل و ایران در دومین سالگرد شهادت سردار سپهبد "قاسم سلیمانی"
در توییتر جدال کردند. به گزارش ، در پی توییت اخیر "حسین امیر عبدالهیان" وزیر امور
خارجه جمهوری اسلامی ایران درباره تهدیدات رژیم اسراییل به اقدام نظامی علیه ایران،
"یائیر لاپید" وزیر خارجه اسراییل امروز از طریق توییتر با بازنشر توییت امیرعبدالهیان
به توییت او پاسخ داد. امیر عبدالهیان دیروز در توییتی با اشاره به مصاحبه اخیر لاپید
مبنی بر توانایی غیرقابل تصور اسراییل برای حمله نظامی علیه ایران نوشته بود:" اظهارات
آشفته وزير خارجه رژيم جعلی اسراییل در قبال ملت بزرگ ایران، مصداق این ضرب المثل
معروف ایرانیست که« شتر در خواب بیند پنبه دانه، گهی لپ لپ خورد گه دانه دانه». با
اقتدار و عقلانیت از حقوق، منافع وپیشرفت ملت دفاع می کنیم. صهیونیسم جایی در آینده
جهان ندارد." لاپید روز جمعه در مصاحبه ای گفته بود رژیم تل آویو توانایی هایی برای
اقدام نظامی علیه ایران دارد که در مخیله هیچ کسی نمی گنجد و اگر منافع تل آویو از
جانب ایران تهدید شود، قادر است به صورت یکجانبه علیه ایران اقدام کند. امروز لاپید
با بازنشر توییت امیرعبدالهیان که در واکنش به اظهارات تهدید آمیز اخیر او علیه ایران
نوشته بود در رشته توییتی نوشت:" رژیم افراطی ایران اسراییل را تهدید به نابودی می
کند، اما همچنان در این نبرد شکست خواهد خورد. حکومت شکست خورده ایران این کشور را
از درون ویران می کند. به قول شاعر ایرانی سعدی: « اصل بد نیکو نگردد زانکه بنیادش
بد است. »." او در توییتی دیگر افزود:" ایرانیان باید بدانند که رژیم آنها مسبب زندگی
فلاکت بار آنهاست. دولت اسراییل قوی است و اجازه نخواهد داد که شهروندانش آسیب ببینند."'
model-index:
- name: mt5-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-v1
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the Persian News dataset.
It achieves the following results on the evaluation set:
- Loss: 1.087988
- Rouge1: 1.2887
- Rouge2: 0.1861
- Rougel: 1.2862
- Rougelsum: 1.2818
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.223400 | 1 | 20437| 1.153162 | 1.0624 | 0.1351 | 1.0668 | 1.0740 |
| 1.202900 | 2 | 40874| 1.086163 | 1.1579 | 0.1426 | 1.1724 | 1.1599 |
| 1.173500 | 3 | 61311| 1.087988 | 1.2887 | 0.1861 | 1.2862 | 1.2818 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-v1
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the Persian News dataset.
It achieves the following results on the evaluation set:
- Loss: 1.087988
- Rouge1: 1.2887
- Rouge2: 0.1861
- Rougel: 1.2862
- Rougelsum: 1.2818
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.223400 | 1 | 20437| 1.153162 | 1.0624 | 0.1351 | 1.0668 | 1.0740 |
| 1.202900 | 2 | 40874| 1.086163 | 1.1579 | 0.1426 | 1.1724 | 1.1599 |
| 1.173500 | 3 | 61311| 1.087988 | 1.2887 | 0.1861 | 1.2862 | 1.2818 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
{"language": ["fa"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "widget": [{"text": "در روزهای گذشته انتشار تصاویر کودکان و نوجوانانی که از والدینشان جدا شده و در اردوگاههای موقت در ایالتهای مرزی آمریکا نگهداری میشوند، انتقادات گستردهای را در داخل و خارج آمریکا از سیاست ضد مهاجرتی ترامپ، برانگیخته است. به گزارش این اعتراضات به حدی است که حتی \"ملانیا ترامپ\" بانوی اول آمریکا نیز نتوانست از آن دفاع کند و این اقدام را محکوم کرد. ماجرا از این قرار است که در یک ماه گذشته دولت آمریکا با ارایه تفسیر موسعی از قانون مهاجرت به آمریکا بیش از 2200 فرزند را از والدین مهاجر آنها جدا کرد. بر اساس این تفسیر از قانون ورود غیرقانونی به خاک ایالات متحده آمریکا جرم محسوب میشود و به همین خاطر افرادی که به صورت غیرقانونی وارد خاک آمریکا شدهاند برای محاکمه دستگیر میشوند و فرزندانشان از آنها جدا میشوند. این جداسازی و انتشار تصاویری از صدها کودک و نوجوان و حتی فرزندان خردسال زیر 2 سال که از والدین خود جدا شده اند صدای بسیاری را در آمریکا و جهان درآورده است. گفتنی است جداسازی والدین و فرزندان بر مبنای قانون جدیدی انجام نمیشود بلکه دولت ترامپ تلاش دارد قانونی را که در دورههای گذشته نسبت به آن اغماض میشد، \"سفت و سخت\" به مورد اجرا بگذارد؛ تنها تغییری که دولت ترامپ نسبت به دولت اوباما درباره قانون دارد، \"تفسیر موسع\" آن از \"وقوع جرم\" از سوی مهاجران غیرقانونی است، بدین گونه که دولت ترامپ نفس ورود غیرقانونی به خاک آمریکا را جرم انگاشته و مهاجران را برای محاکمه و اخراج دستگیر میکند اما در دولتهای گذشته نسبت به این ورود با اغماض بیشتری برخورد میشد و تنها در صورتی که مهاجرغیرقانونی اقدامی مجرمانه را در خاک آمریکا مرتکب میشد، نسبت به دستگیری و اخراج فرد مزبور اقدام میشد. دموکراتها این اقدام دولت ترامپ را غیراخلاقی و \"شیطانی\" توصیف کردهاند و حتی \"لورا بوش\" همسر \"جورج دبلیو بوش\" رییس جمهور اسبق آمریکا با اعلام انزجار از این اقدام، گفته طاقت دیدن صحنه ضجه و گریه کودکان خردسال پس از جدایی آنها از والدینشان را ندارد. این اعتراضات در حالی است که ترامپ از این اقدام دفاع کرده و گفته راهی جز این نیست. او دیروز بار دیگر با دفاع از سیاست جدید دولت آمریکا برضد مهاجران گفت که او اجازه نخواهد داد آمریکا نیز مثل اروپا به \"اردوگاه پناهجویان\" تبدیل شود. در روزهای گذشته در برخی شهرهای آمریکا تظاهراتهایی بر ضد جداسازی فرزندان و والدین مهاجر برگزار شده است و فعالان اجتماعی و حقوق بشر در آمریکا به این اقبدام به شدت اعتراض کرده و خوستار توقف اجرای این طرح شدهاند. \"جف سشنز\" وزیر دادگستری کابینه ترامپ هم در واکنش به مقایسه این طرح با اقدامات دوره \"آلمان نازی\" - در جداسازی والدین از فرزندان در اردوگاههای مرگ یا کار اجباری- گفته است این طرح به هیچ وجه قابل مقایسه با اقدامات دوره آلمان نازی نیست. پس از اینکه \"مایکل هایدن\" رییس سابق سازمان اطلاعات مرکزی آمریکا (سیا) در توییتر خود این اقدام را با اردوگاههای آلمان نازی مقایسه کرد و به شدت آن را محکوم کرد وزیر دادگستری کابینه ترامپ دیروز در مصاحبهای با فاکسنیوز با دفاع از اجرای سختگیرانه قانون ضد مهاجرت غیرقانونی به خاک آمریکا این مقایسه را \"بزرگنمایی\" دانست چون به گفته او: در آلمان نازی، جلوی خروج یهودیان از کشور را میگرفتند.\" کنگره آمریکا قرار است در هفته جاری درباره یک قانون جدید مهاجرتی به تصمیمگیری برسد."}, {"text": "وزرای خارجه اسراییل و ایران در دومین سالگرد شهادت سردار سپهبد \"قاسم سلیمانی\" در توییتر جدال کردند. به گزارش ، در پی توییت اخیر \"حسین امیر عبدالهیان\" وزیر امور خارجه جمهوری اسلامی ایران درباره تهدیدات رژیم اسراییل به اقدام نظامی علیه ایران، \"یائیر لاپید\" وزیر خارجه اسراییل امروز از طریق توییتر با بازنشر توییت امیرعبدالهیان به توییت او پاسخ داد. امیر عبدالهیان دیروز در توییتی با اشاره به مصاحبه اخیر لاپید مبنی بر توانایی غیرقابل تصور اسراییل برای حمله نظامی علیه ایران نوشته بود:\" اظهارات آشفته وزير خارجه رژيم جعلی اسراییل در قبال ملت بزرگ ایران، مصداق این ضرب المثل معروف ایرانیست که« شتر در خواب بیند پنبه دانه، گهی لپ لپ خورد گه دانه دانه». با اقتدار و عقلانیت از حقوق، منافع وپیشرفت ملت دفاع می کنیم. صهیونیسم جایی در آینده جهان ندارد.\" لاپید روز جمعه در مصاحبه ای گفته بود رژیم تل آویو توانایی هایی برای اقدام نظامی علیه ایران دارد که در مخیله هیچ کسی نمی گنجد و اگر منافع تل آویو از جانب ایران تهدید شود، قادر است به صورت یکجانبه علیه ایران اقدام کند. امروز لاپید با بازنشر توییت امیرعبدالهیان که در واکنش به اظهارات تهدید آمیز اخیر او علیه ایران نوشته بود در رشته توییتی نوشت:\" رژیم افراطی ایران اسراییل را تهدید به نابودی می کند، اما همچنان در این نبرد شکست خواهد خورد. حکومت شکست خورده ایران این کشور را از درون ویران می کند. به قول شاعر ایرانی سعدی: « اصل بد نیکو نگردد زانکه بنیادش بد است. ».\" او در توییتی دیگر افزود:\" ایرانیان باید بدانند که رژیم آنها مسبب زندگی فلاکت بار آنهاست. دولت اسراییل قوی است و اجازه نخواهد داد که شهروندانش آسیب ببینند.\""}], "model-index": [{"name": "mt5-base-finetuned", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,363 |
jmainformatique/autotrain-1-0
|
jmainformatique
|
text-classification
|
[
"tensorboard",
"safetensors",
"roberta",
"autotrain",
"text-classification",
"base_model:ehsanaghaei/SecureBERT",
"base_model:finetune:ehsanaghaei/SecureBERT",
"region:us"
] | 2024-10-13T12:56:04Z |
2024-10-13T13:02:33+00:00
| 4 | 0 |
---
base_model: ehsanaghaei/SecureBERT
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.40988606214523315
f1: 0.8620689655172413
precision: 0.8333333333333334
recall: 0.8928571428571429
auc: 0.8914835164835164
accuracy: 0.8048780487804879
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.40988606214523315
f1: 0.8620689655172413
precision: 0.8333333333333334
recall: 0.8928571428571429
auc: 0.8914835164835164
accuracy: 0.8048780487804879
|
{"base_model": "ehsanaghaei/SecureBERT", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,364 |
slone/mbart-large-51-myv-mul-v1
|
slone
|
translation
|
[
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"erzya",
"mordovian",
"translation",
"myv",
"ru",
"fi",
"de",
"es",
"en",
"hi",
"zh",
"tr",
"uk",
"fr",
"ar",
"dataset:slone/myv_ru_2022",
"dataset:yhavinga/ccmatrix",
"arxiv:2209.09368",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-09-15T05:55:21Z |
2023-06-13T09:38:10+00:00
| 19 | 2 |
---
datasets:
- slone/myv_ru_2022
- yhavinga/ccmatrix
language:
- myv
- ru
- fi
- de
- es
- en
- hi
- zh
- tr
- uk
- fr
- ar
license: cc-by-sa-4.0
tags:
- erzya
- mordovian
- translation
---
This a model to translate texts to the Erzya language (`myv`, cyrillic script) from 11 other languages: `ru,fi,de,es,en,hi,zh,tr,uk,fr,ar`. See its [demo](https://huggingface.co/spaces/slone/myv-translation-2022-demo)!
It is described in the paper [The first neural machine translation system for the Erzya language](https://arxiv.org/abs/2209.09368).
This model is based on [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50), but with updated vocabulary and checkpoint:
- Added an extra language token `myv_XX` and 19K new BPE tokens for the Erzya language;
- Fine-tuned to translate from Erzya: first to Russian, then to all 11 languages.
The following code can be used to run translation using the model
```Python
from transformers import MBartForConditionalGeneration, MBart50Tokenizer
def fix_tokenizer(tokenizer):
""" Add a new language token to the tokenizer vocabulary (this should be done each time after its initialization) """
old_len = len(tokenizer) - int('myv_XX' in tokenizer.added_tokens_encoder)
tokenizer.lang_code_to_id['myv_XX'] = old_len-1
tokenizer.id_to_lang_code[old_len-1] = 'myv_XX'
tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset
tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)
tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}
if 'myv_XX' not in tokenizer._additional_special_tokens:
tokenizer._additional_special_tokens.append('myv_XX')
tokenizer.added_tokens_encoder = {}
def translate(text, model, tokenizer, src='ru_RU', trg='myv_XX', max_length='auto', num_beams=3, repetition_penalty=5.0, train_mode=False, n_out=None, **kwargs):
tokenizer.src_lang = src
encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
if max_length == 'auto':
max_length = int(32 + 1.5 * encoded.input_ids.shape[1])
if train_mode:
model.train()
else:
model.eval()
generated_tokens = model.generate(
**encoded.to(model.device),
forced_bos_token_id=tokenizer.lang_code_to_id[trg],
max_length=max_length,
num_beams=num_beams,
repetition_penalty=repetition_penalty,
num_return_sequences=n_out or 1,
**kwargs
)
out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
if isinstance(text, str) and n_out is None:
return out[0]
return out
mname = 'slone/mbart-large-51-myv-mul-v1'
model = MBartForConditionalGeneration.from_pretrained(mname)
tokenizer = MBart50Tokenizer.from_pretrained(mname)
fix_tokenizer(tokenizer)
print(translate('Шумбрат, киска!', model, tokenizer, src='myv_XX', trg='ru_RU'))
# Привет, собака! # действительно, "киска" с эрзянского переводится именно так
print(translate('Шумбрат, киска!', model, tokenizer, src='myv_XX', trg='en_XX'))
# Hi, dog!
```
| null |
Non_BioNLP
|
This a model to translate texts to the Erzya language (`myv`, cyrillic script) from 11 other languages: `ru,fi,de,es,en,hi,zh,tr,uk,fr,ar`. See its [demo](https://huggingface.co/spaces/slone/myv-translation-2022-demo)!
It is described in the paper [The first neural machine translation system for the Erzya language](https://arxiv.org/abs/2209.09368).
This model is based on [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50), but with updated vocabulary and checkpoint:
- Added an extra language token `myv_XX` and 19K new BPE tokens for the Erzya language;
- Fine-tuned to translate from Erzya: first to Russian, then to all 11 languages.
The following code can be used to run translation using the model
```Python
from transformers import MBartForConditionalGeneration, MBart50Tokenizer
def fix_tokenizer(tokenizer):
""" Add a new language token to the tokenizer vocabulary (this should be done each time after its initialization) """
old_len = len(tokenizer) - int('myv_XX' in tokenizer.added_tokens_encoder)
tokenizer.lang_code_to_id['myv_XX'] = old_len-1
tokenizer.id_to_lang_code[old_len-1] = 'myv_XX'
tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset
tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)
tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}
if 'myv_XX' not in tokenizer._additional_special_tokens:
tokenizer._additional_special_tokens.append('myv_XX')
tokenizer.added_tokens_encoder = {}
def translate(text, model, tokenizer, src='ru_RU', trg='myv_XX', max_length='auto', num_beams=3, repetition_penalty=5.0, train_mode=False, n_out=None, **kwargs):
tokenizer.src_lang = src
encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
if max_length == 'auto':
max_length = int(32 + 1.5 * encoded.input_ids.shape[1])
if train_mode:
model.train()
else:
model.eval()
generated_tokens = model.generate(
**encoded.to(model.device),
forced_bos_token_id=tokenizer.lang_code_to_id[trg],
max_length=max_length,
num_beams=num_beams,
repetition_penalty=repetition_penalty,
num_return_sequences=n_out or 1,
**kwargs
)
out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
if isinstance(text, str) and n_out is None:
return out[0]
return out
mname = 'slone/mbart-large-51-myv-mul-v1'
model = MBartForConditionalGeneration.from_pretrained(mname)
tokenizer = MBart50Tokenizer.from_pretrained(mname)
fix_tokenizer(tokenizer)
print(translate('Шумбрат, киска!', model, tokenizer, src='myv_XX', trg='ru_RU'))
# Привет, собака! # действительно, "киска" с эрзянского переводится именно так
print(translate('Шумбрат, киска!', model, tokenizer, src='myv_XX', trg='en_XX'))
# Hi, dog!
```
|
{"datasets": ["slone/myv_ru_2022", "yhavinga/ccmatrix"], "language": ["myv", "ru", "fi", "de", "es", "en", "hi", "zh", "tr", "uk", "fr", "ar"], "license": "cc-by-sa-4.0", "tags": ["erzya", "mordovian", "translation"]}
|
task
|
[
"TRANSLATION"
] | 45,365 |
gokulsrinivasagan/bert_uncased_L-2_H-256_A-4_sst2
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:google/bert_uncased_L-2_H-256_A-4",
"base_model:finetune:google/bert_uncased_L-2_H-256_A-4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-04T17:21:05Z |
2024-12-04T17:24:26+00:00
| 5 | 0 |
---
base_model: google/bert_uncased_L-2_H-256_A-4
datasets:
- glue
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_uncased_L-2_H-256_A-4_sst2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- type: accuracy
value: 0.8211009174311926
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-2_H-256_A-4_sst2
This model is a fine-tuned version of [google/bert_uncased_L-2_H-256_A-4](https://huggingface.co/google/bert_uncased_L-2_H-256_A-4) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4074
- Accuracy: 0.8211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4405 | 1.0 | 264 | 0.4195 | 0.8096 |
| 0.2935 | 2.0 | 528 | 0.4074 | 0.8211 |
| 0.2371 | 3.0 | 792 | 0.4241 | 0.8291 |
| 0.2044 | 4.0 | 1056 | 0.4573 | 0.8291 |
| 0.1805 | 5.0 | 1320 | 0.4635 | 0.8268 |
| 0.163 | 6.0 | 1584 | 0.4915 | 0.8326 |
| 0.1486 | 7.0 | 1848 | 0.5187 | 0.8326 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-2_H-256_A-4_sst2
This model is a fine-tuned version of [google/bert_uncased_L-2_H-256_A-4](https://huggingface.co/google/bert_uncased_L-2_H-256_A-4) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4074
- Accuracy: 0.8211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4405 | 1.0 | 264 | 0.4195 | 0.8096 |
| 0.2935 | 2.0 | 528 | 0.4074 | 0.8211 |
| 0.2371 | 3.0 | 792 | 0.4241 | 0.8291 |
| 0.2044 | 4.0 | 1056 | 0.4573 | 0.8291 |
| 0.1805 | 5.0 | 1320 | 0.4635 | 0.8268 |
| 0.163 | 6.0 | 1584 | 0.4915 | 0.8326 |
| 0.1486 | 7.0 | 1848 | 0.5187 | 0.8326 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "google/bert_uncased_L-2_H-256_A-4", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_uncased_L-2_H-256_A-4_sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE SST2", "type": "glue", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.8211009174311926, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,366 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.