Dataset schema:

| Column | Type | Observed range |
|---|---|---|
| id | string | length 6–113 |
| author | string | length 2–36 |
| task_category | string (categorical) | 42 distinct values |
| tags | list | length 1–4.05k |
| created_time | timestamp[ns, tz=UTC] | 2022-03-02 23:29:04 – 2025-04-10 08:38:38 |
| last_modified | string (date) | 2020-05-14 13:13:12 – 2025-04-19 04:15:39 |
| downloads | int64 | 0 – 118M |
| likes | int64 | 0 – 4.86k |
| README | string | length 30 – 1.01M |
| matched_bigbio_names | list (nullable) | length 1–8 |
| is_bionlp | string (categorical) | 3 distinct values |
| model_cards | string | length 0 – 1M |
| metadata | string | length 2 – 698k |
| source | string (categorical) | 2 distinct values |
| matched_task | list (nullable) | length 1–10 |
| `__index_level_0__` | int64 | 0 – 46.9k |
Luluuu/tweet-summarization-llama-2-finetuned
|
Luluuu
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:Salesforce/dialogstudio",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | 2024-03-07T07:37:30Z |
2024-03-18T15:06:27+00:00
| 4 | 0 |
---
base_model: meta-llama/Llama-2-7b-hf
datasets:
- Salesforce/dialogstudio
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: tweet-summarization-llama-2-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet-summarization-llama-2-finetuned
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the Salesforce/dialogstudio dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4742 | 1.0 | 5273 | 1.5088 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
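A minimal usage sketch for loading the adapter with PEFT (my addition, assuming the LoRA adapter weights are published under this repo id):
```python
# Sketch: load the LoRA adapter on top of the Llama-2-7b base model.
# Assumes the adapter lives under "Luluuu/tweet-summarization-llama-2-finetuned".
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Luluuu/tweet-summarization-llama-2-finetuned")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "Summarize the following dialogue:\n..."  # dialogstudio-style input (illustrative)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```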
| null |
Non_BioNLP
|
|
{"base_model": "meta-llama/Llama-2-7b-hf", "datasets": ["Salesforce/dialogstudio"], "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "model-index": [{"name": "tweet-summarization-llama-2-finetuned", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,367 |
mesolitica/t5-tiny-standard-bahasa-cased
|
mesolitica
|
feature-extraction
|
[
"transformers",
"pytorch",
"t5",
"feature-extraction",
"ms",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2022-10-06T15:24:46+00:00
| 25 | 0 |
---
language: ms
---
# t5-tiny-standard-bahasa-cased
Pretrained T5 tiny standard language model for Malay.
## Pretraining Corpus
The `t5-tiny-standard-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
4. Translated QA Natural.
5. Text Similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive Summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.
Preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using Google's T5 repository, https://github.com/google-research/text-to-text-transfer-transformer, on a v3-8 TPU.
- All steps can be reproduced from https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then initializing it like this:
```python
from transformers import T5Tokenizer, T5Model
model = T5Model.from_pretrained('malay-huggingface/t5-tiny-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-tiny-bahasa-cased')
```
## Example using T5ForConditionalGeneration
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-tiny-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('malay-huggingface/t5-tiny-bahasa-cased')
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors = 'pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
The output is:
```
'Mahathir Mohamad'
```
## Supported prefix
1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title.
4. `parafrasa: {string}`, for abstractive paraphrase.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation (see the sketch after this list).
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, semantic similarity.
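For example, the EN-MS translation prefix works the same way as the QA example above (a sketch that mirrors the card's own loading code):
```python
# Sketch: EN-MS translation via the "terjemah Inggeris ke Melayu:" prefix.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-tiny-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('malay-huggingface/t5-tiny-bahasa-cased')

input_ids = tokenizer.encode('terjemah Inggeris ke Melayu: How are you today?', return_tensors='pt')
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```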
| null |
Non_BioNLP
|
|
{"language": "ms"}
|
task
|
[
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION"
] | 45,368 |
Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_5bpw_exl2
|
Zoyd
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"merges",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:quantized:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | 2024-05-26T09:19:29Z |
2024-05-26T10:12:58+00:00
| 14 | 0 |
---
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
datasets:
- teknium/OpenHermes-2.5
language:
- en
license: apache-2.0
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- merges
widget:
- example_title: Hermes 2 Pro Llama-3 Instruct Merge
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
results: []
---
**Exllamav2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-4_0bpw_exl2)**</center> | <center>4726 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-4_25bpw_exl2)**</center> | <center>4933 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-5_0bpw_exl2)**</center> | <center>5558 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-6_0bpw_exl2)**</center> | <center>6497 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-6_5bpw_exl2)**</center> | <center>6910 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-8_0bpw_exl2)**</center> | <center>8049 MB</center> | <center>8</center> |
# Hermes-2 Θ Llama-3 8B

## Model Description
Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, combining the best of both models.
## Example Outputs
### Create New Mythos:

### Chat with a Meta-Cognitive Entity

### Ask for a structured JSON output:

# Prompt Format
Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This format is more complex than alpaca or sharegpt: special tokens are added to denote the beginning and end of each turn, along with a role for each turn.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
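Concretely (reusing `messages` and `tokenizer` from the snippet above):
```python
# Appends "<|im_start|>assistant\n" so generation starts an assistant turn.
gen_input = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
model.generate(**gen_input)
```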
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these from real Python functions.
You should use the system role with this message, followed by a function signature JSON, as this example shows:
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
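For illustration, a minimal parsing sketch (a regex stand-in of my own; the Hermes-Function-Calling repo above is the supported implementation):
```python
# Sketch only: extract the first <tool_call> payload from the model's output.
# Assumes the payload is valid JSON, as in the example above.
import json
import re

def parse_tool_call(text):
    match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL)
    return json.loads(match.group(1)) if match else None

output = '<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call>'
print(parse_tool_call(output))  # {'arguments': {'symbol': 'TSLA'}, 'name': 'get_stock_fundamentals'}
```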
Once you parse the tool call, call the API, get the returned values, and pass them back in under a new role, `tool`, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs; it should respond with **only** a JSON object that conforms to a specific JSON schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow that JSON format in its response; all you have to do is give a typical user prompt, and it will respond in JSON.
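As a rough illustration of where {schema} comes from (a sketch assuming pydantic v2; the repo's `jsonmode.py` is the supported path):
```python
# Sketch: build the JSON-mode system prompt from a (hypothetical) pydantic model.
import json
from pydantic import BaseModel

class StockSummary(BaseModel):  # illustrative schema, not from the Hermes repo
    symbol: str
    pe_ratio: float

schema = json.dumps(StockSummary.model_json_schema())
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
```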
# Benchmarks

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5529|± |0.0145|
| | |acc_norm|0.5870|± |0.0144|
|arc_easy | 0|acc |0.8371|± |0.0076|
| | |acc_norm|0.8144|± |0.0080|
|boolq | 1|acc |0.8599|± |0.0061|
|hellaswag | 0|acc |0.6133|± |0.0049|
| | |acc_norm|0.7989|± |0.0040|
|openbookqa | 0|acc |0.3940|± |0.0219|
| | |acc_norm|0.4680|± |0.0223|
|piqa | 0|acc |0.8063|± |0.0092|
| | |acc_norm|0.8156|± |0.0090|
|winogrande | 0|acc |0.7372|± |0.0124|
```
Average: 72.59
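(The average appears to be computed over `acc_norm` where reported and `acc` otherwise: (0.5870 + 0.8144 + 0.8599 + 0.7989 + 0.4680 + 0.8156 + 0.7372) / 7 ≈ 0.7259. The AGIEval and BigBench averages below check out under the same convention.)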
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2441|± |0.0270|
| | |acc_norm|0.2441|± |0.0270|
|agieval_logiqa_en | 0|acc |0.3687|± |0.0189|
| | |acc_norm|0.3840|± |0.0191|
|agieval_lsat_ar | 0|acc |0.2304|± |0.0278|
| | |acc_norm|0.2174|± |0.0273|
|agieval_lsat_lr | 0|acc |0.5471|± |0.0221|
| | |acc_norm|0.5373|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.7670|± |0.0295|
| | |acc_norm|0.7379|± |0.0307|
|agieval_sat_en_without_passage| 0|acc |0.4417|± |0.0347|
| | |acc_norm|0.4223|± |0.0345|
|agieval_sat_math | 0|acc |0.4000|± |0.0331|
| | |acc_norm|0.3455|± |0.0321|
```
Average: 44.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6000|± |0.0356|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6585|± |0.0247|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2340|± |0.0224|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2980|± |0.0205|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2057|± |0.0153|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5367|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4040|± |0.0220|
|bigbench_navigate | 0|multiple_choice_grade|0.4970|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7075|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4821|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2295|± |0.0133|
|bigbench_snarks | 0|multiple_choice_grade|0.6906|± |0.0345|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5375|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.6270|± |0.0153|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2216|± |0.0118|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1594|± |0.0088|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5367|± |0.0288|
```
Average: 44.13
**IFEval**: 72.64
**MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875
# Inference Code
Here is example code using Hugging Face Transformers to run inference with the model (note: in 4-bit, it will require around 5 GB of VRAM).
Note: To use function calling, see the GitHub repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Θ. It is a GUI application that utilizes GGUF models with a llama.cpp backend, provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. Note that it does not support function calling; for that, use our GitHub repo.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Theta-Llama-3-8B,
 url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B},
title={Hermes-2-Theta-Llama-3-8B},
author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
| null |
Non_BioNLP
|
|
{"base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B", "datasets": ["teknium/OpenHermes-2.5"], "language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges"], "widget": [{"example_title": "Hermes 2 Pro Llama-3 Instruct Merge", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "model-index": [{"name": "Hermes-2-Pro-Llama-3-Instruct-8B-Merge", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,369 |
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qqp
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-29T23:03:33Z |
2023-01-30T00:44:46+00:00
| 124 | 0 |
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qqp
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- type: accuracy
value: 0.663195646796933
name: Accuracy
- type: f1
value: 0.16465247530826327
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qqp
This model is a fine-tuned version of [gokuls/distilbert_sa_pre-training-complete](https://huggingface.co/gokuls/distilbert_sa_pre-training-complete) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5449
- Accuracy: 0.6632
- F1: 0.1647
- Combined Score: 0.4139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6004 | 1.0 | 1422 | 0.5643 | 0.6623 | 0.1630 | 0.4126 |
| 0.5393 | 2.0 | 2844 | 0.5498 | 0.6538 | 0.1199 | 0.3869 |
| 0.5157 | 3.0 | 4266 | 0.5449 | 0.6632 | 0.1647 | 0.4139 |
| 0.5007 | 4.0 | 5688 | 0.5512 | 0.6848 | 0.2663 | 0.4755 |
| 0.4914 | 5.0 | 7110 | 0.5501 | 0.6665 | 0.1817 | 0.4241 |
| 0.4847 | 6.0 | 8532 | 0.5475 | 0.6816 | 0.2517 | 0.4667 |
| 0.4803 | 7.0 | 9954 | 0.5478 | 0.6768 | 0.2301 | 0.4535 |
| 0.4768 | 8.0 | 11376 | 0.5488 | 0.6839 | 0.2610 | 0.4724 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
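A minimal usage sketch (my assumption of the intended usage, not from the original card) for scoring a QQP-style question pair:
```python
# Sketch: score a question pair for duplicate detection (GLUE QQP).
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="gokuls/distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qqp",
)
print(clf({"text": "How do I learn Python?", "text_pair": "What is the best way to learn Python?"}))
```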
| null |
Non_BioNLP
|
|
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qqp", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE QQP", "type": "glue", "config": "qqp", "split": "validation", "args": "qqp"}, "metrics": [{"type": "accuracy", "value": 0.663195646796933, "name": "Accuracy"}, {"type": "f1", "value": 0.16465247530826327, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,370 |
iut62elec/llama3_2_1B_FT_new
|
iut62elec
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-10-03T18:53:04Z |
2024-10-03T18:57:12+00:00
| 15 | 0 |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
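For readers unfamiliar with GQA, the sketch below illustrates the core idea: several query heads share each key/value head, so the KV cache stays narrow at inference time. This is illustrative only, not the model's actual implementation; the function and parameter names are ours.
```python
import torch

def grouped_query_attention(q, k, v, n_kv_heads):
    """Illustrative GQA (no masking or caching).

    q: (batch, n_q_heads, seq, head_dim)
    k, v: (batch, n_kv_heads, seq, head_dim), with n_q_heads % n_kv_heads == 0
    """
    n_q_heads, head_dim = q.shape[1], q.shape[-1]
    group_size = n_q_heads // n_kv_heads
    # Expand each KV head so that `group_size` query heads attend to it;
    # only n_kv_heads worth of K/V need to be stored, which is the saving.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / head_dim**0.5
    return torch.softmax(scores, dim=-1) @ v
```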
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction-tuned, text-only models are intended for assistant-like chat and agentic applications such as knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## How to use
This repository contains two versions of Llama-3.2-1B-Instruct, for use with `transformers` and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "meta-llama/Llama-3.2-1B-Instruct"
# Load in bfloat16 and let Accelerate handle device placement automatically
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
# The pipeline returns the whole conversation; the last entry is the assistant's reply
print(outputs[0]["generated_text"][-1])
```
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
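For completeness, here is a minimal sketch of the Auto-classes route mentioned above, using `apply_chat_template` and `generate()`; sampling parameters are omitted, so library defaults apply.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
# Render the conversation with the model's chat template and tokenize it
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens (the assistant's reply)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```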
### Use with `llama`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
```shell
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct
```
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative total of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Total | 830k | 86k | | 240 | 0 |
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
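As a rough sanity check of the table above, the arithmetic below reproduces the 1B model's location-based figure. The PUE and grid carbon intensity used here are illustrative assumptions, not values published in this card.
```python
gpu_hours = 370_000        # Llama 3.2 1B training time, from the table above
tdp_kw = 0.7               # H100-80GB peak power (700 W)
pue = 1.1                  # assumed power usage effectiveness (not published here)
kg_co2_per_kwh = 0.375     # assumed grid carbon intensity (not published here)

energy_kwh = gpu_hours * tdp_kw * pue
emissions_tons = energy_kwh * kg_co2_per_kwh / 1000
print(f"{energy_kwh / 1000:.0f} MWh, ~{emissions_tons:.0f} tCO2eq")  # ~285 MWh, ~107 tCO2eq
```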
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
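Meta's exact distillation recipe is not published; as a generic illustration of the token-level logit distillation described above, a loss of the following shape is commonly used. The temperature and function name here are hypothetical.
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions at each token position (logits: batch, seq, vocab)."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```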
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Benchmarks - English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 |
| Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 |
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 |
| | | Spanish | 41.5 | 55.1 | 62.5 |
| | | Italian | 39.8 | 53.8 | 61.6 |
| | | German | 39.2 | 53.3 | 60.6 |
| | | French | 40.5 | 54.6 | 62.3 |
| | | Hindi | 33.5 | 43.3 | 50.9 |
| | | Thai | 34.7 | 44.5 | 50.3 |
## Responsibility & Safety
As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta’s Llama models have been responsibly deployed can be found on our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, and to offer developers a readily available, safe, and powerful model for various applications, reducing the workload for developers deploying safe AI systems. We implemented the same set of safety mitigations as in Llama 3; you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as to mitigating safety and security risks inherent to the system and to any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard, and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default, so developers can benefit from system-level safety out of the box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care in measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in child safety risks and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking into account market-specific nuances and experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI, and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open-sourced for the community to use and are widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity, and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences, and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
task
|
[
"SUMMARIZATION"
] | 45,371 |
GOMMM/t5-small-custom
|
GOMMM
| null |
[
"safetensors",
"t5",
"region:us"
] | 2024-09-19T15:58:11Z |
2024-09-19T15:58:19+00:00
| 5 | 0 |
---
{}
---
# Model Card for t5_small Summarization Model
## Model Details
This model is a fine-tuned version of t5_small for abstractive summarization tasks.
## Training Data
The model was trained on the CNN/DailyMail dataset.
## Training Procedure
- **Epochs** : 1
- **Batch Size** : 4
- **Learning Rate** : 2e-5
- **Warmup Steps** : 500
- **Weight Decay** : 0.01
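For orientation, here is a minimal sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments` in `transformers` (the output directory is an assumed name; any setting not listed above is left at its default):
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the training configuration listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-custom",  # assumed name, not confirmed by the card
    num_train_epochs=1,
    per_device_train_batch_size=4,
    learning_rate=2e-5,
    warmup_steps=500,
    weight_decay=0.01,
)
```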
## How to Use
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('google-t5/t5-small')
model = AutoModelForSeq2SeqLM.from_pretrained('google-t5/t5-small')

input_text = (
    "(CNN)The Palestinian Authority officially became the 123rd member of the International Criminal Court "
    "on Wednesday, a step that gives the court jurisdiction over alleged crimes in Palestinian territories. "
    "The formal accession was marked with a ceremony at The Hague, in the Netherlands, where the court is based. "
    "The Palestinians signed the ICC's founding Rome Statute in January, when they also accepted its jurisdiction "
    "over alleged crimes committed \"in the occupied Palestinian territory, including East Jerusalem, "
    "since June 13, 2014.\""
)

# T5 expects a task prefix for summarization.
inputs = tokenizer.encode("summarize: " + input_text, return_tensors='pt')

# Split long inputs into chunks the model can handle.
max_chunk_length = 512
chunks = []
for i in range(0, inputs.size(1), max_chunk_length):
    chunks.append(inputs[:, i:i + max_chunk_length])

# Summarize each chunk and concatenate the partial summaries.
summary = ""
for chunk in chunks:
    output_ids = model.generate(chunk,
                                max_new_tokens=150,
                                min_length=10,
                                num_beams=3,
                                do_sample=True,
                                top_p=0.8)
    summary += tokenizer.decode(output_ids[0], skip_special_tokens=True) + " "
print(summary)
```
## Evaluation
- **Rouge1**: 0.33
- **Rouge2**: 0.30
- **RougeL**: 0.33
- **BLEU1**: 60.00
- **BLEU2**: 55.56
- **BLEU4**: 42.86
## Limitations
The model may generate biased or inappropriate content due to the nature of the training data.
It is recommended to use the model with caution and apply necessary filters.
## Ethical Considerations
- **Bias** : The model may inherit biases present in the training data.
- **Misuse** : The model can be misused to generate misleading or harmful content.
| null |
Non_BioNLP
|
# Model Card for t5_small Summarization Model
## Model Details
This model is a fine-tuned version of t5_small for abstractive summarization tasks.
## Training Data
The model was trained on the CNN/DailyMail dataset.
## Training Procedure
- **Epochs** : 1
- **Batch Size** : 4
- **Learning Rate** : 2e-5
- **Warmup Steps** : 500
- **Weight Decay** : 0.01
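For orientation, here is a minimal sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments` in `transformers` (the output directory is an assumed name; any setting not listed above is left at its default):
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the training configuration listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-custom",  # assumed name, not confirmed by the card
    num_train_epochs=1,
    per_device_train_batch_size=4,
    learning_rate=2e-5,
    warmup_steps=500,
    weight_decay=0.01,
)
```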
## How to Use
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('google-t5/t5-small')
model = AutoModelForSeq2SeqLM.from_pretrained('google-t5/t5-small')

input_text = (
    "(CNN)The Palestinian Authority officially became the 123rd member of the International Criminal Court "
    "on Wednesday, a step that gives the court jurisdiction over alleged crimes in Palestinian territories. "
    "The formal accession was marked with a ceremony at The Hague, in the Netherlands, where the court is based. "
    "The Palestinians signed the ICC's founding Rome Statute in January, when they also accepted its jurisdiction "
    "over alleged crimes committed \"in the occupied Palestinian territory, including East Jerusalem, "
    "since June 13, 2014.\""
)

# T5 expects a task prefix for summarization.
inputs = tokenizer.encode("summarize: " + input_text, return_tensors='pt')

# Split long inputs into chunks the model can handle.
max_chunk_length = 512
chunks = []
for i in range(0, inputs.size(1), max_chunk_length):
    chunks.append(inputs[:, i:i + max_chunk_length])

# Summarize each chunk and concatenate the partial summaries.
summary = ""
for chunk in chunks:
    output_ids = model.generate(chunk,
                                max_new_tokens=150,
                                min_length=10,
                                num_beams=3,
                                do_sample=True,
                                top_p=0.8)
    summary += tokenizer.decode(output_ids[0], skip_special_tokens=True) + " "
print(summary)
```
## Evaluation
- **Rouge1**: 0.33
- **Rouge2**: 0.30
- **RougeL**: 0.33
- **BLEU1**: 60.00
- **BLEU2**: 55.56
- **BLEU4**: 42.86
## Limitations
The model may generate biased or inappropriate content due to the nature of the training data.
It is recommended to use the model with caution and apply necessary filters.
## Ethical Considerations
- **Bias** : The model may inherit biases present in the training data.
- **Misuse** : The model can be misused to generate misleading or harmful content.
|
{}
|
task
|
[
"SUMMARIZATION"
] | 45,372 |
gokuls/hbertv1-small-wt-48-emotion
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:gokuls/model_v1_complete_training_wt_init_48_small",
"base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_small",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-21T17:36:58Z |
2023-07-21T17:40:32+00:00
| 8 | 0 |
---
base_model: gokuls/model_v1_complete_training_wt_init_48_small
datasets:
- emotion
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: hbertv1-small-wt-48-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.9375
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-small-wt-48-emotion
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_small](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_small) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1738
- Accuracy: 0.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.653 | 1.0 | 250 | 0.2924 | 0.8935 |
| 0.2315 | 2.0 | 500 | 0.2199 | 0.9175 |
| 0.1722 | 3.0 | 750 | 0.1918 | 0.9235 |
| 0.1263 | 4.0 | 1000 | 0.1738 | 0.9375 |
| 0.1087 | 5.0 | 1250 | 0.1898 | 0.9295 |
| 0.0889 | 6.0 | 1500 | 0.1812 | 0.932 |
| 0.0756 | 7.0 | 1750 | 0.1978 | 0.9315 |
| 0.0652 | 8.0 | 2000 | 0.2070 | 0.931 |
| 0.0506 | 9.0 | 2250 | 0.2277 | 0.9345 |
| 0.0398 | 10.0 | 2500 | 0.2356 | 0.9335 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
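## Usage
A minimal inference sketch (this assumes the custom `hybridbert` architecture loads through the `transformers` pipeline with `trust_remote_code=True`; adjust if the checkpoint registers differently):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier; trust_remote_code is an assumption,
# since "hybridbert" is not a stock transformers architecture.
classifier = pipeline(
    "text-classification",
    model="gokuls/hbertv1-small-wt-48-emotion",
    trust_remote_code=True,
)
print(classifier("i feel like i am still looking at a blank canvas"))
```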
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-small-wt-48-emotion
This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_small](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_small) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1738
- Accuracy: 0.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.653 | 1.0 | 250 | 0.2924 | 0.8935 |
| 0.2315 | 2.0 | 500 | 0.2199 | 0.9175 |
| 0.1722 | 3.0 | 750 | 0.1918 | 0.9235 |
| 0.1263 | 4.0 | 1000 | 0.1738 | 0.9375 |
| 0.1087 | 5.0 | 1250 | 0.1898 | 0.9295 |
| 0.0889 | 6.0 | 1500 | 0.1812 | 0.932 |
| 0.0756 | 7.0 | 1750 | 0.1978 | 0.9315 |
| 0.0652 | 8.0 | 2000 | 0.2070 | 0.931 |
| 0.0506 | 9.0 | 2250 | 0.2277 | 0.9345 |
| 0.0398 | 10.0 | 2500 | 0.2356 | 0.9335 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
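## Usage
A minimal inference sketch (this assumes the custom `hybridbert` architecture loads through the `transformers` pipeline with `trust_remote_code=True`; adjust if the checkpoint registers differently):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier; trust_remote_code is an assumption,
# since "hybridbert" is not a stock transformers architecture.
classifier = pipeline(
    "text-classification",
    model="gokuls/hbertv1-small-wt-48-emotion",
    trust_remote_code=True,
)
print(classifier("i feel like i am still looking at a blank canvas"))
```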
|
{"base_model": "gokuls/model_v1_complete_training_wt_init_48_small", "datasets": ["emotion"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hbertv1-small-wt-48-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9375, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,373 |
unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit
|
unsloth
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"conversational",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 2025-02-02T04:38:29Z |
2025-02-15T10:26:56+00:00
| 207,162 | 2 |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
language:
- en
library_name: transformers
license: llama3.1
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/llama-31-collection-6753dca76f47d9ce1696495f">our collection</a> for versions of Llama 3.1 including GGUF & 4-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Unsloth's <a href="https://unsloth.ai/blog/dynamic-4bit">Dynamic 4-bit Quants</a> is selectively quantized, greatly improving accuracy over standard 4-bit.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">Fine-tune LLMs 2-5x faster with 70% less memory via Unsloth!</h1>
</div>
We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb
# unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
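The card itself does not show how to load this checkpoint, so here is a minimal loading sketch with Unsloth (`max_seq_length` is an illustrative assumption; Llama 3.1 supports far longer contexts):
```python
from unsloth import FastLanguageModel

# Load the dynamic 4-bit quantized checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
    max_seq_length=2048,  # illustrative value, not a model limit
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```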
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
| null |
Non_BioNLP
|
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/llama-31-collection-6753dca76f47d9ce1696495f">our collection</a> for versions of Llama 3.1 including GGUF & 4-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Unsloth's <a href="https://unsloth.ai/blog/dynamic-4bit">Dynamic 4-bit Quants</a> is selectively quantized, greatly improving accuracy over standard 4-bit.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">Fine-tune LLMs 2-5x faster with 70% less memory via Unsloth!</h1>
</div>
We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb
# unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
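The card itself does not show how to load this checkpoint, so here is a minimal loading sketch with Unsloth (`max_seq_length` is an illustrative assumption; Llama 3.1 supports far longer contexts):
```python
from unsloth import FastLanguageModel

# Load the dynamic 4-bit quantized checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
    max_seq_length=2048,  # illustrative value, not a model limit
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```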
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
{"base_model": "meta-llama/Llama-3.1-8B-Instruct", "language": ["en"], "library_name": "transformers", "license": "llama3.1", "tags": ["llama-3", "llama", "meta", "facebook", "unsloth", "transformers"]}
|
task
|
[
"SUMMARIZATION"
] | 45,374 |
d0rj/e5-base-en-ru
|
d0rj
|
sentence-similarity
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"mteb",
"retrieval",
"retriever",
"pruned",
"e5",
"sentence-transformers",
"sentence-similarity",
"en",
"ru",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-09-21T08:46:33Z |
2024-04-12T15:07:36+00:00
| 381 | 8 |
---
language:
- en
- ru
library_name: transformers
license: mit
metrics:
- accuracy
- f1
- recall
pipeline_tag: sentence-similarity
tags:
- mteb
- retrieval
- retriever
- pruned
- e5
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# E5-base-en-ru
## Model info
This is a vocabulary-pruned version of [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base).
It uses only Russian and English tokens.
### Size
| | intfloat/multilingual-e5-base | d0rj/e5-base-en-ru |
| --- | --- | --- |
| Model size (MB) | 1060.65 | 504.89 |
| Params (count) | 278,043,648 | 132,354,048 |
| Word embedding params (count) | 192,001,536 | 46,311,936 |
### Performance
Performance on SberQuAD dev benchmark.
| Metric on SberQuAD (4122 questions) | intfloat/multilingual-e5-base | d0rj/e5-base-en-ru |
| --- | --- | --- |
| recall@3 | | |
| map@3 | | |
| mrr@3 | | |
| recall@5 | | |
| map@5 | | |
| mrr@5 | | |
| recall@10 | | |
| map@10 | | |
| mrr@10 | | |
## Usage
- Use **dot product** distance for retrieval.
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
### transformers
#### Direct usage
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import XLMRobertaTokenizer, XLMRobertaModel
def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
input_texts = [
'query: How does a corporate website differ from a business card website?',
'query: Где был создан первый троллейбус?',
'passage: The first trolleybus was created in Germany by engineer Werner von Siemens, probably influenced by the idea of his brother, Dr. Wilhelm Siemens, who lived in England, expressed on May 18, 1881 at the twenty-second meeting of the Royal Scientific Society. The electrical circuit was carried out by an eight-wheeled cart (Kontaktwagen) rolling along two parallel contact wires. The wires were located quite close to each other, and in strong winds they often overlapped, which led to short circuits. An experimental trolleybus line with a length of 540 m (591 yards), opened by Siemens & Halske in the Berlin suburb of Halensee, operated from April 29 to June 13, 1882.',
'passage: Корпоративный сайт — содержит полную информацию о компании-владельце, услугах/продукции, событиях в жизни компании. Отличается от сайта-визитки и представительского сайта полнотой представленной информации, зачастую содержит различные функциональные инструменты для работы с контентом (поиск и фильтры, календари событий, фотогалереи, корпоративные блоги, форумы). Может быть интегрирован с внутренними информационными системами компании-владельца (КИС, CRM, бухгалтерскими системами). Может содержать закрытые разделы для тех или иных групп пользователей — сотрудников, дилеров, контрагентов и пр.',
]
tokenizer = XLMRobertaTokenizer.from_pretrained('d0rj/e5-base-en-ru', use_cache=False)
model = XLMRobertaModel.from_pretrained('d0rj/e5-base-en-ru', use_cache=False)
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[68.59542846679688, 81.75910949707031], [80.36100769042969, 64.77748107910156]]
```
#### Pipeline
```python
from transformers import pipeline
pipe = pipeline('feature-extraction', model='d0rj/e5-base-en-ru')
embeddings = pipe(input_texts, return_tensors=True)
embeddings[0].size()
# torch.Size([1, 17, 768])
```
### sentence-transformers
```python
from sentence_transformers import SentenceTransformer
sentences = [
'query: Что такое круглые тензоры?',
'passage: Abstract: we introduce a novel method for compressing round tensors based on their inherent radial symmetry. We start by generalising PCA and eigen decomposition on round tensors...',
]
model = SentenceTransformer('d0rj/e5-base-en-ru')
embeddings = model.encode(sentences, convert_to_tensor=True)
embeddings.size()
# torch.Size([2, 768])
```
| null |
Non_BioNLP
|
# E5-base-en-ru
## Model info
This is a vocabulary-pruned version of [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base).
It uses only Russian and English tokens.
### Size
| | intfloat/multilingual-e5-base | d0rj/e5-base-en-ru |
| --- | --- | --- |
| Model size (MB) | 1060.65 | 504.89 |
| Params (count) | 278,043,648 | 132,354,048 |
| Word embedding params (count) | 192,001,536 | 46,311,936 |
### Performance
Performance on SberQuAD dev benchmark.
| Metric on SberQuAD (4122 questions) | intfloat/multilingual-e5-base | d0rj/e5-base-en-ru |
| --- | --- | --- |
| recall@3 | | |
| map@3 | | |
| mrr@3 | | |
| recall@5 | | |
| map@5 | | |
| mrr@5 | | |
| recall@10 | | |
| map@10 | | |
| mrr@10 | | |
## Usage
- Use **dot product** distance for retrieval.
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
### transformers
#### Direct usage
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import XLMRobertaTokenizer, XLMRobertaModel
def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
input_texts = [
'query: How does a corporate website differ from a business card website?',
'query: Где был создан первый троллейбус?',
'passage: The first trolleybus was created in Germany by engineer Werner von Siemens, probably influenced by the idea of his brother, Dr. Wilhelm Siemens, who lived in England, expressed on May 18, 1881 at the twenty-second meeting of the Royal Scientific Society. The electrical circuit was carried out by an eight-wheeled cart (Kontaktwagen) rolling along two parallel contact wires. The wires were located quite close to each other, and in strong winds they often overlapped, which led to short circuits. An experimental trolleybus line with a length of 540 m (591 yards), opened by Siemens & Halske in the Berlin suburb of Halensee, operated from April 29 to June 13, 1882.',
'passage: Корпоративный сайт — содержит полную информацию о компании-владельце, услугах/продукции, событиях в жизни компании. Отличается от сайта-визитки и представительского сайта полнотой представленной информации, зачастую содержит различные функциональные инструменты для работы с контентом (поиск и фильтры, календари событий, фотогалереи, корпоративные блоги, форумы). Может быть интегрирован с внутренними информационными системами компании-владельца (КИС, CRM, бухгалтерскими системами). Может содержать закрытые разделы для тех или иных групп пользователей — сотрудников, дилеров, контрагентов и пр.',
]
tokenizer = XLMRobertaTokenizer.from_pretrained('d0rj/e5-base-en-ru', use_cache=False)
model = XLMRobertaModel.from_pretrained('d0rj/e5-base-en-ru', use_cache=False)
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[68.59542846679688, 81.75910949707031], [80.36100769042969, 64.77748107910156]]
```
#### Pipeline
```python
from transformers import pipeline
pipe = pipeline('feature-extraction', model='d0rj/e5-base-en-ru')
embeddings = pipe(input_texts, return_tensors=True)
embeddings[0].size()
# torch.Size([1, 17, 768])
```
### sentence-transformers
```python
from sentence_transformers import SentenceTransformer
sentences = [
'query: Что такое круглые тензоры?',
'passage: Abstract: we introduce a novel method for compressing round tensors based on their inherent radial symmetry. We start by generalising PCA and eigen decomposition on round tensors...',
]
model = SentenceTransformer('d0rj/e5-base-en-ru')
embeddings = model.encode(sentences, convert_to_tensor=True)
embeddings.size()
# torch.Size([2, 768])
```
|
{"language": ["en", "ru"], "library_name": "transformers", "license": "mit", "metrics": ["accuracy", "f1", "recall"], "pipeline_tag": "sentence-similarity", "tags": ["mteb", "retrieval", "retriever", "pruned", "e5", "sentence-transformers", "feature-extraction", "sentence-similarity"]}
|
task
|
[
"SEMANTIC_SIMILARITY"
] | 45,375 |
gaudi/opus-mt-en-he-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-18T14:59:47Z |
2024-10-19T00:14:15+00:00
| 9 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-he)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-he).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-he --output_dir ./ctranslate2/opus-mt-en-he-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
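As noted under the internal benchmarks, changing the quantization flag shifts the balance between speed/memory and quality; a hypothetical int8 variant of the same command (the output directory name is an assumption) would be:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-he \
  --output_dir ./ctranslate2/opus-mt-en-he-ctranslate2-int8 \
  --force --quantization int8
```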
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-he-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-he-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-he-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-he) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-he)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-he).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-he --output_dir ./ctranslate2/opus-mt-en-he-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
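As noted under the internal benchmarks, changing the quantization flag shifts the balance between speed/memory and quality; a hypothetical int8 variant of the same command (the output directory name is an assumption) would be:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-he \
  --output_dir ./ctranslate2/opus-mt-en-he-ctranslate2-int8 \
  --force --quantization int8
```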
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-he-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-he-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-he-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-he) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 45,376 |
Tritkoman/EnglishtoOldEnglishV4
|
Tritkoman
|
translation
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:Tritkoman/autotrain-data-oldenglish4",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-23T10:44:32Z |
2023-02-23T11:10:20+00:00
| 14 | 0 |
---
datasets:
- Tritkoman/autotrain-data-oldenglish4
language:
- unk
- unk
tags:
- autotrain
- translation
co2_eq_emissions:
emissions: 29.249758702505805
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 3681498294
- CO2 Emissions (in grams): 29.2498
## Validation Metrics
- Loss: 3.007
- SacreBLEU: 6.124
- Gen len: 19.114
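Since the card ships without a usage example, here is a minimal inference sketch with `transformers` (the example sentence and generation settings are illustrative assumptions; AutoTrain mT5 models may also expect a task prefix):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Tritkoman/EnglishtoOldEnglishV4")
model = AutoModelForSeq2SeqLM.from_pretrained("Tritkoman/EnglishtoOldEnglishV4")

# Translate an English sentence into Old English.
inputs = tokenizer("The king rode to the sea.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```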
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 3681498294
- CO2 Emissions (in grams): 29.2498
## Validation Metrics
- Loss: 3.007
- SacreBLEU: 6.124
- Gen len: 19.114
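Since the card ships without a usage example, here is a minimal inference sketch with `transformers` (the example sentence and generation settings are illustrative assumptions; AutoTrain mT5 models may also expect a task prefix):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Tritkoman/EnglishtoOldEnglishV4")
model = AutoModelForSeq2SeqLM.from_pretrained("Tritkoman/EnglishtoOldEnglishV4")

# Translate an English sentence into Old English.
inputs = tokenizer("The king rode to the sea.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```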
|
{"datasets": ["Tritkoman/autotrain-data-oldenglish4"], "language": ["unk", "unk"], "tags": ["autotrain", "translation"], "co2_eq_emissions": {"emissions": 29.249758702505805}}
|
task
|
[
"TRANSLATION"
] | 45,377 |
zanafi/my-awesome-setfit-model
|
zanafi
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-06-24T10:07:29Z |
2023-06-24T10:07:41+00:00
| 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# zanafi/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
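For context, here is a minimal sketch of that two-step training loop using the legacy `setfit` trainer API (the dataset, base model, and hyperparameters are illustrative assumptions, not the recipe behind this checkpoint):
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer, sample_dataset

# Few-shot setup: sample 8 labeled examples per class from SST-2.
dataset = load_dataset("sst2")
train_ds = sample_dataset(dataset["train"], label_column="label", num_samples=8)

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the embedder
    batch_size=16,
    num_iterations=20,  # number of contrastive text pairs generated per example
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()  # step 2: fits the classification head on the tuned embeddings
```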
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("zanafi/my-awesome-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# zanafi/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
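For context, here is a minimal sketch of that two-step training loop using the legacy `setfit` trainer API (the dataset, base model, and hyperparameters are illustrative assumptions, not the recipe behind this checkpoint):
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer, sample_dataset

# Few-shot setup: sample 8 labeled examples per class from SST-2.
dataset = load_dataset("sst2")
train_ds = sample_dataset(dataset["train"], label_column="label", num_samples=8)

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the embedder
    batch_size=16,
    num_iterations=20,  # number of contrastive text pairs generated per example
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()  # step 2: fits the classification head on the tuned embeddings
```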
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("zanafi/my-awesome-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,378 |
Huzaifa68/investment_v3
|
Huzaifa68
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:34",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Huzaifa68/investment_v1",
"base_model:finetune:Huzaifa68/investment_v1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-27T05:22:01Z |
2024-09-27T05:22:22+00:00
| 10 | 0 |
---
base_model: Huzaifa68/investment_v1
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:34
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Fund Statistics
sentences:
- Investment Objective
- information
- statistics
- source_sentence: Investment Objective
sentences:
- Asset Allocation
- 1 Year - Rolling Returns
- objective
- source_sentence: Basic Information
sentences:
- Asset Allocation (as % of Total Assets)
- information
- investment committee
- source_sentence: Asset Allocation (as % of Total Assets)
sentences:
- investment objective
- Credit Quality (as % of Total Assets)
- Asset Allocation
- source_sentence: Asset Allocation (as % of Total Assets)
sentences:
- fund performance
- Asset Allocation
- 1 Year - Rolling Returns
---
# SentenceTransformer based on Huzaifa68/investment_v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Huzaifa68/investment_v1](https://huggingface.co/Huzaifa68/investment_v1) on the csv dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Huzaifa68/investment_v1](https://huggingface.co/Huzaifa68/investment_v1) <!-- at revision 27d3f59a04e7b99eaaefce04446b00b7ca617c79 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Huzaifa68/investment_v3")
# Run inference
sentences = [
'Asset Allocation (as % of Total Assets)',
'Asset Allocation',
'1 Year - Rolling Returns',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
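As a sketch of how these similarity scores can be used downstream, the snippet below ranks a few candidate section headers against a query; the candidate strings are illustrative only:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Huzaifa68/investment_v3")

query_emb = model.encode(["Asset Allocation (as % of Total Assets)"])
candidates = ["Asset Allocation", "Fund Statistics", "Investment Objective"]
cand_embs = model.encode(candidates)

# Cosine similarity between the query and every candidate
scores = model.similarity(query_emb, cand_embs)[0]
best = scores.argmax().item()
print(candidates[best], scores[best].item())
```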
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 34 training samples
* Columns: <code>anchor</code>, <code>postive</code>, and <code>negative</code>
* Approximate statistics based on the first 34 samples:
| | anchor | postive | negative |
|:--------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 5.29 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.68 tokens</li><li>max: 4 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 6.76 tokens</li><li>max: 11 tokens</li></ul> |
* Samples:
| anchor | postive | negative |
|:----------------------------------|:----------------------------------|:-----------------------------------------------------|
| <code>Investment Objective</code> | <code>investment objective</code> | <code>Asset Allocation (as % of Total Assets)</code> |
| <code>Investment Objective</code> | <code>investment objective</code> | <code>Fund Statistics</code> |
| <code>Investment Objective</code> | <code>investment objective</code> | <code>Fund Performance</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
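For reference, a minimal sketch of how this loss is typically wired up with Sentence Transformers v3 is shown below; the single training triplet is illustrative, not taken from the actual dataset:
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Illustrative triplet mirroring the (anchor, postive, negative) columns above
train_dataset = Dataset.from_dict({
    "anchor": ["Investment Objective"],
    "postive": ["investment objective"],
    "negative": ["Fund Statistics"],
})

model = SentenceTransformer("Huzaifa68/investment_v1")
loss = MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default similarity_fct

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```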
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.0
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on Huzaifa68/investment_v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Huzaifa68/investment_v1](https://huggingface.co/Huzaifa68/investment_v1) on the csv dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Huzaifa68/investment_v1](https://huggingface.co/Huzaifa68/investment_v1) <!-- at revision 27d3f59a04e7b99eaaefce04446b00b7ca617c79 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Huzaifa68/investment_v3")
# Run inference
sentences = [
'Asset Allocation (as % of Total Assets)',
'Asset Allocation',
'1 Year - Rolling Returns',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
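As a sketch of how these similarity scores can be used downstream, the snippet below ranks a few candidate section headers against a query; the candidate strings are illustrative only:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Huzaifa68/investment_v3")

query_emb = model.encode(["Asset Allocation (as % of Total Assets)"])
candidates = ["Asset Allocation", "Fund Statistics", "Investment Objective"]
cand_embs = model.encode(candidates)

# Cosine similarity between the query and every candidate
scores = model.similarity(query_emb, cand_embs)[0]
best = scores.argmax().item()
print(candidates[best], scores[best].item())
```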
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 34 training samples
* Columns: <code>anchor</code>, <code>postive</code>, and <code>negative</code>
* Approximate statistics based on the first 34 samples:
| | anchor | postive | negative |
|:--------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 5.29 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.68 tokens</li><li>max: 4 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 6.76 tokens</li><li>max: 11 tokens</li></ul> |
* Samples:
| anchor | postive | negative |
|:----------------------------------|:----------------------------------|:-----------------------------------------------------|
| <code>Investment Objective</code> | <code>investment objective</code> | <code>Asset Allocation (as % of Total Assets)</code> |
| <code>Investment Objective</code> | <code>investment objective</code> | <code>Fund Statistics</code> |
| <code>Investment Objective</code> | <code>investment objective</code> | <code>Fund Performance</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
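For reference, a minimal sketch of how this loss is typically wired up with Sentence Transformers v3 is shown below; the single training triplet is illustrative, not taken from the actual dataset:
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Illustrative triplet mirroring the (anchor, postive, negative) columns above
train_dataset = Dataset.from_dict({
    "anchor": ["Investment Objective"],
    "postive": ["investment objective"],
    "negative": ["Fund Statistics"],
})

model = SentenceTransformer("Huzaifa68/investment_v1")
loss = MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default similarity_fct

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```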
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.0
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "Huzaifa68/investment_v1", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:34", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Fund Statistics", "sentences": ["Investment Objective", "information", "statistics"]}, {"source_sentence": "Investment Objective", "sentences": ["Asset Allocation", "1 Year - Rolling Returns", "objective"]}, {"source_sentence": "Basic Information", "sentences": ["Asset Allocation (as % of Total Assets)", "information", "investment committee"]}, {"source_sentence": "Asset Allocation (as % of Total Assets)", "sentences": ["investment objective", "Credit Quality (as % of Total Assets)", "Asset Allocation"]}, {"source_sentence": "Asset Allocation (as % of Total Assets)", "sentences": ["fund performance", "Asset Allocation", "1 Year - Rolling Returns"]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,379 |
dmitry-vorobiev/rubert_ria_headlines
|
dmitry-vorobiev
|
summarization
|
[
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"bert",
"rubert",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-05-09T14:56:55+00:00
| 1,397 | 2 |
---
language:
- ru
license: mit
tags:
- summarization
- bert
- rubert
---
# rubert_ria_headlines
## Description
A *bert2bert* model, initialized with the `DeepPavlov/rubert-base-cased` pretrained weights and
fine-tuned on the first 99% of ["Rossiya Segodnya" news dataset](https://github.com/RossiyaSegodnya/ria_news_dataset) for 2 epochs.
## Usage example
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
MODEL_NAME = "dmitry-vorobiev/rubert_ria_headlines"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
text = "Скопируйте текст статьи / новости"  # i.e. "Paste the article / news text here"
encoded_batch = tokenizer.prepare_seq2seq_batch(
[text],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512)
output_ids = model.generate(
input_ids=encoded_batch["input_ids"],
max_length=36,
no_repeat_ngram_size=3,
num_beams=5,
top_k=0
)
headline = tokenizer.decode(output_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=False)
print(headline)
```
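For batched inference, the deprecated `prepare_seq2seq_batch` call can be replaced with a plain tokenizer call. A minimal sketch (the article texts are placeholders):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "dmitry-vorobiev/rubert_ria_headlines"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

texts = ["Текст первой статьи ...", "Текст второй статьи ..."]  # placeholder article texts
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=512)
output_ids = model.generate(**batch, max_length=36, no_repeat_ngram_size=3, num_beams=5)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```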
## Datasets
- [ria_news](https://github.com/RossiyaSegodnya/ria_news_dataset)
## How was it trained?
I used a free TPUv3 on Kaggle. The model was trained for 3 epochs with an effective batch size of 192 and soft restarts (warmup steps of 1500 / 500 / 500, with a fresh optimizer state at the start of each epoch).
- [1 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53254694)
- [2 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53269040)
- [3 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53280797)
Common train params:
```shell
export XLA_USE_BF16=1
export XLA_TENSOR_ALLOCATOR_MAXSIZE=100000000
python nlp_headline_rus/src/train_seq2seq.py \
--do_train \
--tie_encoder_decoder \
--max_source_length 512 \
--max_target_length 32 \
--val_max_target_length 48 \
--tpu_num_cores 8 \
--per_device_train_batch_size 24 \
--gradient_accumulation_steps 1 \
--learning_rate 5e-4 \
--adam_epsilon 1e-6 \
--weight_decay 1e-5 \
```
## Validation results
- Using [last 1% of ria](https://drive.google.com/drive/folders/1ztAeyb1BiLMgXwOgOJS7WMR4PGiI1q92) dataset
- Using [gazeta_ru test](https://drive.google.com/drive/folders/1CyowuRpecsLTcDbqEfmAvkCWOod58g_e) split
- Using [gazeta_ru val](https://drive.google.com/drive/folders/1XZFOXHSXLKdhzm61ceVLw3aautrdskIu) split
| null |
Non_BioNLP
|
# rubert_ria_headlines
## Description
A *bert2bert* model, initialized with the `DeepPavlov/rubert-base-cased` pretrained weights and
fine-tuned on the first 99% of ["Rossiya Segodnya" news dataset](https://github.com/RossiyaSegodnya/ria_news_dataset) for 2 epochs.
## Usage example
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
MODEL_NAME = "dmitry-vorobiev/rubert_ria_headlines"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
text = "Скопируйте текст статьи / новости"  # i.e. "Paste the article / news text here"
encoded_batch = tokenizer.prepare_seq2seq_batch(
[text],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512)
output_ids = model.generate(
input_ids=encoded_batch["input_ids"],
max_length=36,
no_repeat_ngram_size=3,
num_beams=5,
top_k=0
)
headline = tokenizer.decode(output_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=False)
print(headline)
```
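For batched inference, the deprecated `prepare_seq2seq_batch` call can be replaced with a plain tokenizer call. A minimal sketch (the article texts are placeholders):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "dmitry-vorobiev/rubert_ria_headlines"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

texts = ["Текст первой статьи ...", "Текст второй статьи ..."]  # placeholder article texts
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=512)
output_ids = model.generate(**batch, max_length=36, no_repeat_ngram_size=3, num_beams=5)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```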
## Datasets
- [ria_news](https://github.com/RossiyaSegodnya/ria_news_dataset)
## How was it trained?
I used a free TPUv3 on Kaggle. The model was trained for 3 epochs with an effective batch size of 192 and soft restarts (warmup steps of 1500 / 500 / 500, with a fresh optimizer state at the start of each epoch).
- [1 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53254694)
- [2 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53269040)
- [3 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53280797)
Common train params:
```shell
export XLA_USE_BF16=1
export XLA_TENSOR_ALLOCATOR_MAXSIZE=100000000
python nlp_headline_rus/src/train_seq2seq.py \
--do_train \
--tie_encoder_decoder \
--max_source_length 512 \
--max_target_length 32 \
--val_max_target_length 48 \
--tpu_num_cores 8 \
--per_device_train_batch_size 24 \
--gradient_accumulation_steps 1 \
--learning_rate 5e-4 \
--adam_epsilon 1e-6 \
--weight_decay 1e-5 \
```
## Validation results
- Using [last 1% of ria](https://drive.google.com/drive/folders/1ztAeyb1BiLMgXwOgOJS7WMR4PGiI1q92) dataset
- Using [gazeta_ru test](https://drive.google.com/drive/folders/1CyowuRpecsLTcDbqEfmAvkCWOod58g_e) split
- Using [gazeta_ru val](https://drive.google.com/drive/folders/1XZFOXHSXLKdhzm61ceVLw3aautrdskIu) split
|
{"language": ["ru"], "license": "mit", "tags": ["summarization", "bert", "rubert"]}
|
task
|
[
"SUMMARIZATION"
] | 45,380 |
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_backtranslation-1
|
ThuyNT03
|
text-classification
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-04T21:37:50Z |
2023-09-07T06:43:03+00:00
| 7 | 0 |
---
base_model: xlm-roberta-base
license: mit
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_backtranslation-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_backtranslation-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3364
- Accuracy: 0.7
- F1: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9909 | 1.0 | 87 | 0.8850 | 0.6 | 0.5586 |
| 0.7303 | 2.0 | 174 | 0.6941 | 0.69 | 0.6767 |
| 0.5713 | 3.0 | 261 | 0.7149 | 0.73 | 0.7215 |
| 0.4254 | 4.0 | 348 | 0.6955 | 0.75 | 0.7492 |
| 0.331 | 5.0 | 435 | 0.9854 | 0.69 | 0.6737 |
| 0.2373 | 6.0 | 522 | 1.0423 | 0.7 | 0.6909 |
| 0.1995 | 7.0 | 609 | 1.2707 | 0.69 | 0.6806 |
| 0.1713 | 8.0 | 696 | 1.3364 | 0.7 | 0.6913 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_backtranslation-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3364
- Accuracy: 0.7
- F1: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9909 | 1.0 | 87 | 0.8850 | 0.6 | 0.5586 |
| 0.7303 | 2.0 | 174 | 0.6941 | 0.69 | 0.6767 |
| 0.5713 | 3.0 | 261 | 0.7149 | 0.73 | 0.7215 |
| 0.4254 | 4.0 | 348 | 0.6955 | 0.75 | 0.7492 |
| 0.331 | 5.0 | 435 | 0.9854 | 0.69 | 0.6737 |
| 0.2373 | 6.0 | 522 | 1.0423 | 0.7 | 0.6909 |
| 0.1995 | 7.0 | 609 | 1.2707 | 0.69 | 0.6806 |
| 0.1713 | 8.0 | 696 | 1.3364 | 0.7 | 0.6913 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
{"base_model": "xlm-roberta-base", "license": "mit", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "xlm-roberta-base-Final_Mixed-aug_backtranslation-1", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,381 |
somosnlp/gemma-FULL-RAC-Colombia
|
somosnlp
|
text-generation
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"LLMs-Aviation",
"AI-Regulatory-Compliance",
"RAC-AI-Colombia",
"conversational",
"es",
"dataset:somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-03-17T13:01:44Z |
2024-03-17T16:35:33+00:00
| 16 | 15 |
---
datasets:
- somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma
language:
- es
library_name: transformers
license: apache-2.0
tags:
- LLMs-Aviation
- AI-Regulatory-Compliance
- RAC-AI-Colombia
widget:
- text: '<bos><start_of_turn>system
You are a helpful AI assistant.
Responde en formato json.
Eres un agente experto en la normativa aeronautica Colombiana.<end_of_turn>
<start_of_turn>user
¿Qué sucede con las empresas de servicios aéreos comerciales que no hayan actualizado
su permiso de operación después del 31 de marzo de 2024?<end_of_turn>
<start_of_turn>model'
---
# Model Card for Gemma-FULL-RAC-Colombia 🛫
<p align="center">
<img src="https://hf.fast360.xyz/production/uploads/6419c2f6b4adb0e101b17b6c/JlVz94BKdUkRN8xU00EgT.png" alt="Model Illustration" width="500">
</p>
🌟 This model card proudly presents Gemma-FULL-RAC-Colombia, a groundbreaking language model fine-tuned with precision on the Colombian Aeronautical Regulations (Reglamento Aeronáutico Colombiano, RAC) dataset. Harnessing the advanced capabilities of Google's original Gemma model, this adaptation focuses keenly on the aviation regulatory sphere within Colombia.
## Table of Contents
- [Model Card for Gemma-FULL-RAC-Colombia 🛫](#model-card-for-gemma-full-rac-colombia-)
* [Model Details 📈](#model-details-)
+ [Model Description 📘](#model-description-)
* [Uses 🛠️](#uses-)
+ [Direct Use 🎯](#direct-use-)
+ [Downstream Use [optional] 🔄](#downstream-use-optional-)
+ [Out-of-Scope Use 🚫](#out-of-scope-use-)
* [Bias, Risks, and Limitations ⚠️](#bias-risks-and-limitations-)
+ [Recommendations 📝](#recommendations-)
* [How to Get Started with the Model 🚀](#how-to-get-started-with-the-model-)
* [Training Details 🧠](#training-details-)
+ [Environment and Libraries 🖥️](#environment-and-libraries-)
+ [Model Architecture 🏗️](#model-architecture-)
+ [Fine-tuning with LoRA 🔧](#fine-tuning-with-lora-)
+ [Training Process 🚀](#training-process-)
+ [Training Data 📚](#training-data-)
+ [Training Procedure](#training-procedure)
* [Environmental Impact](#environmental-impact)
## Model Details 📈
### Model Description 📘
🚀 The Gemma-FULL-RAC-Colombia model marks an innovative leap in deploying language models for niche domains like aviation law and regulation. Developed by Edison Bejarano and Nicolai Potes, it's finely tuned using LoRA (Low-Rank Adaptation), ensuring minimal adjustment to the model's weights while preserving the extensive knowledge and prowess of Google's Gemma. This meticulous approach guarantees not just the retention of broad language comprehension and generation skills but also a profound grasp of the terminologies, concepts, and regulatory frameworks particular to Colombian aviation.
- **Developed by:** [Edison Bejarano](https://huggingface.co/ejbejaranos) and [Nicolai Potes](https://huggingface.co/NickyNicky) ✨
- **Funded by:** Fundación Universitaria Los Libertadores 🏛️
- **Shared by [optional]:** This project is a collective endeavor of passionate NLP and aviation regulation specialists, united by a vision to make aviation laws more accessible and understandable. 👥
- **Model type:** Causal Language Model, specially fine-tuned with LoRA for the distinct area of Colombian Aeronautical Regulations. 📊
- **Language(s) (NLP):** Spanish, tailored for the Colombian legal and regulatory context. 🇨🇴
- **License:** Apache License. This open-source license ensures that the model can be freely used, modified, and distributed. Please check the Hugging Face model page for specific licensing details. 🔓
- **Finetuned from model [optional]:** Derived from Google's Gemma model, utilizing versions such as `gemma-2b-it` for comprehensive training. 🧠
### Loss Function through Epochs 📉
<p align="center">
<img src="https://hf.fast360.xyz/production/uploads/6419c2f6b4adb0e101b17b6c/vufJ_u_5ZrdzNcSrqwjBx.png" alt="Loss Function Graph" width="500">
</p>
## Uses 🛠️
The Gemma-FULL-RAC-Colombia model is designed to enhance the understanding and application of the Colombian Aeronautical Regulations (RAC) through natural language processing. It's tailored for professionals and enthusiasts in the aviation industry, regulatory agencies, legal experts, and AI researchers with an interest in domain-specific language model applications.
### Direct Use 🎯
The model can be directly used to generate text, answer questions, and provide insights related to the Colombian Aeronautical Regulations without further fine-tuning. It's ideal for creating educational content, simplifying legal language, and assisting in regulatory compliance efforts.
### Downstream Use [optional] 🔄
When fine-tuned, Gemma-FULL-RAC-Colombia can be integrated into larger systems for automated compliance checks, document summarization, and even training simulators for pilots and air traffic controllers, offering a deeper, contextual understanding of regulations.
### Out-of-Scope Use 🚫
Misuse includes any application that promotes unethical practices, misinterprets aviation law, or uses the model for malicious purposes. The model is not designed for navigational purposes or to replace professional legal advice.
## Bias, Risks, and Limitations ⚠️
The model, while powerful, has limitations inherent to AI, including biases present in the training data. It may not cover all nuances of aviation regulations outside of Colombia or adapt to changes in law without updates.
### Recommendations 📝
Users should verify model outputs against current regulations and consult with professionals for critical applications. Awareness of the model's scope and limitations is crucial for effective use.
## How to Get Started with the Model 🚀
```python
# Assuming Hugging Face's Transformers and Datasets are installed
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "somosnlp/gemma-FULL-RAC-Colombia"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
input_text = "Explain the regulations for drone flights in Colombia."
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
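Because the model was fine-tuned on ChatML-style turns (see the widget format above), inference generally works best when the prompt uses the same `<start_of_turn>` structure. A sketch with an illustrative question:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "somosnlp/gemma-FULL-RAC-Colombia"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "<bos><start_of_turn>system\n"
    "You are a helpful AI assistant.\n"
    "Responde en formato json.\n"
    "Eres un agente experto en la normativa aeronautica Colombiana.<end_of_turn>\n"
    "<start_of_turn>user\n"
    "¿Qué requisitos aplican a los servicios aéreos comerciales?<end_of_turn>\n"  # illustrative question
    "<start_of_turn>model\n"
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```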
## Training Details 🧠
The Gemma-FULL-RAC-Colombia model was specifically fine-tuned on the "Reglamento Aeronáutico Colombiano FULL ChatML format gemma" dataset available through Hugging Face Datasets. This dataset encompasses a comprehensive collection of the Colombian Aeronautical Regulations (RAC), tailored to train the model for understanding and generating responses related to aviation regulations in Colombia.
### Environment and Libraries 🖥️
The training was executed in a Python environment leveraging the following main libraries:
- `transformers` for model loading and fine-tuning facilities.
- `datasets` for handling the dataset efficiently.
- `torch` for the underlying deep learning framework.
- `peft` for applying the Low-Rank Adaptation (LoRA) adjustments to the model.
### Model Architecture 🏗️
The base model is Google's Gemma, an advanced language model with billions of parameters, capable of understanding context and generating human-like text. For this project, specific versions of Gemma (e.g., `gemma-2b-it`) were used as starting points.
### Fine-tuning with LoRA 🔧
LoRA (Low-Rank Adaptation) was employed to adapt the Gemma model to the specific domain of Colombian Aeronautical Regulations without extensively retraining the entire model. This approach allows for the efficient update of the model's weights, focusing on the adaptation layers to integrate the domain-specific knowledge effectively.
- **LoRA Configurations:** r=64, lora_alpha=32, among other settings tailored to balance performance and efficiency.
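A sketch of the corresponding PEFT configuration is shown below; only `r` and `lora_alpha` come from the card, while `target_modules`, the dropout, and the base checkpoint are assumptions:
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
lora_config = LoraConfig(
    r=64,                      # from the card
    lora_alpha=32,             # from the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.05,         # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```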
### Training Process 🚀
Training was carried out considering the following key parameters:
- Batch size: Adapted based on the available computational resources.
- Learning rate: Fine-tuned for optimal convergence without overfitting.
- Epochs: The model was trained for a sufficient number of epochs to ensure that it adequately learned from the dataset, with early stopping criteria to prevent overfitting.
### Evaluation and Performance 📊
*Pending.*
### Model Merging and Saving 💾
After fine-tuning, the LoRA-adjusted weights were merged back with the base Gemma model to create the final version of Gemma-FULL-RAC-Colombia. The model was then saved and made available through Hugging Face, ensuring easy access for further use and development.
The training process was meticulously documented to ensure reproducibility and facilitate further research and development in the domain of specialized language models for regulatory compliance and legal tech applications.
### Training Data 📚
The training data utilized for fine-tuning the Gemma-FULL-RAC-Colombia model is available on Hugging Face Datasets under the dataset identifier:
[somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma](https://huggingface.co/datasets/somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma)
This dataset comprises a targeted collection of the Colombian Aeronautical Regulations (RAC), specifically including regulations from RAC 1 to RAC 5. The selection is optimized for chat and language modeling tasks, aiming to train the model to understand and generate accurate responses related to the initial segments of Colombian aviation regulations.
### Training Procedure
The Gemma-FULL-RAC-Colombia model underwent a meticulous training process, leveraging the NVIDIA V100 GPU for its computational needs. The procedure was designed to fine-tune the model on a dataset comprising sections RAC 1 to RAC 5 of the Colombian Aeronautical Regulations.
#### Preprocessing
The dataset was preprocessed to convert the regulatory texts into a format suitable for the model, utilizing the tokenizer associated with the `google/gemma-2b-it` model. This step ensured that the input data was optimized for training efficiency and model comprehension.
#### Training Hyperparameters
The training utilized a mixed precision regime to balance computational efficiency and model accuracy. Specific hyperparameters included:
- **Training regime:** Mixed precision (FP16)
#### Speeds, Sizes, Times
The model was trained over a span of approximately 4.67 hours (runtime: 16,020 seconds), with checkpoints indicating progress at various intervals. The training demonstrated efficient utilization of the V100 GPU, maintaining a steady throughput.
#### Factors
Training focused on the model's ability to understand and generate text related to the specific domain of Colombian Aeronautical Regulations.
#### Metrics
The primary metric for evaluating the model's performance was the training loss, which was observed to decrease steadily over time, indicating effective learning and adaptation by the model.
### Results
The training process resulted in a model capable of accurately interpreting and responding to queries related to the Colombian Aeronautical Regulations, with a notable final training loss of 0.6204 at the last checkpoint.
## Environmental Impact
Given the use of an NVIDIA V100 GPU for approximately 4.67 hours, the carbon emissions can be estimated using the Machine Learning Impact calculator. This tool accounts for the hardware type, runtime, and other factors to provide a comprehensive view of the environmental impact of training large AI models.
- **Hardware Type:** NVIDIA V100 GPU
- **Hours used:** ~4.67
- **Carbon Emitted:** ~554.56 gCO2eq (≈0.55 kg CO2eq)

```python
# Constants
power_consumption_kW = 0.25  # 250 watts in kW
runtime_hours = 4.67
carbon_intensity_gCO2eq_per_kWh = 475  # global average carbon intensity

# Calculate carbon emissions
carbon_emitted_gCO2eq = power_consumption_kW * runtime_hours * carbon_intensity_gCO2eq_per_kWh
print(carbon_emitted_gCO2eq)  # 554.5625
```
| null |
Non_BioNLP
|
# Model Card for Gemma-FULL-RAC-Colombia 🛫
<p align="center">
<img src="https://hf.fast360.xyz/production/uploads/6419c2f6b4adb0e101b17b6c/JlVz94BKdUkRN8xU00EgT.png" alt="Model Illustration" width="500">
</p>
🌟 This model card proudly presents Gemma-FULL-RAC-Colombia, a groundbreaking language model fine-tuned with precision on the Colombian Aeronautical Regulations (Reglamento Aeronáutico Colombiano, RAC) dataset. Harnessing the advanced capabilities of Google's original Gemma model, this adaptation focuses keenly on the aviation regulatory sphere within Colombia.
## Table of Contents
- [Model Card for Gemma-FULL-RAC-Colombia 🛫](#model-card-for-gemma-full-rac-colombia-)
* [Model Details 📈](#model-details-)
+ [Model Description 📘](#model-description-)
* [Uses 🛠️](#uses-)
+ [Direct Use 🎯](#direct-use-)
+ [Downstream Use [optional] 🔄](#downstream-use-optional-)
+ [Out-of-Scope Use 🚫](#out-of-scope-use-)
* [Bias, Risks, and Limitations ⚠️](#bias-risks-and-limitations-)
+ [Recommendations 📝](#recommendations-)
* [How to Get Started with the Model 🚀](#how-to-get-started-with-the-model-)
* [Training Details 🧠](#training-details-)
+ [Environment and Libraries 🖥️](#environment-and-libraries-)
+ [Model Architecture 🏗️](#model-architecture-)
+ [Fine-tuning with LoRA 🔧](#fine-tuning-with-lora-)
+ [Training Process 🚀](#training-process-)
+ [Training Data 📚](#training-data-)
+ [Training Procedure](#training-procedure)
* [Environmental Impact](#environmental-impact)
## Model Details 📈
### Model Description 📘
🚀 The Gemma-FULL-RAC-Colombia model marks an innovative leap in deploying language models for niche domains like aviation law and regulation. Developed by Edison Bejarano and Nicolai Potes, it's finely tuned using LoRA (Low-Rank Adaptation), ensuring minimal adjustment to the model's weights while preserving the extensive knowledge and prowess of Google's Gemma. This meticulous approach guarantees not just the retention of broad language comprehension and generation skills but also a profound grasp of the terminologies, concepts, and regulatory frameworks particular to Colombian aviation.
- **Developed by:** [Edison Bejarano](https://huggingface.co/ejbejaranos) and [Nicolai Potes](https://huggingface.co/NickyNicky) ✨
- **Funded by:** Fundación Universitaria Los Libertadores 🏛️
- **Shared by [optional]:** This project is a collective endeavor of passionate NLP and aviation regulation specialists, united by a vision to make aviation laws more accessible and understandable. 👥
- **Model type:** Causal Language Model, specially fine-tuned with LoRA for the distinct area of Colombian Aeronautical Regulations. 📊
- **Language(s) (NLP):** Spanish, tailored for the Colombian legal and regulatory context. 🇨🇴
- **License:** Apache License. This open-source license ensures that the model can be freely used, modified, and distributed. Please check the Hugging Face model page for specific licensing details. 🔓
- **Finetuned from model [optional]:** Derived from Google's Gemma model, utilizing versions such as `gemma-2b-it` for comprehensive training. 🧠
### Loss Function through Epochs 📉
<p align="center">
<img src="https://hf.fast360.xyz/production/uploads/6419c2f6b4adb0e101b17b6c/vufJ_u_5ZrdzNcSrqwjBx.png" alt="Loss Function Graph" width="500">
</p>
## Uses 🛠️
The Gemma-FULL-RAC-Colombia model is designed to enhance the understanding and application of the Colombian Aeronautical Regulations (RAC) through natural language processing. It's tailored for professionals and enthusiasts in the aviation industry, regulatory agencies, legal experts, and AI researchers with an interest in domain-specific language model applications.
### Direct Use 🎯
The model can be directly used to generate text, answer questions, and provide insights related to the Colombian Aeronautical Regulations without further fine-tuning. It's ideal for creating educational content, simplifying legal language, and assisting in regulatory compliance efforts.
### Downstream Use [optional] 🔄
When fine-tuned, Gemma-FULL-RAC-Colombia can be integrated into larger systems for automated compliance checks, document summarization, and even training simulators for pilots and air traffic controllers, offering a deeper, contextual understanding of regulations.
### Out-of-Scope Use 🚫
Misuse includes any application that promotes unethical practices, misinterprets aviation law, or uses the model for malicious purposes. The model is not designed for navigational purposes or to replace professional legal advice.
## Bias, Risks, and Limitations ⚠️
The model, while powerful, has limitations inherent to AI, including biases present in the training data. It may not cover all nuances of aviation regulations outside of Colombia or adapt to changes in law without updates.
### Recommendations 📝
Users should verify model outputs against current regulations and consult with professionals for critical applications. Awareness of the model's scope and limitations is crucial for effective use.
## How to Get Started with the Model 🚀
```python
# Assuming Hugging Face's Transformers and Datasets are installed
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "somosnlp/gemma-FULL-RAC-Colombia"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
input_text = "Explain the regulations for drone flights in Colombia."
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
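Because the model was fine-tuned on ChatML-style turns (see the widget format above), inference generally works best when the prompt uses the same `<start_of_turn>` structure. A sketch with an illustrative question:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "somosnlp/gemma-FULL-RAC-Colombia"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "<bos><start_of_turn>system\n"
    "You are a helpful AI assistant.\n"
    "Responde en formato json.\n"
    "Eres un agente experto en la normativa aeronautica Colombiana.<end_of_turn>\n"
    "<start_of_turn>user\n"
    "¿Qué requisitos aplican a los servicios aéreos comerciales?<end_of_turn>\n"  # illustrative question
    "<start_of_turn>model\n"
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```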
## Training Details 🧠
The Gemma-FULL-RAC-Colombia model was specifically fine-tuned on the "Reglamento Aeronáutico Colombiano FULL ChatML format gemma" dataset available through Hugging Face Datasets. This dataset encompasses a comprehensive collection of the Colombian Aeronautical Regulations (RAC), tailored to train the model for understanding and generating responses related to aviation regulations in Colombia.
### Environment and Libraries 🖥️
The training was executed in a Python environment leveraging the following main libraries:
- `transformers` for model loading and fine-tuning facilities.
- `datasets` for handling the dataset efficiently.
- `torch` for the underlying deep learning framework.
- `peft` for applying the Low-Rank Adaptation (LoRA) adjustments to the model.
### Model Architecture 🏗️
The base model is Google's Gemma, an advanced language model with billions of parameters, capable of understanding context and generating human-like text. For this project, specific versions of Gemma (e.g., `gemma-2b-it`) were used as starting points.
### Fine-tuning with LoRA 🔧
LoRA (Low-Rank Adaptation) was employed to adapt the Gemma model to the specific domain of Colombian Aeronautical Regulations without extensively retraining the entire model. This approach allows for the efficient update of the model's weights, focusing on the adaptation layers to integrate the domain-specific knowledge effectively.
- **LoRA Configurations:** r=64, lora_alpha=32, among other settings tailored to balance performance and efficiency.
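A sketch of the corresponding PEFT configuration is shown below; only `r` and `lora_alpha` come from the card, while `target_modules`, the dropout, and the base checkpoint are assumptions:
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
lora_config = LoraConfig(
    r=64,                      # from the card
    lora_alpha=32,             # from the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.05,         # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```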
### Training Process 🚀
Training was carried out considering the following key parameters:
- Batch size: Adapted based on the available computational resources.
- Learning rate: Fine-tuned for optimal convergence without overfitting.
- Epochs: The model was trained for a sufficient number of epochs to ensure that it adequately learned from the dataset, with early stopping criteria to prevent overfitting.
### Evaluation and Performance 📊
*Pending.*
### Model Merging and Saving 💾
After fine-tuning, the LoRA-adjusted weights were merged back with the base Gemma model to create the final version of Gemma-FULL-RAC-Colombia. The model was then saved and made available through Hugging Face, ensuring easy access for further use and development.
The training process was meticulously documented to ensure reproducibility and facilitate further research and development in the domain of specialized language models for regulatory compliance and legal tech applications.
### Training Data 📚
The training data utilized for fine-tuning the Gemma-FULL-RAC-Colombia model is available on Hugging Face Datasets under the dataset identifier:
[somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma](https://huggingface.co/datasets/somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma)
This dataset comprises a targeted collection of the Colombian Aeronautical Regulations (RAC), specifically including regulations from RAC 1 to RAC 5. The selection is optimized for chat and language modeling tasks, aiming to train the model to understand and generate accurate responses related to the initial segments of Colombian aviation regulations.
### Training Procedure
The Gemma-FULL-RAC-Colombia model underwent a meticulous training process, leveraging the NVIDIA V100 GPU for its computational needs. The procedure was designed to fine-tune the model on a dataset comprising sections RAC 1 to RAC 5 of the Colombian Aeronautical Regulations.
#### Preprocessing
The dataset was preprocessed to convert the regulatory texts into a format suitable for the model, utilizing the tokenizer associated with the `google/gemma-2b-it` model. This step ensured that the input data was optimized for training efficiency and model comprehension.
#### Training Hyperparameters
The training utilized a mixed precision regime to balance computational efficiency and model accuracy. Specific hyperparameters included:
- **Training regime:** Mixed precision (FP16)
#### Speeds, Sizes, Times
The model was trained over a span of approximately 4.67 hours (runtime: 16,020 seconds), with checkpoints indicating progress at various intervals. The training demonstrated efficient utilization of the V100 GPU, maintaining a steady throughput.
#### Factors
Training focused on the model's ability to understand and generate text related to the specific domain of Colombian Aeronautical Regulations.
#### Metrics
The primary metric for evaluating the model's performance was the training loss, which was observed to decrease steadily over time, indicating effective learning and adaptation by the model.
### Results
The training process resulted in a model capable of accurately interpreting and responding to queries related to the Colombian Aeronautical Regulations, with a notable final training loss of 0.6204 at the last checkpoint.
## Environmental Impact
Given the use of an NVIDIA V100 GPU for approximately 4.67 hours, the carbon emissions can be estimated using the Machine Learning Impact calculator. This tool accounts for the hardware type, runtime, and other factors to provide a comprehensive view of the environmental impact of training large AI models.
- **Hardware Type:** NVIDIA V100 GPU
- **Hours used:** ~4.67
- **Carbon Emitted:** ~554.56 gCO2eq (≈0.55 kg CO2eq)

```python
# Constants
power_consumption_kW = 0.25  # 250 watts in kW
runtime_hours = 4.67
carbon_intensity_gCO2eq_per_kWh = 475  # global average carbon intensity

# Calculate carbon emissions
carbon_emitted_gCO2eq = power_consumption_kW * runtime_hours * carbon_intensity_gCO2eq_per_kWh
print(carbon_emitted_gCO2eq)  # 554.5625
```
|
{"datasets": ["somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma"], "language": ["es"], "library_name": "transformers", "license": "apache-2.0", "tags": ["LLMs-Aviation", "AI-Regulatory-Compliance", "RAC-AI-Colombia"], "widget": [{"text": "<bos><start_of_turn>system\nYou are a helpful AI assistant.\nResponde en formato json.\nEres un agente experto en la normativa aeronautica Colombiana.<end_of_turn>\n<start_of_turn>user\n¿Qué sucede con las empresas de servicios aéreos comerciales que no hayan actualizado su permiso de operación después del 31 de marzo de 2024?<end_of_turn>\n<start_of_turn>model"}]}
|
task
|
[
"SUMMARIZATION"
] | 45,382 |
flax-sentence-embeddings/multi-qa_v1-distilbert-cls_dot
|
flax-sentence-embeddings
|
sentence-similarity
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"arxiv:2102.07033",
"arxiv:2104.08727",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-07-26T01:34:32+00:00
| 153 | 0 |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# multi-qa_v1-distilbert-cls_dot
## Model Description
SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for clustering, semantic search, and other tasks. We used a pretrained [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model and trained it with a Siamese network setup and a contrastive learning objective. Question and answer pairs from StackExchange were used as training data to make the model robust for question/answer embedding similarity. For this model, the CLS token output was used instead of mean pooling as the sentence embedding, and the dot product was used to compute similarity for the learning objective.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures
the sentence's semantic information. The sentence vector may be used for semantic search, clustering, or sentence-similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/multi-qa_v1-distilbert-cls_dot')
text = "Replace me with any question / answer you'd like."
text_embedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
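Since the model pairs CLS-pooled embeddings with a dot-product objective, retrieval scores should be computed with a dot product as well. A small illustrative sketch (the texts are made up):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('flax-sentence-embeddings/multi-qa_v1-distilbert-cls_dot')

question_emb = model.encode("How can I reset my password?")
answer_embs = model.encode([
    "Click 'Forgot password' on the login page and follow the emailed link.",
    "The capital of France is Paris.",
])
scores = util.dot_score(question_emb, answer_embs)  # dot product matches the training objective
print(scores)  # the higher-scoring answer is the better match
```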
# Training procedure
## Pre-training
We use the pretrained [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased). Please refer to the model
card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch,
then apply a cross-entropy loss against the true pairs.
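A minimal PyTorch sketch of this in-batch contrastive objective is shown below; it scores pairs with a scaled dot product over the embeddings, consistent with the `cls_dot` objective in the model description, and the scale value is an assumption:
```python
import torch
import torch.nn.functional as F

def contrastive_loss(q: torch.Tensor, a: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """q, a: (batch, dim) embeddings of paired questions and answers."""
    scores = q @ a.T * scale                            # every question scored against every answer
    labels = torch.arange(q.size(0), device=q.device)   # the i-th answer is the true pair of the i-th question
    return F.cross_entropy(scores, labels)
```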
### Hyperparameters
We trained the model on a TPU v3-8 for 80k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens, and we used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository.
### Training data
We used the concatenation of multiple StackExchange question-answer datasets to fine-tune our model. MS MARCO, NQ and other question-answer datasets were also used.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| SearchQA | - | 582,261 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| null |
Non_BioNLP
|
{"pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,383 |
sobamchan/st5-base-mean-300
|
sobamchan
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"t5",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-26T11:26:35Z |
2025-02-26T11:27:25+00:00
| 9 | 0 |
---
base_model: google-t5/t5-base
datasets:
- sentence-transformers/all-nli
language:
- en
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: A man is jumping unto his filthy bed.
sentences:
- A young male is looking at a newspaper while 2 females walks past him.
- The bed is dirty.
- The man is on the moon.
- source_sentence: A carefully balanced male stands on one foot near a clean ocean
beach area.
sentences:
- A man is ouside near the beach.
- Three policemen patrol the streets on bikes
- A man is sitting on his couch.
- source_sentence: The man is wearing a blue shirt.
sentences:
- Near the trashcan the man stood and smoked
- A man in a blue shirt leans on a wall beside a road with a blue van and red car
with water in the background.
- A man in a black shirt is playing a guitar.
- source_sentence: The girls are outdoors.
sentences:
- Two girls riding on an amusement part ride.
- a guy laughs while doing laundry
- Three girls are standing together in a room, one is listening, one is writing
on a wall and the third is talking to them.
- source_sentence: A construction worker peeking out of a manhole while his coworker
sits on the sidewalk smiling.
sentences:
- A worker is looking out of a manhole.
- A man is giving a presentation.
- The workers are both inside the manhole.
---
# SentenceTransformer based on google-t5/t5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) <!-- at revision a9723ea7f1b39c1eae772870f3b547bf6ef7e6c1 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
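For reference, an equivalent module stack can be assembled by hand with the Sentence Transformers building blocks (a sketch of the architecture above, not the published training code):
```python
from sentence_transformers import SentenceTransformer, models

word_embedding = models.Transformer("google-t5/t5-base", max_seq_length=256)  # T5 encoder
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(),
                         pooling_mode="mean")                                 # mean pooling
normalize = models.Normalize()                                                # unit-length vectors
model = SentenceTransformer(modules=[word_embedding, pooling, normalize])
```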
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.',
'A worker is looking out of a manhole.',
'The workers are both inside the manhole.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.96 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.79 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.02 tokens</li><li>max: 57 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
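A rough sketch of how this setup could be reproduced with the Sentence Transformers trainer API; the dataset config name and the omitted output/eval settings are assumptions, not details recorded in this card:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("google-t5/t5-base")

# (anchor, positive, negative) triplets, as in the samples above
train_dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="train")

# In-batch negatives loss with the scale listed above (cosine similarity is the default)
loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```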
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 19.41 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.69 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.35 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0011 | 10 | - | 1.8733 |
| 0.0023 | 20 | - | 1.8726 |
| 0.0034 | 30 | - | 1.8714 |
| 0.0046 | 40 | - | 1.8697 |
| 0.0057 | 50 | - | 1.8675 |
| 0.0069 | 60 | - | 1.8649 |
| 0.0080 | 70 | - | 1.8619 |
| 0.0092 | 80 | - | 1.8584 |
| 0.0103 | 90 | - | 1.8544 |
| 0.0115 | 100 | 3.1046 | 1.8499 |
| 0.0126 | 110 | - | 1.8451 |
| 0.0138 | 120 | - | 1.8399 |
| 0.0149 | 130 | - | 1.8343 |
| 0.0161 | 140 | - | 1.8283 |
| 0.0172 | 150 | - | 1.8223 |
| 0.0184 | 160 | - | 1.8159 |
| 0.0195 | 170 | - | 1.8091 |
| 0.0206 | 180 | - | 1.8016 |
| 0.0218 | 190 | - | 1.7938 |
| 0.0229 | 200 | 3.0303 | 1.7858 |
| 0.0241 | 210 | - | 1.7775 |
| 0.0252 | 220 | - | 1.7693 |
| 0.0264 | 230 | - | 1.7605 |
| 0.0275 | 240 | - | 1.7514 |
| 0.0287 | 250 | - | 1.7417 |
| 0.0298 | 260 | - | 1.7320 |
| 0.0310 | 270 | - | 1.7227 |
| 0.0321 | 280 | - | 1.7134 |
| 0.0333 | 290 | - | 1.7040 |
| 0.0344 | 300 | 2.9459 | 1.6941 |
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.2.0+cu121
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
{"base_model": "google-t5/t5-base", "datasets": ["sentence-transformers/all-nli"], "language": ["en"], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:557850", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "A man is jumping unto his filthy bed.", "sentences": ["A young male is looking at a newspaper while 2 females walks past him.", "The bed is dirty.", "The man is on the moon."]}, {"source_sentence": "A carefully balanced male stands on one foot near a clean ocean beach area.", "sentences": ["A man is ouside near the beach.", "Three policemen patrol the streets on bikes", "A man is sitting on his couch."]}, {"source_sentence": "The man is wearing a blue shirt.", "sentences": ["Near the trashcan the man stood and smoked", "A man in a blue shirt leans on a wall beside a road with a blue van and red car with water in the background.", "A man in a black shirt is playing a guitar."]}, {"source_sentence": "The girls are outdoors.", "sentences": ["Two girls riding on an amusement part ride.", "a guy laughs while doing laundry", "Three girls are standing together in a room, one is listening, one is writing on a wall and the third is talking to them."]}, {"source_sentence": "A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.", "sentences": ["A worker is looking out of a manhole.", "A man is giving a presentation.", "The workers are both inside the manhole."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,384 |
versae/gzipbert_imdb_rpe_250k
|
versae
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roformer",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:versae/gzip-bert",
"base_model:finetune:versae/gzip-bert",
"license:openrail",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-25T14:52:12Z |
2025-01-15T11:20:22+00:00
| 18 | 0 |
---
base_model: versae/gzip-bert
datasets:
- imdb
license: openrail
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: gzipbert_imdb_rpe_250k
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.50952
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gzipbert_imdb_rpe_250k
This model is a fine-tuned version of [versae/gzip-bert](https://huggingface.co/versae/gzip-bert) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0866
- Accuracy: 0.5095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
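A minimal sketch of these settings as Hugging Face `TrainingArguments` (the output directory is a placeholder; evaluation and saving strategies are not recorded in the card and are left at their defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gzipbert_imdb_rpe_250k",  # placeholder path
    learning_rate=2e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas and epsilon above match the transformers defaults, so no override is needed
)
```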
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.003 | 1.0 | 1563 | 5.1727 | 0.5548 |
| 0.0061 | 2.0 | 3126 | 5.7975 | 0.5176 |
| 0.0056 | 3.0 | 4689 | 5.6762 | 0.5107 |
| 0.0019 | 4.0 | 6252 | 6.0355 | 0.5082 |
| 0.0043 | 5.0 | 7815 | 6.0866 | 0.5095 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
{"base_model": "versae/gzip-bert", "datasets": ["imdb"], "license": "openrail", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "gzipbert_imdb_rpe_250k", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.50952, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,385 |
mini1013/master_cate_fi6
|
mini1013
|
text-classification
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] | 2025-01-21T19:51:06Z |
2025-01-21T19:51:28+00:00
| 174 | 0 |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 탑키드 만들기 경찰관 놀이 세트 3인용 가구/인테리어>수예>기타수예
- text: 일상공방 코 손뜨개 6종세트 인디핑크 421114 가구/인테리어>수예>뜨개질>완제품
- text: 퀼트가게6마 반폭롤 면 100 20수 도기 프렌즈 WS 792 원단 가구/인테리어>수예>퀼트/펠트>원단
- text: 펠트 구절초 대 SET 환경꾸미기재료 가구/인테리어>수예>퀼트/펠트>도안
- text: 광목침구 촬영용 빈티지 플라워 코튼 포플린 드레스 셔츠 섬유 린넨 대폭원단 가구/인테리어>수예>자수>원단
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 1.0
name: Accuracy
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 6.0 | <ul><li>'퀼트크로스백 퀼트완제품 가구/인테리어>수예>퀼트/펠트>완제품'</li><li>'현진 글리터토퍼 꽃길만걷자 GFT4-405 152094 가구/인테리어>수예>퀼트/펠트>완제품'</li><li>'스위티퀼트 퀼트 완제품 봄이 필통 파우치 가구/인테리어>수예>퀼트/펠트>완제품'</li></ul> |
| 4.0 | <ul><li>'쇼파 빈티지 요곤 가죽 질감 재킷 배경 가방 부드러운 안감천 레자원단 가구/인테리어>수예>원단'</li><li>'접착 레자 소파 고무 강력 자동차 인테리 인조 가죽 가구/인테리어>수예>원단'</li><li>'핸드메이드 가죽 소재 하운드투스 Y자 인조 PVC클러치 프린트 캐리어 DIY 가구/인테리어>수예>원단'</li></ul> |
| 3.0 | <ul><li>'실밥뜯개 실뜯게 제거기 부자재 니퍼 실따개 바느질 마대바늘 모루인형눈 스킬바늘 재단가위 가구/인테리어>수예>수예용품/부자재'</li><li>'diy 가죽공예 세트 왁스실 가죽바늘 7종 가구/인테리어>수예>수예용품/부자재'</li><li>'단추 썬그립 500세트 똑딱이단추 고급 국산 티단추 스냅 선그립 79컬러 가구/인테리어>수예>수예용품/부자재'</li></ul> |
| 1.0 | <ul><li>'도서 다루마 패턴북 6 가구/인테리어>수예>뜨개질>패키지'</li><li>'뜨개가방손잡이 우드 자연 큰 단단한 나무 잠금 가구/인테리어>수예>뜨개질>완제품'</li><li>'타월 담요 소파 손뜨개 여름 블랭킷 커버 코바늘 가구/인테리어>수예>뜨개질>완제품'</li></ul> |
| 0.0 | <ul><li>'우돌아트 동물이름표 기린 네임텍 스텐실 도안 1243 가구/인테리어>수예>기타수예'</li><li>'모루 공예 재료 부드러운 모루 - 초록 가구/인테리어>수예>기타수예'</li><li>'컬러점토 3개입 아모스 가구/인테리어>수예>기타수예'</li></ul> |
| 5.0 | <ul><li>'OOE 덴마크 꽃실 자수실 510 727 가구/인테리어>수예>자수>실/바늘'</li><li>'데코샌드아트 명화도안 색모래 밤의 별매 중 X 2매입 가구/인테리어>수예>자수>도안'</li><li>'실십자수 동물 왕 사자 가족 대형 십자수 세트 패키지 DIY만들기 30x40 11CT HMA56704 가구/인테리어>수예>자수>패키지'</li></ul> |
| 2.0 | <ul><li>'누니액자 보석십자수 패브릭형 액자 60x90 프리미엄 클래식실버 가구/인테리어>수예>보석십자수'</li><li>'돈그림 황금돈나무 거실 현관 행운의 풍수 금전운 그-D 40x80 가구/인테리어>수예>보석십자수'</li><li>'보석십자수 빗 가구/인테리어>수예>보석십자수'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_fi6")
# Run inference
preds = model("탑키드 만들기 경찰관 놀이 세트 3인용 가구/인테리어>수예>기타수예")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 2 | 8.8714 | 24 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 70 |
| 1.0 | 70 |
| 2.0 | 70 |
| 3.0 | 70 |
| 4.0 | 70 |
| 5.0 | 70 |
| 6.0 | 70 |
### Training Hyperparameters
- batch_size: (256, 256)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 50
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
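A rough sketch of how these hyperparameters could be passed to the SetFit trainer (the tiny inline dataset is a placeholder so the snippet runs; only the argument values above come from the card):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

model = SetFitModel.from_pretrained("mini1013/master_domain")

# Placeholder few-shot data; the real training set uses 70 samples per class
train_dataset = Dataset.from_dict({
    "text": ["모루 공예 재료 가구/인테리어>수예>기타수예", "보석십자수 빗 가구/인테리어>수예>보석십자수"],
    "label": [0, 2],
})

args = TrainingArguments(
    batch_size=(256, 256),
    num_epochs=(30, 30),
    sampling_strategy="oversampling",
    num_iterations=50,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```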
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0104 | 1 | 0.5007 | - |
| 0.5208 | 50 | 0.4969 | - |
| 1.0417 | 100 | 0.4332 | - |
| 1.5625 | 150 | 0.0551 | - |
| 2.0833 | 200 | 0.0001 | - |
| 2.6042 | 250 | 0.0 | - |
| 3.125 | 300 | 0.0 | - |
| 3.6458 | 350 | 0.0 | - |
| 4.1667 | 400 | 0.0 | - |
| 4.6875 | 450 | 0.0 | - |
| 5.2083 | 500 | 0.0 | - |
| 5.7292 | 550 | 0.0 | - |
| 6.25 | 600 | 0.0 | - |
| 6.7708 | 650 | 0.0 | - |
| 7.2917 | 700 | 0.0 | - |
| 7.8125 | 750 | 0.0 | - |
| 8.3333 | 800 | 0.0 | - |
| 8.8542 | 850 | 0.0 | - |
| 9.375 | 900 | 0.0 | - |
| 9.8958 | 950 | 0.0 | - |
| 10.4167 | 1000 | 0.0 | - |
| 10.9375 | 1050 | 0.0 | - |
| 11.4583 | 1100 | 0.0 | - |
| 11.9792 | 1150 | 0.0 | - |
| 12.5 | 1200 | 0.0 | - |
| 13.0208 | 1250 | 0.0 | - |
| 13.5417 | 1300 | 0.0 | - |
| 14.0625 | 1350 | 0.0 | - |
| 14.5833 | 1400 | 0.0 | - |
| 15.1042 | 1450 | 0.0 | - |
| 15.625 | 1500 | 0.0 | - |
| 16.1458 | 1550 | 0.0 | - |
| 16.6667 | 1600 | 0.0 | - |
| 17.1875 | 1650 | 0.0 | - |
| 17.7083 | 1700 | 0.0 | - |
| 18.2292 | 1750 | 0.0 | - |
| 18.75 | 1800 | 0.0 | - |
| 19.2708 | 1850 | 0.0 | - |
| 19.7917 | 1900 | 0.0 | - |
| 20.3125 | 1950 | 0.0 | - |
| 20.8333 | 2000 | 0.0 | - |
| 21.3542 | 2050 | 0.0 | - |
| 21.875 | 2100 | 0.0 | - |
| 22.3958 | 2150 | 0.0 | - |
| 22.9167 | 2200 | 0.0 | - |
| 23.4375 | 2250 | 0.0 | - |
| 23.9583 | 2300 | 0.0 | - |
| 24.4792 | 2350 | 0.0 | - |
| 25.0 | 2400 | 0.0 | - |
| 25.5208 | 2450 | 0.0 | - |
| 26.0417 | 2500 | 0.0 | - |
| 26.5625 | 2550 | 0.0 | - |
| 27.0833 | 2600 | 0.0 | - |
| 27.6042 | 2650 | 0.0 | - |
| 28.125 | 2700 | 0.0 | - |
| 28.6458 | 2750 | 0.0 | - |
| 29.1667 | 2800 | 0.0 | - |
| 29.6875 | 2850 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
{"base_model": "mini1013/master_domain", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "탑키드 만들기 경찰관 놀이 세트 3인용 가구/인테리어>수예>기타수예"}, {"text": "일상공방 코 손뜨개 6종세트 인디핑크 421114 가구/인테리어>수예>뜨개질>완제품"}, {"text": "퀼트가게6마 반폭롤 면 100 20수 도기 프렌즈 WS 792 원단 가구/인테리어>수예>퀼트/펠트>원단"}, {"text": "펠트 구절초 대 SET 환경꾸미기재료 가구/인테리어>수예>퀼트/펠트>도안"}, {"text": "광목침구 촬영용 빈티지 플라워 코튼 포플린 드레스 셔츠 섬유 린넨 대폭원단 가구/인테리어>수예>자수>원단"}], "inference": true, "model-index": [{"name": "SetFit with mini1013/master_domain", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,386 |
nizamovtimur/multilingual-e5-large-videotags
|
nizamovtimur
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1797",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-28T06:32:35Z |
2024-09-28T06:49:06+00:00
| 15 | 0 |
---
base_model: intfloat/multilingual-e5-large
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1797
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: МультНайтШоу. Выпуск 1. В гостях Джо Байден, поет Мультимати МультНайтШоу
- юмористическое нарисованное вечернее шоу, где обсуждаются актуальные новости
страны и мира, в гости приходят герои настоящего, прошлого и даже будущего.В первом
выпуске мы обсудим актуальные новости недели, посмотрим анимационную версию нового
сериала «Дом Дракона», пригласим в нашу студию небезызвестного президента США
Джо Байдена, увидим новый клип от Мультимати и многое другое.
sentences:
- 'Информационные технологии: Искусственный интеллект'
- 'Образование: Высшее образование'
- 'Массовая культура: Юмор и сатира, Новости и политика: Международные новости'
- source_sentence: СarJitsu. 2 сезон, 4 серия. Сушист vs Оксана «Новиков» CarJitsu
— бои в формате POP MMA, где в вместо ринга бойцы сражаются в салоне автомобиля.
В этом выпуске встретились эпатаж и мастерство! Дракон и Японский самурай, Андрей
«Сушист» Мешков и Александр «Оксана» Новиков! Узнайте, кому удалось порулить этой
машиной! Комментирует YURI THE PROFESSIONAL 18+
sentences:
- 'Книги и литература: Комиксы и графические романы'
- 'Спорт: Борьба, Массовая культура'
- 'Религия и духовность: астрология, События и достопримечательности: Комедия и
стендап'
- source_sentence: ВЫПУСК №4 НЕУДОБНЫЙ СТЕНДАП Новогодний выпуск Неудобного Стендапа
уже на канале! На этот раз за 5000 рублей сразятся Иван Бобровников, Аля Кокушкина
и Егор Константинов. Комики расскажут свои шутки посетителям парка ВДНХ, и выступят
в гончарной мастерской, в раздевалке катка и в аниме-магазине. Кто заберёт бабло,
а кто уйдёт ни с чем определят случайные зрители.
sentences:
- Массовая культура, Карьера, Изобразительное искусство
- Хобби и интересы, Транспорт
- 'Массовая культура: Юмор и сатира'
- source_sentence: МАКСИМ НАРОДНЫЙ Выпуск №65 ГОТОВИМ С АКТЁРОМ СЕРИАЛА «САЛЮТ, НАЧАЛЬНИК»
ДЕНИСОМ ЗАЙНУЛЛИНЫМ В новом выпуске у ведущего канала Максима в гостях актёр театра
и кино Денис Зайнуллин. Максим и Денис готовят бешбармак. Это просто и вкусное
блюдо, которое позволит удивить всех гостей. По вопросам рекламы пишите на email
sentences:
- 'Еда и напитки: Кулинария, Массовая культура'
- Массовая культура, Фильмы и анимация
- 'Бизнес и финансы: Бизнес: Бизнес в ИТ'
- source_sentence: Роман Юнусов и популярный озвучер Карен Арутюнов попали в клуб
богачей В новом выпуске шоу «Спортивный Интерес» Рома Юнусов и Карен Арутюнов
почувствуют себя богатеями или даже мафиози. А всё потому, что им предстоит освоить
вид спорта, куда без членства в клубе не попасть. Да, мы отправили героев на поле
для гольфа. Солнце, трава, песок, клюшка - вот неполный список того, что мешало
ребятам бить по мячу. Ну, а кто выполнил драйв и прошёл лунку быстрее оппонента,
ты узнаешь, посмотрев выпуск до конца.
sentences:
- 'Семья и отношения: Уход за пожилыми людьми'
- Массовая культура, Спорт
- 'Транспорт, Спорт: Автогонки, События и достопримечательности: Спортивные события'
---
# SentenceTransformer based on intfloat/multilingual-e5-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) <!-- at revision ab10c1a7f42e74530fe7ae5be82e6d4f11a719eb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("nizamovtimur/multilingual-e5-large-videotags")
# Run inference
sentences = [
'Роман Юнусов и популярный озвучер Карен Арутюнов попали в клуб богачей В новом выпуске шоу «Спортивный Интерес» Рома Юнусов и Карен Арутюнов почувствуют себя богатеями или даже мафиози. А всё потому, что им предстоит освоить вид спорта, куда без членства в клубе не попасть. Да, мы отправили героев на поле для гольфа. Солнце, трава, песок, клюшка - вот неполный список того, что мешало ребятам бить по мячу. Ну, а кто выполнил драйв и прошёл лунку быстрее оппонента, ты узнаешь, посмотрев выпуск до конца.',
'Массовая культура, Спорт',
'Транспорт, Спорт: Автогонки, События и достопримечательности: Спортивные события',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
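Because this checkpoint was tuned for video tagging, a natural follow-up is ranking candidate tag strings against a fresh description; a minimal sketch (the description and the candidate list are invented):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nizamovtimur/multilingual-e5-large-videotags")
description = "Новый выпуск кулинарного шоу: готовим плов с известным гостем."  # invented
candidates = ["Еда и напитки: Кулинария", "Спорт", "Новости и политика"]
scores = model.similarity(model.encode([description]), model.encode(candidates))
print(candidates[int(scores.argmax())])  # tag with the highest cosine similarity
```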
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,797 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 117.7 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 14.94 tokens</li><li>max: 36 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| <code>Пример описания по заданному тегу:<br><br>Техно-Гид Выпуск 16 Новые горизонты домашнего кинотеатра! Сегодня мы погрузимся в мир высоких технологий и рассмотрим, как выбрать идеальную систему для вашего дома. От стерео до многоканального звука – мы разберемся, что действительно важно при покупке. Узнаем о последних новинках рынка домашних кинотеатров и научимся создавать атмосферу настоящего кинозала прямо у себя дома. Подписывайтесь на наш канал, чтобы не пропустить интересные обзоры и полезные советы по выбору бытовой электроники!</code> | <code>Информационные технологии: Бытовая электроника: Домашние развлекательные системы</code> |
| <code>Артмеханика. Игра "Угадай технологию". Игра "Угадай технологию".</code> | <code>Информационные технологии, Массовая культура</code> |
| <code>Лесенка знаний: мастерство I 2 серия I Рисовальщик В гараже Артура Лиза рисует портрет Макса, только он получается … не похожим на Макса! Надо научиться рисовать, узнав все тонкости художественного искусства. Для этого Артур отправляет Лизу и Макса в прошлое, чтобы они узнали все самые важные секреты изобразительного искусства от древности до наших времен.</code> | <code>Фильмы и анимация: Семейные и детские фильмы, Фильмы и анимация: Фильмы и анимация </code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
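For reference, a hedged sketch of a finetuning loop with this loss; the training pair is a made-up stand-in for the description/tag pairs shown above:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

base = SentenceTransformer("intfloat/multilingual-e5-large")
train_examples = [
    InputExample(texts=["описание видео ...", "Массовая культура, Спорт"]),  # stand-in pair
]
loader = DataLoader(train_examples, shuffle=True, batch_size=8)
# cosine similarity with scale=20.0 is the default for this loss
loss = losses.MultipleNegativesRankingLoss(base, scale=20.0)
base.fit(train_objectives=[(loader, loss)], epochs=10)
```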
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 2.2222 | 500 | 0.5639 |
| 4.4444 | 1000 | 0.1195 |
| 6.6667 | 1500 | 0.0818 |
| 8.8889 | 2000 | 0.0728 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.1
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on intfloat/multilingual-e5-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) <!-- at revision ab10c1a7f42e74530fe7ae5be82e6d4f11a719eb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("nizamovtimur/multilingual-e5-large-videotags")
# Run inference
sentences = [
'Роман Юнусов и популярный озвучер Карен Арутюнов попали в клуб богачей В новом выпуске шоу «Спортивный Интерес» Рома Юнусов и Карен Арутюнов почувствуют себя богатеями или даже мафиози. А всё потому, что им предстоит освоить вид спорта, куда без членства в клубе не попасть. Да, мы отправили героев на поле для гольфа. Солнце, трава, песок, клюшка - вот неполный список того, что мешало ребятам бить по мячу. Ну, а кто выполнил драйв и прошёл лунку быстрее оппонента, ты узнаешь, посмотрев выпуск до конца.',
'Массовая культура, Спорт',
'Транспорт, Спорт: Автогонки, События и достопримечательности: Спортивные события',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,797 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 117.7 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 14.94 tokens</li><li>max: 36 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| <code>Пример описания по заданному тегу:<br><br>Техно-Гид Выпуск 16 Новые горизонты домашнего кинотеатра! Сегодня мы погрузимся в мир высоких технологий и рассмотрим, как выбрать идеальную систему для вашего дома. От стерео до многоканального звука – мы разберемся, что действительно важно при покупке. Узнаем о последних новинках рынка домашних кинотеатров и научимся создавать атмосферу настоящего кинозала прямо у себя дома. Подписывайтесь на наш канал, чтобы не пропустить интересные обзоры и полезные советы по выбору бытовой электроники!</code> | <code>Информационные технологии: Бытовая электроника: Домашние развлекательные системы</code> |
| <code>Артмеханика. Игра "Угадай технологию". Игра "Угадай технологию".</code> | <code>Информационные технологии, Массовая культура</code> |
| <code>Лесенка знаний: мастерство I 2 серия I Рисовальщик В гараже Артура Лиза рисует портрет Макса, только он получается … не похожим на Макса! Надо научиться рисовать, узнав все тонкости художественного искусства. Для этого Артур отправляет Лизу и Макса в прошлое, чтобы они узнали все самые важные секреты изобразительного искусства от древности до наших времен.</code> | <code>Фильмы и анимация: Семейные и детские фильмы, Фильмы и анимация: Фильмы и анимация </code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 2.2222 | 500 | 0.5639 |
| 4.4444 | 1000 | 0.1195 |
| 6.6667 | 1500 | 0.0818 |
| 8.8889 | 2000 | 0.0728 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.1
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "intfloat/multilingual-e5-large", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1797", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "МультНайтШоу. Выпуск 1. В гостях Джо Байден, поет Мультимати МультНайтШоу - юмористическое нарисованное вечернее шоу, где обсуждаются актуальные новости страны и мира, в гости приходят герои настоящего, прошлого и даже будущего.В первом выпуске мы обсудим актуальные новости недели, посмотрим анимационную версию нового сериала «Дом Дракона», пригласим в нашу студию небезызвестного президента США Джо Байдена, увидим новый клип от Мультимати и многое другое.", "sentences": ["Информационные технологии: Искусственный интеллект", "Образование: Высшее образование", "Массовая культура: Юмор и сатира, Новости и политика: Международные новости"]}, {"source_sentence": "СarJitsu. 2 сезон, 4 серия. Сушист vs Оксана «Новиков» CarJitsu — бои в формате POP MMA, где в вместо ринга бойцы сражаются в салоне автомобиля. В этом выпуске встретились эпатаж и мастерство! Дракон и Японский самурай, Андрей «Сушист» Мешков и Александр «Оксана» Новиков! Узнайте, кому удалось порулить этой машиной! Комментирует YURI THE PROFESSIONAL 18+", "sentences": ["Книги и литература: Комиксы и графические романы", "Спорт: Борьба, Массовая культура", "Религия и духовность: астрология, События и достопримечательности: Комедия и стендап"]}, {"source_sentence": "ВЫПУСК №4 НЕУДОБНЫЙ СТЕНДАП Новогодний выпуск Неудобного Стендапа уже на канале! На этот раз за 5000 рублей сразятся Иван Бобровников, Аля Кокушкина и Егор Константинов. Комики расскажут свои шутки посетителям парка ВДНХ, и выступят в гончарной мастерской, в раздевалке катка и в аниме-магазине. Кто заберёт бабло, а кто уйдёт ни с чем определят случайные зрители.", "sentences": ["Массовая культура, Карьера, Изобразительное искусство", "Хобби и интересы, Транспорт", "Массовая культура: Юмор и сатира"]}, {"source_sentence": "МАКСИМ НАРОДНЫЙ Выпуск №65 ГОТОВИМ С АКТЁРОМ СЕРИАЛА «САЛЮТ, НАЧАЛЬНИК» ДЕНИСОМ ЗАЙНУЛЛИНЫМ В новом выпуске у ведущего канала Максима в гостях актёр театра и кино Денис Зайнуллин. Максим и Денис готовят бешбармак. Это просто и вкусное блюдо, которое позволит удивить всех гостей. По вопросам рекламы пишите на email", "sentences": ["Еда и напитки: Кулинария, Массовая культура", "Массовая культура, Фильмы и анимация", "Бизнес и финансы: Бизнес: Бизнес в ИТ"]}, {"source_sentence": "Роман Юнусов и популярный озвучер Карен Арутюнов попали в клуб богачей В новом выпуске шоу «Спортивный Интерес» Рома Юнусов и Карен Арутюнов почувствуют себя богатеями или даже мафиози. А всё потому, что им предстоит освоить вид спорта, куда без членства в клубе не попасть. Да, мы отправили героев на поле для гольфа. Солнце, трава, песок, клюшка - вот неполный список того, что мешало ребятам бить по мячу. Ну, а кто выполнил драйв и прошёл лунку быстрее оппонента, ты узнаешь, посмотрев выпуск до конца.", "sentences": ["Семья и отношения: Уход за пожилыми людьми", "Массовая культура, Спорт", "Транспорт, Спорт: Автогонки, События и достопримечательности: Спортивные события"]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,387 |
EP9/bert2bert_shared-spanish-finetuned-summarization-finetuned-xsum
|
EP9
|
text2text-generation
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-11-20T23:09:59Z |
2022-11-21T16:12:27+00:00
| 24 | 0 |
---
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: bert2bert_shared-spanish-finetuned-summarization-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert2bert_shared-spanish-finetuned-summarization-finetuned-xsum
This model is a fine-tuned version of [mrm8488/bert2bert_shared-spanish-finetuned-summarization](https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-summarization) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3690
- Rouge1: 50.02
- Rouge2: 35.706
- Rougel: 46.6253
- Rougelsum: 46.6412
- Gen Len: 22.1176
## Model description
More information needed
## Intended uses & limitations
More information needed
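Until the card is completed, here is a minimal inference sketch, assuming the standard `summarization` pipeline handles this encoder-decoder checkpoint; the input text is a placeholder:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="EP9/bert2bert_shared-spanish-finetuned-summarization-finetuned-xsum",
)
texto = "Texto de una noticia en español que queremos resumir..."  # placeholder
print(summarizer(texto, max_length=64)[0]["summary_text"])
```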
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.5969 | 1.0 | 3090 | 2.4559 | 49.4282 | 35.2705 | 46.095 | 46.0994 | 22.5422 |
| 2.3318 | 2.0 | 6180 | 2.3690 | 50.02 | 35.706 | 46.6253 | 46.6412 | 22.1176 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert2bert_shared-spanish-finetuned-summarization-finetuned-xsum
This model is a fine-tuned version of [mrm8488/bert2bert_shared-spanish-finetuned-summarization](https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-summarization) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3690
- Rouge1: 50.02
- Rouge2: 35.706
- Rougel: 46.6253
- Rougelsum: 46.6412
- Gen Len: 22.1176
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.5969 | 1.0 | 3090 | 2.4559 | 49.4282 | 35.2705 | 46.095 | 46.0994 | 22.5422 |
| 2.3318 | 2.0 | 6180 | 2.3690 | 50.02 | 35.706 | 46.6253 | 46.6412 | 22.1176 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
{"metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert2bert_shared-spanish-finetuned-summarization-finetuned-xsum", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,388 |
eslamxm/mt5-base-finetuned-Spanish
|
eslamxm
|
summarization
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"es",
"spanish",
"abstractive summarization",
"generated_from_trainer",
"dataset:wiki_lingua",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-06-14T18:45:17Z |
2022-06-15T05:13:08+00:00
| 353 | 2 |
---
datasets:
- wiki_lingua
license: apache-2.0
tags:
- summarization
- mt5
- es
- spanish
- abstractive summarization
- generated_from_trainer
model-index:
- name: mt5-base-finetuned-Spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-Spanish
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1727
- Rouge-1: 28.11
- Rouge-2: 12.09
- Rouge-l: 24.62
- Gen Len: 18.73
- Bertscore: 72.25
## Model description
More information needed
## Intended uses & limitations
More information needed
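Pending more detail, a hedged sketch using the raw generate API (the input sentence is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "eslamxm/mt5-base-finetuned-Spanish"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
inputs = tok("Artículo en español que queremos resumir...", return_tensors="pt", truncation=True)
ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tok.decode(ids[0], skip_special_tokens=True))
```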
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-Spanish
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1727
- Rouge-1: 28.11
- Rouge-2: 12.09
- Rouge-l: 24.62
- Gen Len: 18.73
- Bertscore: 72.25
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
{"datasets": ["wiki_lingua"], "license": "apache-2.0", "tags": ["summarization", "mt5", "es", "spanish", "abstractive summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-base-finetuned-Spanish", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,389 |
bergalli/distilbert-base-uncased-finetuned-cola
|
bergalli
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-26T19:18:09Z |
2023-02-26T21:07:04+00:00
| 10 | 0 |
---
datasets:
- glue
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- type: matthews_correlation
value: 0.5373623427702773
name: Matthews Correlation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7930
- Matthews Correlation: 0.5374
## Model description
More information needed
## Intended uses & limitations
More information needed
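As a placeholder, a hedged usage sketch for CoLA-style acceptability classification; the label-to-meaning mapping is an assumption, so check `model.config.id2label`:
```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="bergalli/distilbert-base-uncased-finetuned-cola")
print(clf("The boy quickly the ball threw."))
# e.g. [{'label': 'LABEL_0', 'score': ...}]; LABEL_0/LABEL_1 presumably
# correspond to unacceptable/acceptable, but verify via id2label
```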
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.524 | 1.0 | 535 | 0.5474 | 0.3948 |
| 0.3484 | 2.0 | 1070 | 0.5186 | 0.4738 |
| 0.2384 | 3.0 | 1605 | 0.5594 | 0.5268 |
| 0.1759 | 4.0 | 2140 | 0.7930 | 0.5374 |
| 0.1229 | 5.0 | 2675 | 0.8384 | 0.5314 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7930
- Matthews Correlation: 0.5374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.524 | 1.0 | 535 | 0.5474 | 0.3948 |
| 0.3484 | 2.0 | 1070 | 0.5186 | 0.4738 |
| 0.2384 | 3.0 | 1605 | 0.5594 | 0.5268 |
| 0.1759 | 4.0 | 2140 | 0.7930 | 0.5374 |
| 0.1229 | 5.0 | 2675 | 0.8384 | 0.5314 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5373623427702773, "name": "Matthews Correlation"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,390 |
mwong/roberta-base-climate-evidence-related
|
mwong
|
text-classification
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"text classification",
"fact checking",
"en",
"dataset:mwong/fever-evidence-related",
"dataset:mwong/climate-evidence-related",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-04-20T12:52:55Z |
2022-06-24T03:34:04+00:00
| 97 | 1 |
---
datasets:
- mwong/fever-evidence-related
- mwong/climate-evidence-related
language: en
license: mit
metrics: f1
tags:
- text classification
- fact checking
widget:
- text: Earth’s changing climate is a critical issue and poses the risk of significant
environmental, social and economic disruptions around the globe.</s></s>Because
of fears of climate change and adverse effects of drilling explosions and oil
spills in the Gulf of Mexico, legislation has been considered, and governmental
regulations and orders have been issued, which, combined with the local economic
and employment conditions caused by both, could materially adversely impact the
oil and gas industries and the economic health of areas in which a significant
number of our stores are located.
example_title: Evidence related to claim
---
# ClimateRoberta
ClimateRoberta is a classifier that predicts whether climate-related evidence is relevant to a query claim. The model achieved an F1 score of 80.13% on the test dataset "mwong/climate-evidence-related". Starting from a pretrained roberta-base model, the classifier head was trained on the Fever dataset and adapted to the climate domain using the ClimateFever dataset.
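A hedged usage sketch; the `</s></s>`-separated claim/evidence format follows the widget example in the metadata:
```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="mwong/roberta-base-climate-evidence-related")
claim = "Earth's changing climate poses significant risks around the globe."
evidence = "Legislation has been considered because of fears of climate change."
print(clf(f"{claim}</s></s>{evidence}"))  # relatedness label with its score
```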
| null |
Non_BioNLP
|
# ClimateRoberta
ClimateRoberta is a classifier that predicts whether climate-related evidence is relevant to a query claim. The model achieved an F1 score of 80.13% on the test dataset "mwong/climate-evidence-related". Starting from a pretrained roberta-base model, the classifier head was trained on the Fever dataset and adapted to the climate domain using the ClimateFever dataset.
|
{"datasets": ["mwong/fever-evidence-related", "mwong/climate-evidence-related"], "language": "en", "license": "mit", "metrics": "f1", "tags": ["text classification", "fact checking"], "widget": [{"text": "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located.", "example_title": "Evidence related to claim"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,391 |
bardsai/twitter-emotion-pl-fast
|
bardsai
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"twitter",
"pl",
"dataset:datasets/tweet_eval",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-19T11:04:43Z |
2023-09-18T08:24:50+00:00
| 21 | 0 |
---
datasets:
- datasets/tweet_eval
language: pl
metrics:
- f1
- accuracy
- precision
- recall
tags:
- text-classification
- twitter
widget:
- text: Nigdy przegrana nie sprawiła mi takiej radości. Szczęście i Opatrzność mają
znaczenie Gratuluje @pzpn_pl
example_title: Example 1
- text: Osoby z Ukrainy zapłacą za życie w centrach pomocy? Sprzeczne prawem UE, niehumanitarne,
okrutne.
example_title: Example 2
---
# Twitter emotion PL (fast)
Twitter emotion PL (fast) is a model based on [distiluse](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) for analyzing the emotion of Polish Twitter posts. It was trained on the translated version of [TweetEval](https://www.researchgate.net/publication/347233661_TweetEval_Unified_Benchmark_and_Comparative_Evaluation_for_Tweet_Classification) (Barbieri et al., 2020) for 10 epochs on a single RTX 3090 GPU.
The model returns one of four labels: joy, optimism, sadness and anger.
## How to use
You can use this model directly with a pipeline for text classification:
```python
from transformers import pipeline
nlp = pipeline("text-classification", model="bardsai/twitter-emotion-pl-fast")
nlp("Nigdy przegrana nie sprawiła mi takiej radości. Szczęście i Opatrzność mają znaczenie Gratuluje @pzpn_pl")
```
```bash
[{'label': 'joy', 'score': 0.7068771123886108}]
```
## Performance
| Metric | Value |
| --- | ----------- |
| f1 macro | 0.692 |
| precision macro | 0.700 |
| recall macro | 0.687 |
| accuracy | 0.737 |
| samples per second | 255.2 |
(Performance was evaluated on an RTX 3090 GPU.)
## Changelog
- 2023-07-19: Initial release
## About bards.ai
At bards.ai, we focus on providing machine learning expertise and skills to our partners, particularly in the areas of NLP, machine vision and time series analysis. Our team is located in Wroclaw, Poland. Please visit our website for more information: [bards.ai](https://bards.ai/)
Let us know if you use our model :). Also, if you need any help, feel free to contact us at [email protected]
| null |
Non_BioNLP
|
# Twitter emotion PL (fast)
Twitter emotion PL (fast) is a model based on [distiluse](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) for analyzing the emotion of Polish Twitter posts. It was trained on the translated version of [TweetEval](https://www.researchgate.net/publication/347233661_TweetEval_Unified_Benchmark_and_Comparative_Evaluation_for_Tweet_Classification) (Barbieri et al., 2020) for 10 epochs on a single RTX 3090 GPU.
The model returns one of four labels: joy, optimism, sadness and anger.
## How to use
You can use this model directly with a pipeline for text classification:
```python
from transformers import pipeline
nlp = pipeline("text-classification", model="bardsai/twitter-emotion-pl-fast")
nlp("Nigdy przegrana nie sprawiła mi takiej radości. Szczęście i Opatrzność mają znaczenie Gratuluje @pzpn_pl")
```
```bash
[{'label': 'joy', 'score': 0.7068771123886108}]
```
## Performance
| Metric | Value |
| --- | ----------- |
| f1 macro | 0.692 |
| precision macro | 0.700 |
| recall macro | 0.687 |
| accuracy | 0.737 |
| samples per second | 255.2 |
(Performance was evaluated on an RTX 3090 GPU.)
## Changelog
- 2023-07-19: Initial release
## About bards.ai
At bards.ai, we focus on providing machine learning expertise and skills to our partners, particularly in the areas of NLP, machine vision and time series analysis. Our team is located in Wroclaw, Poland. Please visit our website for more information: [bards.ai](https://bards.ai/)
Let us know if you use our model :). Also, if you need any help, feel free to contact us at [email protected]
|
{"datasets": ["datasets/tweet_eval"], "language": "pl", "metrics": ["f1", "accuracy", "precision", "recall"], "tags": ["text-classification", "twitter"], "widget": [{"text": "Nigdy przegrana nie sprawiła mi takiej radości. Szczęście i Opatrzność mają znaczenie Gratuluje @pzpn_pl", "example_title": "Example 1"}, {"text": "Osoby z Ukrainy zapłacą za życie w centrach pomocy? Sprzeczne prawem UE, niehumanitarne, okrutne.", "example_title": "Example 2"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,392 |
pinzhenchen/sft-lora-en-bloom-1b1
|
pinzhenchen
| null |
[
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 2024-03-05T23:46:11Z |
2024-03-05T23:46:14+00:00
| 0 | 0 |
---
language:
- en
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains a base LLM instruction-tuned (SFT) with LoRA, used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
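A minimal loading sketch, assuming a standard PEFT LoRA checkpoint layout; see the repository above for the exact prompt template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# load the base model, then attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b1")
model = PeftModel.from_pretrained(base, "pinzhenchen/sft-lora-en-bloom-1b1")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b1")
```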
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
| null |
Non_BioNLP
|
### Model Description
This HF repository contains a base LLM instruction-tuned (SFT) with LoRA, used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
{"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,393 |
Realgon/N_bert_imdb_padding80model
|
Realgon
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-12-14T01:21:25Z |
2023-12-14T03:46:30+00:00
| 5 | 0 |
---
base_model: bert-base-uncased
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: N_bert_imdb_padding80model
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.93712
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# N_bert_imdb_padding80model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6996
- Accuracy: 0.9371
## Model description
More information needed
## Intended uses & limitations
More information needed
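In lieu of a description, a hedged sentiment-classification sketch (label names are assumptions; inspect `model.config.id2label`):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Realgon/N_bert_imdb_padding80model")
print(clf("A beautifully shot film with a hollow script."))
# returns the positive/negative IMDB label with a confidence score (assumed mapping)
```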
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
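For context, these settings correspond roughly to the following `TrainingArguments` (a sketch, not the original training script; the output directory is illustrative):
```python
from transformers import TrainingArguments

# The hyperparameters above expressed as Hugging Face TrainingArguments.
training_args = TrainingArguments(
    output_dir="N_bert_imdb_padding80model",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```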
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2198 | 1.0 | 1563 | 0.2296 | 0.9239 |
| 0.1582 | 2.0 | 3126 | 0.2158 | 0.9298 |
| 0.0896 | 3.0 | 4689 | 0.3067 | 0.9335 |
| 0.0635 | 4.0 | 6252 | 0.3594 | 0.9304 |
| 0.0344 | 5.0 | 7815 | 0.3923 | 0.9299 |
| 0.0315 | 6.0 | 9378 | 0.4625 | 0.9343 |
| 0.0196 | 7.0 | 10941 | 0.4629 | 0.9338 |
| 0.0205 | 8.0 | 12504 | 0.5247 | 0.9252 |
| 0.0161 | 9.0 | 14067 | 0.4549 | 0.9326 |
| 0.0105 | 10.0 | 15630 | 0.4703 | 0.9323 |
| 0.0049 | 11.0 | 17193 | 0.6050 | 0.9286 |
| 0.0088 | 12.0 | 18756 | 0.5788 | 0.9353 |
| 0.0043 | 13.0 | 20319 | 0.5495 | 0.9348 |
| 0.0062 | 14.0 | 21882 | 0.6886 | 0.9307 |
| 0.0019 | 15.0 | 23445 | 0.6479 | 0.9348 |
| 0.0035 | 16.0 | 25008 | 0.6449 | 0.9360 |
| 0.0008 | 17.0 | 26571 | 0.7024 | 0.9349 |
| 0.0003 | 18.0 | 28134 | 0.7011 | 0.9370 |
| 0.001 | 19.0 | 29697 | 0.6921 | 0.9372 |
| 0.0 | 20.0 | 31260 | 0.6996 | 0.9371 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
|
{"base_model": "bert-base-uncased", "datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "N_bert_imdb_padding80model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.93712, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,394 |
LoneStriker/airoboros-m-7b-3.1-6.0bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:jondurbin/airoboros-3.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-10-15T21:58:57Z |
2023-10-15T21:59:21+00:00
| 9 | 0 |
---
datasets:
- jondurbin/airoboros-3.1
license: apache-2.0
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).
#### IMPORTANT NOTE - llama-2 chat format!!!
The airoboros 3.1 models use llama-2 chat format, rather than vicuna style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and/or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general-purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and you can use the huggingface tokenizer's `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-m-7b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
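A follow-on generation sketch reusing `tokenizer` and `chat` from the snippet above (the sampling settings are illustrative; as the tips below stress, keep the temperature very low for MathJSON and closed-context tasks):
```python
# Generate a response from the templated chat with a low temperature.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('jondurbin/airoboros-m-7b-3.1', device_map="auto")
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.1)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```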
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
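A parsing sketch for that step (the helper name is mine, not part of the model or dataset tooling):
```python
# Extract and parse the MathJSON expression from a model response.
import json
import re

def extract_mathjson(response: str):
    match = re.search(r"<mathjson>(.*?)</mathjson>", response, re.DOTALL)
    if match is None:
        raise ValueError("no <mathjson> block found in the response")
    return json.loads(match.group(1))

expr = extract_mathjson('<mathjson>["Multiply", "Pi", ["Power", 17.2456, 2]]</mathjson>')
print(expr)  # ['Multiply', 'Pi', ['Power', 17.2456, 2]]
```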
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and to use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure that, if the context is completely unrelated, the model doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
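A small builder sketch for this format (the helper is illustrative, not part of the dataset tooling):
```python
# Assemble a closed-context prompt from (metadata, text) blocks using the
# delimiters described above.
def closed_context_prompt(blocks, instruction):
    parts = []
    for metadata, text in blocks:
        meta_lines = "\n".join(f"{k}: {v}" for k, v in metadata.items())
        parts.append(f"BEGININPUT\nBEGINCONTEXT\n{meta_lines}\nENDCONTEXT\n{text}\nENDINPUT")
    parts.append(f"BEGININSTRUCTION\n{instruction}\nENDINSTRUCTION")
    return "\n".join(parts)

print(closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"}, "Blueberries are now green.")],
    "What color are blueberries? Source?",
))
```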
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
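On the caller side, a dispatch sketch (the file-reading implementation is an illustrative stand-in for whatever actually backs `file_analytics`):
```python
import json

def count_occurrences(keyword, path="notes.txt"):
    # Illustrative stand-in implementation.
    with open(path) as f:
        return f.read().count(keyword)

def dispatch(response_text):
    # Parse the model's JSON function call and route it to an implementation.
    call = json.loads(response_text)
    params = call["params"]
    if call["function"] == "file_analytics" and params["action"] == "count_occurrences":
        return count_occurrences(params["filters"]["keyword"])
    raise ValueError("unsupported function call: " + call["function"])
```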
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re

import requests

def inject_context(input_text, **context):
    # Swap each :evidenceN: reference for the value gathered in an earlier step.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    ...  # search via DuckDuckGo using search_string and return the text content

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    ...  # call the model with prompt and return its output

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        # Match lines like ":evidence0: = DuckDuckGo[some input]".
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
| null |
Non_BioNLP
|
|
{"datasets": ["jondurbin/airoboros-3.1"], "license": "apache-2.0"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 45,395 |
Helsinki-NLP/opus-mt-tc-bible-big-fiu-fra_ita_por_spa
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc-bible",
"chm",
"es",
"et",
"fi",
"fkv",
"fr",
"hu",
"it",
"krl",
"liv",
"mrj",
"myv",
"pt",
"se",
"udm",
"vep",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-08T08:01:21Z |
2024-10-08T08:01:36+00:00
| 13 | 0 |
---
language:
- chm
- es
- et
- fi
- fkv
- fr
- hu
- it
- krl
- liv
- mrj
- myv
- pt
- se
- udm
- vep
library_name: transformers
license: apache-2.0
tags:
- translation
- opus-mt-tc-bible
model-index:
- name: opus-mt-tc-bible-big-fiu-fra_ita_por_spa
results:
- task:
type: translation
name: Translation multi-multi
dataset:
name: tatoeba-test-v2020-07-28-v2023-09-26
type: tatoeba_mt
args: multi-multi
metrics:
- type: bleu
value: 48.5
name: BLEU
- type: chrf
value: 0.67829
name: chr-F
---
# opus-mt-tc-bible-big-fiu-fra_ita_por_spa
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Finno-Ugrian languages (fiu) to French, Italian, Portuguese and Spanish (fra+ita+por+spa).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-08-17
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): chm est fin fkv hun krl liv mrj myv sme udm vep vro
- Target Language(s): fra ita por spa
- Valid Target Language Labels: >>fra<< >>ita<< >>por<< >>spa<< >>xxx<<
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/fiu-fra%2Bita%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>fra<<`
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>ita<< Hány fényképet csináltatok?",
">>ita<< Gazdag vagyok."
]
model_name = "pytorch-models/opus-mt-tc-bible-big-fiu-fra_ita_por_spa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Quante foto avete scattato?
# Sono ricco.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-fiu-fra_ita_por_spa")
print(pipe(">>ita<< Hány fényképet csináltatok?"))
# expected output: Quante foto avete scattato?
```
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/fiu-fra%2Bita%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.67829 | 48.5 | 10000 | 65198 |
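The scores above can be recomputed with the `sacrebleu` package. A minimal sketch, assuming plain-text hypothesis and reference files (the file names are placeholders; note that `sacrebleu` reports chr-F on a 0-100 scale, so 0.67829 corresponds to roughly 67.8):
```python
# Minimal scoring sketch with sacrebleu; hyp.txt and ref.txt are placeholder file names.
# sacrebleu reports chr-F on a 0-100 scale, while the table above uses 0-1.
import sacrebleu

with open("hyp.txt") as f:
    hyps = [line.strip() for line in f]      # system translations, one per line
with open("ref.txt") as f:
    refs = [[line.strip() for line in f]]    # a single reference stream

print("BLEU :", sacrebleu.corpus_bleu(hyps, refs).score)
print("chr-F:", sacrebleu.corpus_chrf(hyps, refs).score)
```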
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 11:00:58 EEST 2024
* port machine: LM0-400-22516.local
| null |
Non_BioNLP
|
# opus-mt-tc-bible-big-fiu-fra_ita_por_spa
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Finno-Ugrian languages (fiu) to French, Italian, Portuguese and Spanish (fra+ita+por+spa).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-08-17
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): chm est fin fkv hun krl liv mrj myv sme udm vep vro
- Target Language(s): fra ita por spa
- Valid Target Language Labels: >>fra<< >>ita<< >>por<< >>spa<< >>xxx<<
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/fiu-fra%2Bita%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>fra<<`.
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>ita<< Hány fényképet csináltatok?",
">>ita<< Gazdag vagyok."
]
model_name = "pytorch-models/opus-mt-tc-bible-big-fiu-fra_ita_por_spa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Quante foto avete scattato?
# Sono ricco.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-fiu-fra_ita_por_spa")
print(pipe(">>ita<< Hány fényképet csináltatok?"))
# expected output: Quante foto avete scattato?
```
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/fiu-fra%2Bita%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.67829 | 48.5 | 10000 | 65198 |
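The scores above can be recomputed with the `sacrebleu` package. A minimal sketch, assuming plain-text hypothesis and reference files (the file names are placeholders; note that `sacrebleu` reports chr-F on a 0-100 scale, so 0.67829 corresponds to roughly 67.8):
```python
# Minimal scoring sketch with sacrebleu; hyp.txt and ref.txt are placeholder file names.
# sacrebleu reports chr-F on a 0-100 scale, while the table above uses 0-1.
import sacrebleu

with open("hyp.txt") as f:
    hyps = [line.strip() for line in f]      # system translations, one per line
with open("ref.txt") as f:
    refs = [[line.strip() for line in f]]    # a single reference stream

print("BLEU :", sacrebleu.corpus_bleu(hyps, refs).score)
print("chr-F:", sacrebleu.corpus_chrf(hyps, refs).score)
```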
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 11:00:58 EEST 2024
* port machine: LM0-400-22516.local
|
{"language": ["chm", "es", "et", "fi", "fkv", "fr", "hu", "it", "krl", "liv", "mrj", "myv", "pt", "se", "udm", "vep"], "library_name": "transformers", "license": "apache-2.0", "tags": ["translation", "opus-mt-tc-bible"], "model-index": [{"name": "opus-mt-tc-bible-big-fiu-fra_ita_por_spa", "results": [{"task": {"type": "translation", "name": "Translation multi-multi"}, "dataset": {"name": "tatoeba-test-v2020-07-28-v2023-09-26", "type": "tatoeba_mt", "args": "multi-multi"}, "metrics": [{"type": "bleu", "value": 48.5, "name": "BLEU"}, {"type": "chrf", "value": 0.67829, "name": "chr-F"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 45,396 |
gokuls/bert_12_layer_model_v3_48_emotion
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:gokuls/bert_12_layer_model_v3_complete_training_48",
"base_model:finetune:gokuls/bert_12_layer_model_v3_complete_training_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-07T13:00:51Z |
2023-10-07T13:11:15+00:00
| 5 | 0 |
---
base_model: gokuls/bert_12_layer_model_v3_complete_training_48
datasets:
- emotion
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_12_layer_model_v3_48_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.899
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v3_48_emotion
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v3_complete_training_48](https://huggingface.co/gokuls/bert_12_layer_model_v3_complete_training_48) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4023
- Accuracy: 0.899
## Model description
More information needed
## Intended uses & limitations
More information needed
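In the meantime, a minimal inference sketch (hedged: the custom `hybridbert` architecture is assumed to require `trust_remote_code=True`, and the example sentence is illustrative):
```python
# Hedged inference sketch for this emotion classifier.
# trust_remote_code=True is an assumption for the custom "hybridbert" architecture.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gokuls/bert_12_layer_model_v3_48_emotion",
    trust_remote_code=True,
)
print(classifier("i feel like i am still looking at a blank canvas"))
```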
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
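A hedged sketch of how these values map onto `transformers` `TrainingArguments` (the output directory is a placeholder; the Adam betas/epsilon listed above are the library defaults, and multi-GPU launch is left to the runtime):
```python
# Hedged reconstruction of the hyperparameters above as TrainingArguments.
# output_dir is a placeholder; AMP/distributed settings depend on the launch environment.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert_12_layer_model_v3_48_emotion",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=33,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```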
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9112 | 1.0 | 250 | 0.5176 | 0.8495 |
| 0.389 | 2.0 | 500 | 0.3617 | 0.8755 |
| 0.2894 | 3.0 | 750 | 0.3037 | 0.8905 |
| 0.2359 | 4.0 | 1000 | 0.3346 | 0.895 |
| 0.1883 | 5.0 | 1250 | 0.3178 | 0.8955 |
| 0.1638 | 6.0 | 1500 | 0.3597 | 0.897 |
| 0.1217 | 7.0 | 1750 | 0.4075 | 0.8895 |
| 0.0962 | 8.0 | 2000 | 0.4023 | 0.899 |
| 0.0732 | 9.0 | 2250 | 0.4479 | 0.8955 |
| 0.0569 | 10.0 | 2500 | 0.4894 | 0.8985 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v3_48_emotion
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v3_complete_training_48](https://huggingface.co/gokuls/bert_12_layer_model_v3_complete_training_48) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4023
- Accuracy: 0.899
## Model description
More information needed
## Intended uses & limitations
More information needed
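In the meantime, a minimal inference sketch (hedged: the custom `hybridbert` architecture is assumed to require `trust_remote_code=True`, and the example sentence is illustrative):
```python
# Hedged inference sketch for this emotion classifier.
# trust_remote_code=True is an assumption for the custom "hybridbert" architecture.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gokuls/bert_12_layer_model_v3_48_emotion",
    trust_remote_code=True,
)
print(classifier("i feel like i am still looking at a blank canvas"))
```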
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
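A hedged sketch of how these values map onto `transformers` `TrainingArguments` (the output directory is a placeholder; the Adam betas/epsilon listed above are the library defaults, and multi-GPU launch is left to the runtime):
```python
# Hedged reconstruction of the hyperparameters above as TrainingArguments.
# output_dir is a placeholder; AMP/distributed settings depend on the launch environment.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert_12_layer_model_v3_48_emotion",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=33,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```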
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9112 | 1.0 | 250 | 0.5176 | 0.8495 |
| 0.389 | 2.0 | 500 | 0.3617 | 0.8755 |
| 0.2894 | 3.0 | 750 | 0.3037 | 0.8905 |
| 0.2359 | 4.0 | 1000 | 0.3346 | 0.895 |
| 0.1883 | 5.0 | 1250 | 0.3178 | 0.8955 |
| 0.1638 | 6.0 | 1500 | 0.3597 | 0.897 |
| 0.1217 | 7.0 | 1750 | 0.4075 | 0.8895 |
| 0.0962 | 8.0 | 2000 | 0.4023 | 0.899 |
| 0.0732 | 9.0 | 2250 | 0.4479 | 0.8955 |
| 0.0569 | 10.0 | 2500 | 0.4894 | 0.8985 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
{"base_model": "gokuls/bert_12_layer_model_v3_complete_training_48", "datasets": ["emotion"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_12_layer_model_v3_48_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.899, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,397 |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task807
|
Lots-of-LoRAs
| null |
[
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 2025-01-01T14:24:38Z |
2025-01-01T14:24:45+00:00
| 0 | 0 |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task807
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task807_pawsx_chinese_english_translation
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
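Until official instructions are added, here is a hedged loading sketch with 🤗 PEFT (the repo and base-model ids come from this card; the standard PEFT adapter format and the prompt template are assumptions):
```python
# Hedged sketch: load the LoRA adapter on top of the stated base model.
# Assumes the adapter is stored in standard PEFT format under this repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task807"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# task807 is a PAWS-X Chinese-English translation task; this prompt format is a guess.
prompt = "Translate the following sentence from Chinese to English: 你好,世界。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```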
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task807_pawsx_chinese_english_translation sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{brüelgabrielsson2024compressserveservingthousands,
  title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
  author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
  year={2024},
  eprint={2407.00066},
  archivePrefix={arXiv},
  primaryClass={cs.DC},
  url={https://arxiv.org/abs/2407.00066},
}
```
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task807
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task807_pawsx_chinese_english_translation
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
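Until official instructions are added, here is a hedged loading sketch with 🤗 PEFT (the repo and base-model ids come from this card; the standard PEFT adapter format and the prompt template are assumptions):
```python
# Hedged sketch: load the LoRA adapter on top of the stated base model.
# Assumes the adapter is stored in standard PEFT format under this repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task807"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# task807 is a PAWS-X Chinese-English translation task; this prompt format is a guess.
prompt = "Translate the following sentence from Chinese to English: 你好,世界。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```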
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task807_pawsx_chinese_english_translation sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{brüelgabrielsson2024compressserveservingthousands,
  title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
  author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
  year={2024},
  eprint={2407.00066},
  archivePrefix={arXiv},
  primaryClass={cs.DC},
  url={https://arxiv.org/abs/2407.00066},
}
```
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "language": "en", "library_name": "pytorch", "license": "mit"}
|
task
|
[
"TRANSLATION"
] | 45,398 |
Helsinki-NLP/opus-mt-tc-bible-big-sem-en
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc-bible",
"am",
"ar",
"arc",
"en",
"hbo",
"he",
"jpa",
"mt",
"oar",
"phn",
"sgw",
"syc",
"syr",
"ti",
"tig",
"tmr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-08T14:06:16Z |
2024-10-08T19:15:46+00:00
| 21 | 1 |
---
language:
- am
- ar
- arc
- en
- hbo
- he
- jpa
- mt
- oar
- phn
- sgw
- syc
- syr
- ti
- tig
- tmr
library_name: transformers
license: apache-2.0
tags:
- translation
- opus-mt-tc-bible
model-index:
- name: opus-mt-tc-bible-big-sem-en
results:
- task:
type: translation
name: Translation multi-eng
dataset:
name: tatoeba-test-v2020-07-28-v2023-09-26
type: tatoeba_mt
args: multi-eng
metrics:
- type: bleu
value: 48.5
name: BLEU
- type: chrf
value: 0.64511
name: chr-F
---
# opus-mt-tc-bible-big-sem-en
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Semitic languages (sem) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-08-17
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): acm afb amh apc ara arc arq arz hbo heb jpa mlt oar phn sgw syc syr tig tir tmr
- Target Language(s): eng
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/sem-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"تعال لتأخذنا.",
"ربما سيغير رأيه."
]
model_name = "pytorch-models/opus-mt-tc-bible-big-sem-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Come and get us.
# Maybe he'll change his mind.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-sem-en")
print(pipe("تعال لتأخذنا."))
# expected output: Come and get us.
```
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/sem-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| multi-eng | tatoeba-test-v2020-07-28-v2023-09-26 | 0.64511 | 48.5 | 10000 | 71553 |
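These scores can be recomputed with the `sacrebleu` package. A minimal sketch using the example pair from this card (note that `sacrebleu` reports chr-F on a 0-100 scale, so 0.64511 corresponds to roughly 64.5):
```python
# Minimal scoring sketch with sacrebleu, using this card's example sentence.
# sacrebleu reports chr-F on a 0-100 scale, while the table above uses 0-1.
import sacrebleu

hyps = ["Come and get us."]        # system output
refs = [["Come and get us."]]      # one reference stream
print("BLEU :", sacrebleu.corpus_bleu(hyps, refs).score)
print("chr-F:", sacrebleu.corpus_chrf(hyps, refs).score)
```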
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 22:15:32 EEST 2024
* port machine: LM0-400-22516.local
| null |
Non_BioNLP
|
# opus-mt-tc-bible-big-sem-en
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Semitic languages (sem) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-08-17
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): acm afb amh apc ara arc arq arz hbo heb jpa mlt oar phn sgw syc syr tig tir tmr
- Target Language(s): eng
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/sem-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"تعال لتأخذنا.",
"ربما سيغير رأيه."
]
model_name = "pytorch-models/opus-mt-tc-bible-big-sem-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Come and get us.
# Maybe he'll change his mind.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-sem-en")
print(pipe("تعال لتأخذنا."))
# expected output: Come and get us.
```
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/sem-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| multi-eng | tatoeba-test-v2020-07-28-v2023-09-26 | 0.64511 | 48.5 | 10000 | 71553 |
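These scores can be recomputed with the `sacrebleu` package. A minimal sketch using the example pair from this card (note that `sacrebleu` reports chr-F on a 0-100 scale, so 0.64511 corresponds to roughly 64.5):
```python
# Minimal scoring sketch with sacrebleu, using this card's example sentence.
# sacrebleu reports chr-F on a 0-100 scale, while the table above uses 0-1.
import sacrebleu

hyps = ["Come and get us."]        # system output
refs = [["Come and get us."]]      # one reference stream
print("BLEU :", sacrebleu.corpus_bleu(hyps, refs).score)
print("chr-F:", sacrebleu.corpus_chrf(hyps, refs).score)
```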
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 22:15:32 EEST 2024
* port machine: LM0-400-22516.local
|
{"language": ["am", "ar", "arc", "en", "hbo", "he", "jpa", "mt", "oar", "phn", "sgw", "syc", "syr", "ti", "tig", "tmr"], "library_name": "transformers", "license": "apache-2.0", "tags": ["translation", "opus-mt-tc-bible"], "model-index": [{"name": "opus-mt-tc-bible-big-sem-en", "results": [{"task": {"type": "translation", "name": "Translation multi-eng"}, "dataset": {"name": "tatoeba-test-v2020-07-28-v2023-09-26", "type": "tatoeba_mt", "args": "multi-eng"}, "metrics": [{"type": "bleu", "value": 48.5, "name": "BLEU"}, {"type": "chrf", "value": 0.64511, "name": "chr-F"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 45,399 |
AlekseyCalvin/HistoricColorSoonr_v2_FluxSchnell_Diffusers
|
AlekseyCalvin
|
text-to-image
|
[
"diffusers",
"safetensors",
"Flux",
"FluxPipeline",
"text-to-image",
"flux schnell",
"image-generation",
"flux-diffusers",
"photo",
"realism",
"en",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:finetune:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] | 2024-09-24T07:16:12Z |
2025-04-05T00:50:10+00:00
| 10 | 1 |
---
base_model: black-forest-labs/FLUX.1-schnell
language:
- en
library_name: diffusers
license: apache-2.0
pipeline_tag: text-to-image
tags:
- Flux
- FluxPipeline
- text-to-image
- flux schnell
- image-generation
- flux-diffusers
- diffusers
- photo
- realism
emoji: 🔜
instance_prompt: HST autochrome photo
widget:
- text: HST style autochrome photo of a young woman playing poker against a blue-feathered
dinosaur sitting across from her, moderately wrinkled blemished lined skin texture
with pores
output:
url: Hstv2r.png
- text: (w/ our Mayakovsky LoRA) HST photo of Mayakovsky sleeping, seeing a dream
wherein rice shoots bud on lush green fields, text \MAYAKOVSKY SAW A DREAM\
output:
url: Hstv2Mayak.webp
- text: HST style photo of a young woman playing a Telecaster electric guitar and
singing the blues
output:
url: Hstv2guitar.webp
- text: hst style photo of an aging dark-haired woman playing guitar in an old Soviet
apartment
output:
url: Hstv2r3.png
- text: hst style photo of a young dark-haired woman embracing a red-feathered dinosaur
output:
url: HistoricI.png
- text: hst style autochrome vintage color photo of gigantic Rosa Luxemburg walking
over iced-over planet Earth
output:
url: Hst2legs.png
---
# **Historic Color Soon® V.2**
The second **FLUX**-based & open-licensed full-model checkpoint in our **HSToric Color** series.<br>
Trained on HD scans of early color photos (circa *1900s-1910s*) by **Sergey Prokudin-Gorsky**, who traveled and photographed widely in those years whilst perfecting implementations of a pioneering 3-color-composite photography technique.<br>
**This model is aimed at being useful for**:<br>
- Quality generation at a low step-count (2 to 8, for most scenarios), with 4-step inference at around 768x768 routinely producing photorealistic outputs at a quality plausibly preferable to that of **Flux v.1 Dev**. <br>
- Producing realistic images reminiscent of color film analog photography, exhibiting parallels to a broad spectrum of iconic instrumentalities and visual paradigms, from Autochrome-to-Kodachrome-to-Fujifilm-and-beyond. <br>
- Producing visuals with a vaguely "historical" or "lived-in" aesthetic character, striking chromaticity and luminosity dynamics, as well as textural/anatomical/skin details more reliably lifelike than other models at a comparable step-count/resource expenditure. <br>
- Extending realism options under an irrevocable commercial license. <br>
<Gallery />
## Testing Space:
You may try out the **V2** checkpoint at [one of our LoRA gallery spaces](https://huggingface.co/spaces/AlekseyCalvin/soonfactory4), along with many of our trained LoRAs!<br>
## Bit of Model History + TOOL SHARES:
[Historic Color Soon® V.1](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_Schnell) was fine-tuned by us from [HumbleMikey](https://civitai.com/user/humblemikey)'s [Pixelwave Schnell V.1](https://huggingface.co/mikeyandfriends/PixelWave_FLUX.1-schnell_01/) model, which, in turn, is a generalized base checkpoint trained from [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) by **Black Forest Labs**, consolidating (in comparison w/vanilla-base-**Schnell**) further inference speed improvements (more reliable results at 2-3 steps), whilst raising the overall quality and consistency standards across most aesthetic categories and at every step.<br>
This version, **Historic Color Soon® V.2**, was created by merging into **V.1** a handful of LoRAs trained by us on the (fairly narrow) available range of realistic Flux checkpoint models that are exclusively **Schnell**-derived, so as to stay within the fairly open **Apache 2.0** licensing domain (which was among our reasons to do all this in the first place).<br>
**Historic Color Soon® V.1** is available [here](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_Schnell) in both **Safetensors** (fp8) & **Diffusers** formats.<br>
To fine-tune **Flux**, try the dedicated [Flux Training Notebook by Ostris](https://github.com/ostris/ai-toolkit/blob/main/notebooks/FLUX_1_schnell_LoRA_Training.ipynb).<br>
**Ostris**' training adapter for **Schnell** is found here: [ostris/FLUX.1-schnell-training-adapter](https://huggingface.co/ostris/FLUX.1-schnell-training-adapter).<br>
To merge **Flux** models and LoRAs, use the *'flux_merge_lora.py'* script from the */networks* subfolder of the sd3 branch of [Kohya-ss's sd-scripts git](https://github.com/kohya-ss/sd-scripts/tree/sd3).<br>
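For orientation, a hedged invocation sketch (the flag names below are assumptions, not verified against the script; check `python networks/flux_merge_lora.py --help` on the sd3 branch for the actual interface):
```shell
# Hedged sketch only: these flag names are assumptions, not verified against sd-scripts.
python networks/flux_merge_lora.py \
  --flux_model historic_color_soonr_v1.safetensors \
  --models my_flux_lora.safetensors \
  --ratios 0.7 \
  --save_to historic_color_merged.safetensors
```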
## Bit of Actual History:
**Prokudin-Gorsky**'s color photography technique would involve three photo-exposures, either simultaneous or sequential, using specialized color-spectrum filters (basically R.B.G.: red, blue, and green), rendering a subject/shot onto glass plates covered with a light-sensitive emulsion.<br>
The photographer's focus on refining the developer and filter quality, in tandem with his incessant and wide-ranging experimentation, and his artful optimizations of glass plates (generally unwieldy, especially for color, and by the 1910s already becoming outmoded for B&W on-location shoots, though otherwise extra reliable) ultimately led him to produce a color photography oeuvre of much greater fidelity and vividness than achieved by most of his contemporaries.<br>
At the same time, the peculiarities of the photographer's method, coupled with his exceptionally hands-on execution thereof, would manifest in a range of idiosyncratic color, light, and motion artifacts common across the resulting prints.<br>
Seldom marring the image as a whole, and less grave than the weaknesses of some then-emerging autochrome techniques, the warm color hazes & flares framing many of **Prokudin-Gorsky**'s prints constitute a kind of ephemeral signature.<br>
Alongside some of the more subtle chromatic, textural, and (in some measure) figural characteristics of his work, these auras have reliably imprinted themselves into this and other LoRAs and Models within our gallery of fine-tunes for Flux and StableDiffusion3.5, fine-tuned exclusively on non-synthetic (human-made and pre-curated) open-access data from iconic, influential, and/or otherwise compelling historical sources.<br>
We urge you to explore the works of **Prokudin-Gorsky** for yourself, at the wonderfully organized online [archive at this link](https://prokudin-gorsky.org/), featuring many hundreds of high-quality downloadable scans of composite color photo prints from the photographer's original glass plate negatives, available at this site alongside relatively recent restorations of a substantial portion of the images. The original glass-plate negatives are currently held and administered by the Library of Congress in Washington, DC, USA. <br>
## Diffusers:
To use `Historic Color SOON® V.2` with the 🧨 diffusers python library, first install or upgrade diffusers:<br>
```shell
pip install -U diffusers
```
Then you can use `FluxPipeline` to run the model:
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("AlekseyCalvin/HistoricColorSoonr_v2_FluxSchnell_Diffusers", torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe.enable_model_cpu_offload() #save some VRAM by offloading the model to CPU. Remove this if you have enough GPU power
prompt = "HST style autochrome film photograph portrait of 1910 woman playing poker against a purple feathered dinosaur, the green-eyed woman has moderately blemished skin with visible lines and pores, she smiles, film grain, Kodachrome"
image = pipe(
prompt,
guidance_scale=1.2,
num_inference_steps=4,
max_sequence_length=256,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("hstcolor1.png")
```
To learn more check out the [diffusers](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) documentation.
<br>
Lastly, if you're into literature broadly and old modernist poetry specifically, check out our verse translations at [SILVER AGE POETS](https://www.SilverAgePoets.com/the-poets-and-their-stories)!
| null |
Non_BioNLP
|
# **Historic Color Soon® V.2**
The second **FLUX**-based & open-licensed full-model checkpoint in our **HSToric Color** series.<br>
Trained on HD scans of early color photos (circa *1900s-1910s*) by **Sergey Prokudin-Gorsky**, who traveled and photographed widely in those years whilst perfecting implementations of a pioneering 3-color-composite photography technique.<br>
**This model is aimed at being useful for**:<br>
- Quality generation at a low step-count (2 to 8, for most scenarios), with 4-step inference at around 768x768 routinely producing photorealistic outputs at a quality plausibly preferable to that of **Flux v.1 Dev**. <br>
- Producing realistic images reminiscent of color film analog photography, exhibiting parallels to a broad spectrum of iconic instrumentalities and visual paradigms, from Autochrome-to-Kodachrome-to-Fujifilm-and-beyond. <br>
- Producing visuals with a vaguely "historical" or "lived-in" aesthetic character, striking chromaticity and luminosity dynamics, as well as textural/anatomical/skin details more reliably lifelike than other models at a comparable step-count/resource expenditure. <br>
- Extending realism options under an irrevocable commercial license. <br>
<Gallery />
## Testing Space:
You may try out the **V2** checkpoint at [one of our LoRA gallery spaces](https://huggingface.co/spaces/AlekseyCalvin/soonfactory4), along with many of our trained LoRAs!<br>
## Bit of Model History + TOOL SHARES:
[Historic Color Soon® V.1](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_Schnell) was fine-tuned by us from [HumbleMikey](https://civitai.com/user/humblemikey)'s [Pixelwave Schnell V.1](https://huggingface.co/mikeyandfriends/PixelWave_FLUX.1-schnell_01/) model, which, in turn, is a generalized base checkpoint trained from [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) by **Black Forest Labs**, consolidating (in comparison w/vanilla-base-**Schnell**) further inference speed improvements (more reliable results at 2-3 steps), whilst raising the overall quality and consistency standards across most aesthetic categories and at every step.<br>
This version, **Historic Color Soon® V.2**, was created by merging into **V.1** a handful of LoRAs trained by us on the (fairly narrow) available range of realistic Flux checkpoint models that are exclusively **Schnell**-derived, so as to stay within the fairly open **Apache 2.0** licensing domain (which was among our reasons for doing all this in the first place).<br>
**Historic Color Soon® V.1** is available [here](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_Schnell) in both **Safetensors** (fp8) & **Diffusers** formats.<br>
To fine-tune **Flux**, try the dedicated [Flux Training Notebook by Ostris](https://github.com/ostris/ai-toolkit/blob/main/notebooks/FLUX_1_schnell_LoRA_Training.ipynb).<br>
**Ostris**' training adapter for **Schnell** is found here: [ostris/FLUX.1-schnell-training-adapter](https://huggingface.co/ostris/FLUX.1-schnell-training-adapter).<br>
To merge **Flux** models and LoRAs, use the *flux_merge_lora.py* script from the */networks* subfolder of the *sd3* branch of [Kohya-ss's sd-scripts repo](https://github.com/kohya-ss/sd-scripts/tree/sd3).<br>
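For orientation, a minimal sketch of a merge invocation (all file names here are placeholders, and the flag names are assumptions following the script's *merge_lora*-style conventions; check `python networks/flux_merge_lora.py --help` for the authoritative list):
```bash
# Hypothetical example: merge two LoRAs into a Flux checkpoint at fp16 precision.
# File names are placeholders; verify flag names via --help before running.
python networks/flux_merge_lora.py \
  --flux_model historic_color_v1.safetensors \
  --save_to historic_color_v2_merged.safetensors \
  --models lora_a.safetensors lora_b.safetensors \
  --ratios 0.6 0.4 \
  --save_precision fp16
```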
## Bit of Actual History:
**Prokudin-Gorsky**'s color photography technique would involve three photo-exposures, either simultaneous or sequential, using specialized color-spectrum filters (basically R.G.B.: red, green, and blue), rendering a subject/shot onto glass plates covered with a light-sensitive emulsion.<br>
The photographer's focus on refining the developer and filter quality, in tandem with his incessant and wide-ranging experimentation, and his artful optimizations of glass plates (generally unwieldy, especially for color, and by the 1910s already becoming outmoded for B&W on-location shoots, though otherwise exceptionally reliable) ultimately led him to produce a color photography oeuvre of much greater fidelity and vividness than that achieved by most of his contemporaries.<br>
At the same time, the peculiarities of the photographer's method, coupled with his exceptionally hands-on execution thereof, would manifest in a range of idiosyncratic color, light, and motion artifacts common across the resulting prints.<br>
Seldom marring the image as a whole, and less grave than the weaknesses of some contemporaneously emerging autochrome techniques, the warm color hazes & flares framing many of **Prokudin-Gorsky**'s prints constitute a kind of ephemeral signature.<br>
Alongside some of the more subtle chromatic, textural, and (in some measure) figural characteristics of his work, these auras have reliably imprinted themselves into this and other LoRAs and models within our gallery of fine-tunes for Flux and StableDiffusion3.5, trained exclusively on non-synthetic (human-made and pre-curated) open-access data from iconic, influential, and/or otherwise compelling historical sources.<br>
We urge you to explore the works of **Prokudin-Gorsky** for yourself at the wonderfully organized online [archive at this link](https://prokudin-gorsky.org/), featuring many hundreds of high-quality downloadable scans of composite color photo prints from the photographer's original glass plate negatives, alongside relatively recent restorations of a substantial portion of the images. The original glass-plate negatives are currently held and administered by the Library of Congress in Washington, DC, USA. <br>
## Diffusers:
To use `Historic Color SOON® V.2` with the 🧨 diffusers python library, first install or upgrade diffusers:<br>
```shell
pip install -U diffusers
```
Then you can use `FluxPipeline` to run the model:
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("AlekseyCalvin/HistoricColorSoonr_v2_FluxSchnell_Diffusers", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # saves VRAM by offloading submodules to CPU as needed; use pipe.to("cuda") instead if you have enough GPU memory
prompt = "HST style autochrome film photograph portrait of 1910 woman playing poker against a purple feathered dinosaur, the green-eyed woman has moderately blemished skin with visible lines and pores, she smiles, film grain, Kodachrome"
image = pipe(
prompt,
guidance_scale=1.2,
num_inference_steps=4,
max_sequence_length=256,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("hstcolor1.png")
```
To learn more, check out the [diffusers](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) documentation.
<br>
Lastly, if you're into literature broadly and old modernist poetry specifically, check out our verse translations at [SILVER AGE POETS](https://www.SilverAgePoets.com/the-poets-and-their-stories)!
|
{"base_model": "black-forest-labs/FLUX.1-schnell", "language": ["en"], "library_name": "diffusers", "license": "apache-2.0", "pipeline_tag": "text-to-image", "tags": ["Flux", "FluxPipeline", "text-to-image", "flux schnell", "image-generation", "flux-diffusers", "diffusers", "photo", "realism"], "emoji": "🔜", "instance_prompt": "HST autochrome photo", "widget": [{"text": "HST style autochrome photo of a young woman playing poker against a blue-feathered dinosaur sitting across from her, moderately wrinkled blemished lined skin texture with pores", "output": {"url": "Hstv2r.png"}}, {"text": "(w/ our Mayakovsky LoRA) HST photo of Mayakovsky sleeping, seeing a dream wherein rice shoots bud on lush green fields, text \\MAYAKOVSKY SAW A DREAM\\", "output": {"url": "Hstv2Mayak.webp"}}, {"text": "HST style photo of a young woman playing a Telecaster electric guitar and singing the blues", "output": {"url": "Hstv2guitar.webp"}}, {"text": "hst style photo of an aging dark-haired woman playing guitar in an old Soviet apartment", "output": {"url": "Hstv2r3.png"}}, {"text": "hst style photo of a young dark-haired woman embracing a red-feathered dinosaur", "output": {"url": "HistoricI.png"}}, {"text": "hst style autochrome vintage color photo of gigantic Rosa Luxemburg walking over iced-over planet Earth", "output": {"url": "Hst2legs.png"}}]}
|
task
|
[
"TRANSLATION"
] | 45,400 |
henryscheible/rte_roberta-base_144_v2
|
henryscheible
| null |
[
"pytorch",
"generated_from_trainer",
"en",
"dataset:glue",
"license:mit",
"model-index",
"region:us"
] | 2023-01-18T20:37:28Z |
2023-01-18T20:50:18+00:00
| 0 | 0 |
---
datasets:
- glue
language:
- en
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: rte_roberta-base_144_v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- type: accuracy
value: 0.7256317689530686
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rte_roberta-base_144_v2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6194
- Accuracy: 0.7256
## Model description
More information needed
## Intended uses & limitations
More information needed
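For reference, a minimal inference sketch (not part of the original card; the label names depend on the saved config and may appear as `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="henryscheible/rte_roberta-base_144_v2")

# RTE is a sentence-pair (premise/hypothesis) task, so pass both texts together.
print(classifier({"text": "A man is playing a guitar.", "text_pair": "A person is making music."}))
```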
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rte_roberta-base_144_v2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6194
- Accuracy: 0.7256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
{"datasets": ["glue"], "language": ["en"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "rte_roberta-base_144_v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE RTE", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.7256317689530686, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,401 |
Triangle104/LatexMind-2B-Codec-Q4_K_S-GGUF
|
Triangle104
|
image-text-to-text
|
[
"transformers",
"gguf",
"qwen",
"latex",
"vLM",
"Vision",
"Latex",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:prithivMLmods/LatexMind-2B-Codec",
"base_model:quantized:prithivMLmods/LatexMind-2B-Codec",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2025-02-02T01:40:08Z |
2025-02-02T01:41:41+00:00
| 5 | 0 |
---
base_model: prithivMLmods/LatexMind-2B-Codec
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- qwen
- latex
- vLM
- Vision
- Latex
- llama-cpp
- gguf-my-repo
---
# Triangle104/LatexMind-2B-Codec-Q4_K_S-GGUF
This model was converted to GGUF format from [`prithivMLmods/LatexMind-2B-Codec`](https://huggingface.co/prithivMLmods/LatexMind-2B-Codec) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/LatexMind-2B-Codec) for more details on the model.
---
The LatexMind-2B-Codec model is a fine-tuned version of Qwen2-VL-2B-Instruct, optimized for Optical Character Recognition (OCR), image-to-text conversion, and mathematical expression extraction with LaTeX formatting. It integrates a conversational approach with visual and textual understanding to handle multi-modal tasks effectively.

**Key enhancements:**

- **SoTA understanding of images with various resolutions & aspect ratios:** LatexMind-2B-Codec achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
- **Advanced LaTeX extraction:** The model specializes in extracting structured mathematical expressions from images and documents, converting them into LaTeX format for precise rendering and further computation.
- **Understanding long-duration videos (20min+):** LatexMind-2B-Codec can process videos over 20 minutes long, enabling high-quality video-based question answering, mathematical solution explanation, and educational content creation.
- **Agent capabilities for automated operations:** With complex reasoning and decision-making abilities, the model can be integrated with mobile devices, robots, and assistive technologies to automate tasks based on visual and textual inputs.
- **Multilingual support:** To serve global users, in addition to English and Chinese, the model supports text recognition inside images across multiple languages, including European languages, Japanese, Korean, Arabic, Vietnamese, etc.

The model is particularly effective at retrieving mathematical notations and equations from scanned documents, whiteboard images, and handwritten notes, ensuring accurate conversion to LaTeX code for further academic and computational applications.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/LatexMind-2B-Codec-Q4_K_S-GGUF --hf-file latexmind-2b-codec-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/LatexMind-2B-Codec-Q4_K_S-GGUF --hf-file latexmind-2b-codec-q4_k_s.gguf -c 2048
```
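Once the server is running, it exposes an OpenAI-compatible endpoint you can query; a minimal sketch (port 8080 is the llama-server default, and the prompt is just an illustration):
```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write the quadratic formula in LaTeX."}]}'
```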
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/LatexMind-2B-Codec-Q4_K_S-GGUF --hf-file latexmind-2b-codec-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/LatexMind-2B-Codec-Q4_K_S-GGUF --hf-file latexmind-2b-codec-q4_k_s.gguf -c 2048
```
| null |
Non_BioNLP
|
# Triangle104/LatexMind-2B-Codec-Q4_K_S-GGUF
This model was converted to GGUF format from [`prithivMLmods/LatexMind-2B-Codec`](https://huggingface.co/prithivMLmods/LatexMind-2B-Codec) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/LatexMind-2B-Codec) for more details on the model.
---
The LatexMind-2B-Codec model is a fine-tuned version of Qwen2-VL-2B-Instruct, optimized for Optical Character Recognition (OCR), image-to-text conversion, and mathematical expression extraction with LaTeX formatting. It integrates a conversational approach with visual and textual understanding to handle multi-modal tasks effectively.

**Key enhancements:**

- **SoTA understanding of images with various resolutions & aspect ratios:** LatexMind-2B-Codec achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
- **Advanced LaTeX extraction:** The model specializes in extracting structured mathematical expressions from images and documents, converting them into LaTeX format for precise rendering and further computation.
- **Understanding long-duration videos (20min+):** LatexMind-2B-Codec can process videos over 20 minutes long, enabling high-quality video-based question answering, mathematical solution explanation, and educational content creation.
- **Agent capabilities for automated operations:** With complex reasoning and decision-making abilities, the model can be integrated with mobile devices, robots, and assistive technologies to automate tasks based on visual and textual inputs.
- **Multilingual support:** To serve global users, in addition to English and Chinese, the model supports text recognition inside images across multiple languages, including European languages, Japanese, Korean, Arabic, Vietnamese, etc.

The model is particularly effective at retrieving mathematical notations and equations from scanned documents, whiteboard images, and handwritten notes, ensuring accurate conversion to LaTeX code for further academic and computational applications.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/LatexMind-2B-Codec-Q4_K_S-GGUF --hf-file latexmind-2b-codec-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/LatexMind-2B-Codec-Q4_K_S-GGUF --hf-file latexmind-2b-codec-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/LatexMind-2B-Codec-Q4_K_S-GGUF --hf-file latexmind-2b-codec-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/LatexMind-2B-Codec-Q4_K_S-GGUF --hf-file latexmind-2b-codec-q4_k_s.gguf -c 2048
```
|
{"base_model": "prithivMLmods/LatexMind-2B-Codec", "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "image-text-to-text", "tags": ["qwen", "latex", "vLM", "Vision", "Latex", "llama-cpp", "gguf-my-repo"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,403 |
vasista22/whisper-telugu-small
|
vasista22
|
automatic-speech-recognition
|
[
"transformers",
"pytorch",
"jax",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"te",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2022-12-20T19:09:10Z |
2023-04-24T20:31:07+00:00
| 54 | 1 |
---
language:
- te
license: apache-2.0
metrics:
- wer
tags:
- whisper-event
model-index:
- name: Whisper Telugu Small - Vasista Sai Lodagala
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: te_in
split: test
metrics:
- type: wer
value: 11.59
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Telugu Small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on Telugu data drawn from multiple publicly available ASR corpora.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
To evaluate this model on an entire dataset, the evaluation scripts available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides scripts for faster inference using whisper-jax.
To transcribe a single audio file with this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-telugu-small", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="te", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> from whisper_jax import FlaxWhisperPipline  # "Pipline" is the library's actual class-name spelling
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-telugu-small", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="te", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
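For a rough, self-contained alternative to the repository's evaluation scripts, a minimal WER-evaluation sketch on the google/fleurs test split (an illustrative approximation, not the official evaluation code):
```python
>>> import torch
>>> from datasets import load_dataset
>>> from evaluate import load
>>> from transformers import pipeline

>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> asr = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-telugu-small", chunk_length_s=30, device=device)
>>> asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(language="te", task="transcribe")

>>> ds = load_dataset("google/fleurs", "te_in", split="test")
>>> predictions = [asr(sample["audio"])["text"] for sample in ds]
>>> references = [sample["transcription"] for sample in ds]
>>> wer = load("wer")
>>> print("WER (%):", 100 * wer.compute(predictions=predictions, references=references))
```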
## Training and evaluation data
Training Data:
- [CSTD IIIT-H ASR Corpus](https://asr.iiit.ac.in/cstd_ulca/)
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#telugu-labelled-total-duration-is-102593-hours)
- [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
- [Microsoft Speech Corpus (Indian Languages)](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
- Babel ASR Corpus
Evaluation Data:
- [Microsoft Speech Corpus (Indian Languages) Test Set](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
- [OpenSLR](https://www.openslr.org/66/)
- Babel Test Set
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.7e-05
- train_batch_size: 48
- eval_batch_size: 32
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15000
- training_steps: 26856 (terminated upon convergence. Initially set to 89520 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).
The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Telugu Small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on Telugu data drawn from multiple publicly available ASR corpora.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
To evaluate this model on an entire dataset, the evaluation scripts available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides scripts for faster inference using whisper-jax.
To transcribe a single audio file with this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-telugu-small", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="te", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> from whisper_jax import FlaxWhisperPipline  # "Pipline" is the library's actual class-name spelling
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-telugu-small", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="te", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
## Training and evaluation data
Training Data:
- [CSTD IIIT-H ASR Corpus](https://asr.iiit.ac.in/cstd_ulca/)
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#telugu-labelled-total-duration-is-102593-hours)
- [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
- [Microsoft Speech Corpus (Indian Languages)](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
- Babel ASR Corpus
Evaluation Data:
- [Microsoft Speech Corpus (Indian Languages) Test Set](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
- [OpenSLR](https://www.openslr.org/66/)
- Babel Test Set
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.7e-05
- train_batch_size: 48
- eval_batch_size: 32
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15000
- training_steps: 26856 (terminated upon convergence. Initially set to 89520 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).
The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
|
{"language": ["te"], "license": "apache-2.0", "metrics": ["wer"], "tags": ["whisper-event"], "model-index": [{"name": "Whisper Telugu Small - Vasista Sai Lodagala", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "google/fleurs", "type": "google/fleurs", "config": "te_in", "split": "test"}, "metrics": [{"type": "wer", "value": 11.59, "name": "WER"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 45,404 |
Dhahlan2000/Translation-GPT-v4
|
Dhahlan2000
|
text2text-generation
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:Dhahlan2000/Translation-GPT-v3",
"base_model:finetune:Dhahlan2000/Translation-GPT-v3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-21T11:29:05Z |
2024-05-21T11:30:26+00:00
| 4 | 0 |
---
base_model: Dhahlan2000/Translation-GPT-v3
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Translation-GPT-v4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Translation-GPT-v4
This model is a fine-tuned version of [Dhahlan2000/Translation-GPT-v3](https://huggingface.co/Dhahlan2000/Translation-GPT-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8506
- Validation Loss: 2.2484
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
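Since the card leaves usage undocumented, here is a minimal, hedged inference sketch for this TensorFlow seq2seq checkpoint (the input format and translation direction are assumptions; the card does not specify them):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Dhahlan2000/Translation-GPT-v4")
model = TFAutoModelForSeq2SeqLM.from_pretrained("Dhahlan2000/Translation-GPT-v4")

# The expected prompt format is undocumented; plain source text is assumed here.
inputs = tokenizer("Hello, how are you?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```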
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.0246 | 2.3872 | 0 |
| 2.8506 | 2.2484 | 1 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.17.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Translation-GPT-v4
This model is a fine-tuned version of [Dhahlan2000/Translation-GPT-v3](https://huggingface.co/Dhahlan2000/Translation-GPT-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8506
- Validation Loss: 2.2484
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.0246 | 2.3872 | 0 |
| 2.8506 | 2.2484 | 1 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.17.0
- Tokenizers 0.19.1
|
{"base_model": "Dhahlan2000/Translation-GPT-v3", "license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Translation-GPT-v4", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,405 |
kathleenge/tsdae-bert-base-uncased
|
kathleenge
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:100000",
"loss:DenoisingAutoEncoderLoss",
"arxiv:1908.10084",
"arxiv:2104.06979",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-19T03:10:02Z |
2024-07-19T03:10:51+00:00
| 7 | 0 |
---
base_model: google-bert/bert-base-uncased
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:100000
- loss:DenoisingAutoEncoderLoss
widget:
- source_sentence: 1109/icnsurv
sentences:
- 1109/icnsurv
- A cost function is needed to assign a performance metric value to a particular
test run
- Aircraft OperationsFuture aircraft will sense, control, communicate, and navigate
with increasing levels of autonomy, enabling new concepts in air traffic management
- source_sentence: Table 1 of and to well as the median taxi from STBO KDFW
sentences:
- Table 1 Metrics of accuracy, median and MAD of residuals as compared to STBO predictions,
as well as the median taxi time from STBO for KDFW and KCLT airports
- ', IEEE, 2005, pp'
- 'RESULTS: EFFICIENCY ANALYSIS'
- source_sentence: gate time to known
sentences:
- 3FIVE INPUT VARIABLESParameterDescriptionHead windHead WindGust windGust WindCeiling_ftForecast
CeilingVis_ftForecast VisibilityAct_Land_Wgt Actual Landing Weightfive parameters
listed in
- Instead, gate departure time was assumed to be known
- The proof is very similar to that presented for the NP-completeness of ASP, and
is based on reduction from PLANAR-P3( 6), hence we simply provide the main idea
of the proof
- source_sentence: ', Hough" Pattern Recognition, Vol'
sentences:
- 9 Station Keeping scores
- "\t\tAGARD CD-410"
- ', "Generalizing the Hough Transform to Detect Arbitrary Shapes," Pattern Recognition,
Vol'
- source_sentence: Airlines often ferry from locations fuel prices
sentences:
- Scheduler Inputs and Order of ConsiderationThe surface model provides EOBT, UOBT,
UTOT and other detailed flight-specific modeled input
- "\t\t\tKeithWichman"
- Airlines often ferry fuel from locations where fuel prices are cheapest
---
# SentenceTransformer based on google-bert/bert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("kathleenge/tsdae-bert-base-uncased")
# Run inference
sentences = [
'Airlines often ferry from locations fuel prices',
'Airlines often ferry fuel from locations where fuel prices are cheapest',
'\t\t\tKeithWichman',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 100,000 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 10.95 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 23.39 tokens</li><li>max: 239 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------|
| <code>selected and reviewed for value current on metroplex</code> | <code>The literature was selected and reviewed for its value to the current research on metroplex operations</code> |
| <code>and</code> | <code>, and Dulchinos, V</code> |
| <code>,</code> | <code>, Atkins, S</code> |
* Loss: [<code>DenoisingAutoEncoderLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#denoisingautoencoderloss)
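For context, the standard sentence-transformers TSDAE recipe looks roughly like the sketch below (this mirrors the library's documented TSDAE example with the CLS pooling used by this model; it is an illustration rather than the exact script for this run, whose hyperparameters are listed in the next section):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, datasets, losses

model_name = "google-bert/bert-base-uncased"
word_embedding_model = models.Transformer(model_name)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), "cls")
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# DenoisingAutoEncoderDataset deletes ~60% of tokens by default to create (noisy, clean) pairs.
train_sentences = ["sentence one ...", "sentence two ..."]  # placeholder corpus
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)
train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)

# The decoder is tied to the encoder and discarded after training.
train_loss = losses.DenoisingAutoEncoderLoss(model, decoder_name_or_path=model_name, tie_encoder_decoder=True)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, show_progress_bar=True)
```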
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:-----:|:-----:|:-------------:|
| 0.04 | 500 | 7.3777 |
| 0.08 | 1000 | 6.9771 |
| 0.12 | 1500 | 6.8481 |
| 0.16 | 2000 | 6.7737 |
| 0.2 | 2500 | 6.6935 |
| 0.24 | 3000 | 6.6264 |
| 0.28 | 3500 | 6.5918 |
| 0.32 | 4000 | 6.5504 |
| 0.36 | 4500 | 6.4805 |
| 0.4 | 5000 | 6.4539 |
| 0.44 | 5500 | 6.4242 |
| 0.48 | 6000 | 6.4017 |
| 0.52 | 6500 | 6.3808 |
| 0.56 | 7000 | 6.3595 |
| 0.6 | 7500 | 6.3174 |
| 0.64 | 8000 | 6.2911 |
| 0.68 | 8500 | 6.2917 |
| 0.72 | 9000 | 6.2555 |
| 0.76 | 9500 | 6.2314 |
| 0.8 | 10000 | 6.2223 |
| 0.84 | 10500 | 6.1852 |
| 0.88 | 11000 | 6.2067 |
| 0.92 | 11500 | 6.1562 |
| 0.96 | 12000 | 6.1563 |
| 1.0 | 12500 | 6.092 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### DenoisingAutoEncoderLoss
```bibtex
@inproceedings{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
pages = "671--688",
url = "https://arxiv.org/abs/2104.06979",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on google-bert/bert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("kathleenge/tsdae-bert-base-uncased")
# Run inference
sentences = [
'Airlines often ferry from locations fuel prices',
'Airlines often ferry fuel from locations where fuel prices are cheapest',
'\t\t\tKeithWichman',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 100,000 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 10.95 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 23.39 tokens</li><li>max: 239 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------|
| <code>selected and reviewed for value current on metroplex</code> | <code>The literature was selected and reviewed for its value to the current research on metroplex operations</code> |
| <code>and</code> | <code>, and Dulchinos, V</code> |
| <code>,</code> | <code>, Atkins, S</code> |
* Loss: [<code>DenoisingAutoEncoderLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#denoisingautoencoderloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:-----:|:-----:|:-------------:|
| 0.04 | 500 | 7.3777 |
| 0.08 | 1000 | 6.9771 |
| 0.12 | 1500 | 6.8481 |
| 0.16 | 2000 | 6.7737 |
| 0.2 | 2500 | 6.6935 |
| 0.24 | 3000 | 6.6264 |
| 0.28 | 3500 | 6.5918 |
| 0.32 | 4000 | 6.5504 |
| 0.36 | 4500 | 6.4805 |
| 0.4 | 5000 | 6.4539 |
| 0.44 | 5500 | 6.4242 |
| 0.48 | 6000 | 6.4017 |
| 0.52 | 6500 | 6.3808 |
| 0.56 | 7000 | 6.3595 |
| 0.6 | 7500 | 6.3174 |
| 0.64 | 8000 | 6.2911 |
| 0.68 | 8500 | 6.2917 |
| 0.72 | 9000 | 6.2555 |
| 0.76 | 9500 | 6.2314 |
| 0.8 | 10000 | 6.2223 |
| 0.84 | 10500 | 6.1852 |
| 0.88 | 11000 | 6.2067 |
| 0.92 | 11500 | 6.1562 |
| 0.96 | 12000 | 6.1563 |
| 1.0 | 12500 | 6.092 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### DenoisingAutoEncoderLoss
```bibtex
@inproceedings{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
pages = "671--688",
url = "https://arxiv.org/abs/2104.06979",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "google-bert/bert-base-uncased", "datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:100000", "loss:DenoisingAutoEncoderLoss"], "widget": [{"source_sentence": "1109/icnsurv", "sentences": ["1109/icnsurv", "A cost function is needed to assign a performance metric value to a particular test run", "Aircraft OperationsFuture aircraft will sense, control, communicate, and navigate with increasing levels of autonomy, enabling new concepts in air traffic management"]}, {"source_sentence": "Table 1 of and to well as the median taxi from STBO KDFW", "sentences": ["Table 1 Metrics of accuracy, median and MAD of residuals as compared to STBO predictions, as well as the median taxi time from STBO for KDFW and KCLT airports", ", IEEE, 2005, pp", "RESULTS: EFFICIENCY ANALYSIS"]}, {"source_sentence": "gate time to known", "sentences": ["3FIVE INPUT VARIABLESParameterDescriptionHead windHead WindGust windGust WindCeiling_ftForecast CeilingVis_ftForecast VisibilityAct_Land_Wgt Actual Landing Weightfive parameters listed in", "Instead, gate departure time was assumed to be known", "The proof is very similar to that presented for the NP-completeness of ASP, and is based on reduction from PLANAR-P3( 6), hence we simply provide the main idea of the proof"]}, {"source_sentence": ", Hough\" Pattern Recognition, Vol", "sentences": ["9 Station Keeping scores", "\t\tAGARD CD-410", ", \"Generalizing the Hough Transform to Detect Arbitrary Shapes,\" Pattern Recognition, Vol"]}, {"source_sentence": "Airlines often ferry from locations fuel prices", "sentences": ["Scheduler Inputs and Order of ConsiderationThe surface model provides EOBT, UOBT, UTOT and other detailed flight-specific modeled input", "\t\t\tKeithWichman", "Airlines often ferry fuel from locations where fuel prices are cheapest"]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,406 |
emen/distilbert-base-uncased-finetuned-emotion
|
emen
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-06-30T11:35:19Z |
2022-06-30T12:17:01+00:00
| 96 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.9295
name: Accuracy
- type: f1
value: 0.9297561758557029
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2181
- Accuracy: 0.9295
- F1: 0.9298
## Model description
More information needed
## Intended uses & limitations
More information needed
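For reference, a minimal inference sketch (not part of the original card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="emen/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled that the fine-tuning finally worked!"))
# e.g. [{'label': 'joy', 'score': 0.98}]; exact labels and scores depend on the saved config
```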
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8495 | 1.0 | 250 | 0.3141 | 0.9085 | 0.9060 |
| 0.2511 | 2.0 | 500 | 0.2181 | 0.9295 | 0.9298 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2181
- Accuracy: 0.9295
- F1: 0.9298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8495 | 1.0 | 250 | 0.3141 | 0.9085 | 0.9060 |
| 0.2511 | 2.0 | 500 | 0.2181 | 0.9295 | 0.9298 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9295, "name": "Accuracy"}, {"type": "f1", "value": 0.9297561758557029, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,407 |
AventIQ-AI/opus-mt-en-roa_multilanguageTranslation
|
AventIQ-AI
| null |
[
"safetensors",
"marian",
"region:us"
] | 2025-02-19T07:16:46Z |
2025-02-20T10:25:29+00:00
| 49 | 3 |
---
{}
---
# Model Overview
This model is a fine-tuned version of the Helsinki-NLP OPUS-MT model for multiple language pairs. It has been fine-tuned on the Tatoeba dataset for the following language pairs:
- English to Marathi (en-mr)
- Esperanto to Dutch (eo-nl)
- Spanish to Portuguese (es-pt)
- French to Russian (fr-ru)
- Spanish to Galician (es-gl)
The model supports sequence-to-sequence translation and has been optimized for performance using FP16 quantization.
# Model Details
```
Base Model: Helsinki-NLP/opus-mt-en-roa
Training Dataset: Tatoeba dataset
Fine-tuned Language Pairs: en-mr, eo-nl, es-pt, fr-ru, es-gl
Evaluation Metric: BLEU Score (using sacreBLEU)
Training Framework: Hugging Face Transformers
Training Configuration
Optimizer: AdamW
Learning Rate: 2e-5
Batch Size: 16 (per device)
Weight Decay: 0.01
Epochs: 3
Precision: FP32 (initial training), converted to FP16 for inference
```
# Quantization and FP16 Conversion
To improve inference efficiency, models were converted to FP16:
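A minimal sketch of what this conversion step could look like (the FP32 source path is an assumption; the FP16 output path matches the inference example below):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

src = "fine_tuned_models/en-mr/final/"        # assumed FP32 checkpoint path
dst = "fine_tuned_models_fp16/en-mr/final/"   # FP16 copy used at inference

model = AutoModelForSeq2SeqLM.from_pretrained(src)
model = model.half()                          # cast all weights from FP32 to FP16
model.save_pretrained(dst)

tokenizer = AutoTokenizer.from_pretrained(src)
tokenizer.save_pretrained(dst)                # the tokenizer is unaffected by the cast
```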
# Inference Example
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
model = AutoModelForSeq2SeqLM.from_pretrained("fine_tuned_models_fp16/en-mr/final/", torch_dtype=torch.float16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("fine_tuned_models_fp16/en-mr/final/")
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
# Usage
The models can be used for translation tasks in various NLP applications, including chatbots, document translation, and real-time communication.
# Limitations
- May not generalize well to domain-specific text.
- FP16 quantization may lead to a minor loss in precision.
- Translation accuracy depends on the quality of the dataset.
# Citation
If you use this model, please cite the original OPUS-MT paper and acknowledge the fine-tuning process conducted using the Tatoeba dataset.
| null |
Non_BioNLP
|
# Model Overview
This model is a fine-tuned version of the Helsinki-NLP OPUS-MT model for multiple language pairs. It has been fine-tuned on the Tatoeba dataset for the following language pairs:
- English to Marathi (en-mr)
- Esperanto to Dutch (eo-nl)
- Spanish to Portuguese (es-pt)
- French to Russian (fr-ru)
- Spanish to Galician (es-gl)
The model supports sequence-to-sequence translation and has been optimized for performance using FP16 quantization.
# Model Details
```
Base Model: Helsinki-NLP/opus-mt-en-roa
Training Dataset: Tatoeba dataset
Fine-tuned Language Pairs: en-mr, eo-nl, es-pt, fr-ru, es-gl
Evaluation Metric: BLEU Score (using sacreBLEU)
Training Framework: Hugging Face Transformers
Training Configuration
Optimizer: AdamW
Learning Rate: 2e-5
Batch Size: 16 (per device)
Weight Decay: 0.01
Epochs: 3
Precision: FP32 (initial training), converted to FP16 for inference
```
# Quantization and FP16 Conversion
To improve inference efficiency, models were converted to FP16:
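A minimal sketch of what this conversion step could look like (the FP32 source path is an assumption; the FP16 output path matches the inference example below):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

src = "fine_tuned_models/en-mr/final/"        # assumed FP32 checkpoint path
dst = "fine_tuned_models_fp16/en-mr/final/"   # FP16 copy used at inference

model = AutoModelForSeq2SeqLM.from_pretrained(src)
model = model.half()                          # cast all weights from FP32 to FP16
model.save_pretrained(dst)

tokenizer = AutoTokenizer.from_pretrained(src)
tokenizer.save_pretrained(dst)                # the tokenizer is unaffected by the cast
```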
# Inference Example
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
model = AutoModelForSeq2SeqLM.from_pretrained("fine_tuned_models_fp16/en-mr/final/", torch_dtype=torch.float16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("fine_tuned_models_fp16/en-mr/final/")
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
# Usage
The models can be used for translation tasks in various NLP applications, including chatbots, document translation, and real-time communication.
# Limitations
- May not generalize well to domain-specific text.
- FP16 quantization may lead to a minor loss in precision.
- Translation accuracy depends on the quality of the dataset.
# Citation
If you use this model, please cite the original OPUS-MT paper and acknowledge the fine-tuning process conducted using the Tatoeba dataset.
|
{}
|
task
|
[
"TRANSLATION"
] | 45,409 |
grakshit/sentiment_imdb
|
grakshit
|
text-classification
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-25T22:23:34Z |
2023-09-25T23:05:23+00:00
| 9 | 0 |
---
base_model: roberta-base
datasets:
- imdb
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: sentiment_imdb
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.95576
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_imdb
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1831
- Accuracy: 0.9558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1838 | 1.0 | 1563 | 0.2139 | 0.9333 |
| 0.134 | 2.0 | 3126 | 0.1831 | 0.9558 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_imdb
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1831
- Accuracy: 0.9558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1838 | 1.0 | 1563 | 0.2139 | 0.9333 |
| 0.134 | 2.0 | 3126 | 0.1831 | 0.9558 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
{"base_model": "roberta-base", "datasets": ["imdb"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "sentiment_imdb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.95576, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,410 |
nextai-team/Moe-4x7b-reason-code-qa
|
nextai-team
|
text-generation
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"code",
"QA",
"reasoning",
"maths",
"sql",
"mistral",
"zephyr",
"codellama",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-01-28T21:01:13Z |
2024-02-08T16:24:30+00:00
| 97 | 5 |
---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- code
- QA
- reasoning
- mixtral
- maths
- sql
- mistral
- zephyr
- codellama
---
Model Details
- Model Name: Moe-4x7b-reason-code-qa
- Publisher: nextai-team
- Model Type: Question Answering & Code Generation
- Architecture: Mixture of Experts (MoE)
- Model Size: 4x7 billion parameters
Overview
Moe-4x7b-reason-code-qa is an advanced AI model designed by the nextai-team to enhance question answering and code generation capabilities. Building upon the foundation of its predecessor, this iteration introduces refined mechanisms and expanded training datasets to deliver more precise and contextually relevant responses.
How to Use
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "nextai-team/Moe-4x7b-reason-code-qa"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
device_map="auto",
model_kwargs={"torch_dtype": torch.float16},
)
def generate_response(query):
messages = [{"role": "user", "content": query}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
return outputs[0]['generated_text']
response = generate_response("How to learn coding. Please provide a step-by-step procedure")
print(response)
```
Intended Use
This model is intended for developers, data scientists, and researchers seeking to integrate sophisticated natural language understanding and code generation functionalities into their applications. Ideal use cases include but are not limited to:
- Automated coding assistance
- Technical support bots
- Educational tools for learning programming
- Enhancing code review processes
Model Architecture
The model employs a Mixture of Experts (MoE) architecture, which allows it to efficiently manage its vast number of parameters for specialized tasks. This architecture facilitates the model's ability to discern subtle nuances in programming languages and natural language queries, leading to more accurate code generation and question answering performance.
Training Data
The model has been trained on a diverse and extensive corpus comprising technical documentation, open-source code repositories, Stack Overflow questions and answers, and other programming-related texts. Special attention has been given to ensure a wide range of programming languages and frameworks are represented in the training data to enhance the model's versatility.
Performance
The model demonstrates significant improvements in accuracy and relevance over its predecessor, particularly in complex coding scenarios and detailed technical queries. Benchmarks and performance metrics can be provided upon request.
Limitations and Biases
While the model represents a leap forward in AI-assisted coding and technical Q&A, it is not without limitations. The model may exhibit biases present in its training data, and its performance can vary based on the specificity and context of the input queries. Users are encouraged to critically assess the model's output and consider it as one of several tools in the decision-making process.
Ethical Considerations
We are committed to ethical AI development and urge users to employ Moe-4x7b-reason-code-qa responsibly. This includes but is not limited to avoiding the generation of harmful or unsafe code, respecting copyright and intellectual property rights, and being mindful of privacy concerns when inputting sensitive information into the model.
Demo API:
app.nextai.co.in
Usage Instructions
For detailed instructions on how to integrate and utilize Moe-4x7b-reason-code-qa in your projects, please refer to our GitHub repository and Hugging Face documentation.
Citation
If you use Moe-4x7b-reason-code-qa in your research or application, please cite it as follows:
@misc{nextai2024moe4x7b,
  title={Moe-4x7b-reason-code-qa: Enhancing Question Answering and Code Generation with Mixture of Experts},
  author={NextAI Team},
  year={2024},
  publisher={Hugging Face}
}
| null |
Non_BioNLP
|
Model Details
- Model Name: Moe-4x7b-reason-code-qa
- Publisher: nextai-team
- Model Type: Question Answering & Code Generation
- Architecture: Mixture of Experts (MoE)
- Model Size: 4x7 billion parameters
Overview
Moe-4x7b-reason-code-qa is an advanced AI model designed by the nextai-team to enhance question answering and code generation capabilities. Building upon the foundation of its predecessor, this iteration introduces refined mechanisms and expanded training datasets to deliver more precise and contextually relevant responses.
How to Use
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "nextai-team/Moe-4x7b-reason-code-qa"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
device_map="auto",
model_kwargs={"torch_dtype": torch.float16},
)
def generate_response(query):
messages = [{"role": "user", "content": query}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
return outputs[0]['generated_text']
response = generate_response("How to learn coding. Please provide a step-by-step procedure")
print(response)
```
Intended Use
This model is intended for developers, data scientists, and researchers seeking to integrate sophisticated natural language understanding and code generation functionalities into their applications. Ideal use cases include but are not limited to:
- Automated coding assistance
- Technical support bots
- Educational tools for learning programming
- Enhancing code review processes
Model Architecture
The model employs a Mixture of Experts (MoE) architecture, which allows it to efficiently manage its vast number of parameters for specialized tasks. This architecture facilitates the model's ability to discern subtle nuances in programming languages and natural language queries, leading to more accurate code generation and question answering performance.
Training Data
The model has been trained on a diverse and extensive corpus comprising technical documentation, open-source code repositories, Stack Overflow questions and answers, and other programming-related texts. Special attention has been given to ensure a wide range of programming languages and frameworks are represented in the training data to enhance the model's versatility.
Performance
The model demonstrates significant improvements in accuracy and relevance over its predecessor, particularly in complex coding scenarios and detailed technical queries. Benchmarks and performance metrics can be provided upon request.
Limitations and Biases
While the model represents a leap forward in AI-assisted coding and technical Q&A, it is not without limitations. The model may exhibit biases present in its training data, and its performance can vary based on the specificity and context of the input queries. Users are encouraged to critically assess the model's output and consider it as one of several tools in the decision-making process.
Ethical Considerations
We are committed to ethical AI development and urge users to employ Moe-4x7b-reason-code-qa responsibly. This includes but is not limited to avoiding the generation of harmful or unsafe code, respecting copyright and intellectual property rights, and being mindful of privacy concerns when inputting sensitive information into the model.
Demo API:
app.nextai.co.in
Usage Instructions
For detailed instructions on how to integrate and utilize Moe-4x7b-reason-code-qa in your projects, please refer to our GitHub repository and Hugging Face documentation.
Citation
If you use Moe-4x7b-reason-code-qa in your research or application, please cite it as follows:
@misc{nextai2024moe4x7b,
  title={Moe-4x7b-reason-code-qa: Enhancing Question Answering and Code Generation with Mixture of Experts},
  author={NextAI Team},
  year={2024},
  publisher={Hugging Face}
}
|
{"language": ["en"], "library_name": "transformers", "license": "apache-2.0", "tags": ["code", "QA", "reasoning", "mixtral", "maths", "sql", "mistral", "zephyr", "codellama"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,411 |
marklim100/test-model-v3
|
marklim100
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-08-08T09:57:49Z |
2023-08-08T09:58:26+00:00
| 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# marklim100/test-model-v3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
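A minimal sketch of that two-step training loop with the SetFit trainer (the base checkpoint and the tiny dataset are assumptions; this card does not state which body was used):
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative dataset (an assumption) with "text" and "label" columns.
train_ds = Dataset.from_dict({
    "text": [
        "i loved the spiderman movie!",
        "what a delightful surprise this was",
        "pineapple on pizza is the worst 🤮",
        "this film was a waste of time",
    ],
    "label": [1, 1, 0, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# Step 1 (contrastive fine-tuning of the body) and step 2 (fitting the
# classification head on its features) both run inside trainer.train().
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()
```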
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("marklim100/test-model-v3")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# marklim100/test-model-v3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
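A minimal sketch of that two-step training loop with the SetFit trainer (the base checkpoint and the tiny dataset are assumptions; this card does not state which body was used):
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative dataset (an assumption) with "text" and "label" columns.
train_ds = Dataset.from_dict({
    "text": [
        "i loved the spiderman movie!",
        "what a delightful surprise this was",
        "pineapple on pizza is the worst 🤮",
        "this film was a waste of time",
    ],
    "label": [1, 1, 0, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# Step 1 (contrastive fine-tuning of the body) and step 2 (fitting the
# classification head on its features) both run inside trainer.train().
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()
```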
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("marklim100/test-model-v3")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,412 |
snumin44/medical-biencoder-ko-bert-context
|
snumin44
|
feature-extraction
|
[
"safetensors",
"bert",
"medical",
"feature-extraction",
"ko",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:mit",
"region:us"
] | 2024-08-27T13:31:18Z |
2025-02-11T13:26:13+00:00
| 42 | 1 |
---
base_model:
- klue/bert-base
language:
- ko
license: mit
pipeline_tag: feature-extraction
tags:
- medical
---
# 🍊 Korean Medical DPR (Dense Passage Retrieval)
## 1. Intro
This is a retrieval model with a Bi-Encoder architecture for use in the **medical domain**.
**SapBERT-KO-EN** was used as the base model in order to handle medical records written in mixed Korean and English.
Questions are encoded with the Question Encoder, and passages with the Context Encoder.
- Question Encoder : [https://huggingface.co/snumin44/medical-biencoder-ko-bert-question](https://huggingface.co/snumin44/medical-biencoder-ko-bert-question)
(※ This model was trained on AI Hub's [large-scale AI healthcare Q&A dataset](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71762).)
## 2. Model
**(1) Self Alignment Pretraining (SAP)**
Korean medical records are written in a **mix of Korean and English**, so a model that can also recognize English terms is needed.
The model was trained with a Multi-Similarity Loss so that terms sharing the **same concept code** have high similarity.
```
e.g.) C3843080 || 고혈압 질환
C3843080 || Hypertension
C3843080 || High Blood Pressure
C3843080 || HTN
C3843080 || HBP
```
- SapBERT-KO-EN : [https://huggingface.co/snumin44/sap-bert-ko-en](https://huggingface.co/snumin44/sap-bert-ko-en)
- Github : [https://github.com/snumin44/SapBERT-KO-EN](https://github.com/millet04/SapBERT-KO-EN)
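A minimal sketch of this objective using `pytorch-metric-learning`'s Multi-Similarity Loss (an assumption; the linked SapBERT-KO-EN repository may implement the loss differently):
```python
import torch
from pytorch_metric_learning import losses

# Terms sharing a concept code get the same integer label.
loss_fn = losses.MultiSimilarityLoss(alpha=2, beta=50, base=0.5)

embeddings = torch.randn(6, 768)           # encoder outputs for 6 term strings
codes = torch.tensor([0, 0, 0, 1, 1, 1])   # e.g. three surface forms of C3843080
loss = loss_fn(embeddings, codes)          # pulls same-code terms together
```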
**(2) Dense Passage Retrieval (DPR)**
Additional fine-tuning is required to turn SapBERT-KO-EN into a retrieval model.
It was fine-tuned in the DPR style, computing query-passage similarity with a Bi-Encoder architecture.
As shown below, the training set was the original dataset **augmented with mixed Korean-English samples**.
```
e.g.) Korean disease name: 고혈압
      English disease name: Hypertension
      Query (original): 아버지가 고혈압인데 그게 뭔지 모르겠어. 고혈압이 뭔지 설명좀 해줘.
      Query (augmented): 아버지가 Hypertension 인데 그게 뭔지 모르겠어. Hypertension 이 뭔지 설명좀 해줘.
```
- Github : [https://github.com/millet04/DPR-KO](https://github.com/millet04/DPR-KO)
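A minimal sketch of the standard DPR objective with in-batch negatives (an assumption about the linked DPR-KO implementation):
```python
import torch
import torch.nn.functional as F

q = torch.randn(16, 768)      # question embeddings for a batch of 16
c = torch.randn(16, 768)      # positive context embeddings, aligned by index

scores = q @ c.T              # 16x16 similarity matrix; off-diagonal entries
labels = torch.arange(16)     # act as in-batch negatives for each query
loss = F.cross_entropy(scores, labels)
```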
## 3. Training
**(1) Self Alignment Pretraining (SAP)**
The base model and hyperparameters used to train SapBERT-KO-EN are as follows.
**KOSTOM**, a medical terminology dictionary covering both Korean and English terms, was used as the training data.
- Model : klue/bert-base
- Dataset : **KOSTOM**
- Epochs : 1
- Batch Size : 64
- Max Length : 64
- Dropout : 0.1
- Pooler : 'cls'
- Eval Step : 100
- Threshold : 0.8
- Scale Positive Sample : 1
- Scale Negative Sample : 60
**(2) Dense Passage Retrieval (DPR)**
The base model and hyperparameters used for fine-tuning are as follows.
- Model : SapBERT-KO-EN(klue/bert-base)
- Dataset : **Large-scale AI healthcare Q&A dataset (AI Hub)**
- Epochs : 10
- Batch Size : 64
- Dropout : 0.1
- Pooler : 'cls'
## 4. Example
This model encodes the context side and must be used together with the Question model.
It can be confirmed that a question and a passage about the same disease show high similarity.
(※ The medical texts in the code below were generated with ChatGPT.)
(※ Given the nature of the training data, the model works better on text that is more polished than these examples.)
```python
import numpy as np
from transformers import AutoModel, AutoTokenizer
# Question Model
q_model_path = 'snumin44/medical-biencoder-ko-bert-question'
q_model = AutoModel.from_pretrained(q_model_path)
q_tokenizer = AutoTokenizer.from_pretrained(q_model_path)
# Context Model
c_model_path = 'snumin44/medical-biencoder-ko-bert-context'
c_model = AutoModel.from_pretrained(c_model_path)
c_tokenizer = AutoTokenizer.from_pretrained(c_model_path)
query = 'high blood pressure 처방 사례'
targets = [
"""고혈압 진단.
환자 상담 및 생활습관 교정 권고. 저염식, 규칙적인 운동, 금연, 금주 지시.
환자 재방문. 혈압: 150/95mmHg. 약물치료 시작. Amlodipine 5mg 1일 1회 처방.""",
"""응급실 도착 후 위 내시경 진행.
소견: Gastric ulcer에서 Forrest IIb 관찰됨. 출혈은 소량의 삼출성 출혈 형태.
처치: 에피네프린 주사로 출혈 감소 확인. Hemoclip 2개로 출혈 부위 클리핑하여 지혈 완료.""",
"""혈중 높은 지방 수치 및 지방간 소견.
다발성 gallstones 확인. 증상 없을 경우 경과 관찰 권장.
우측 renal cyst, 양성 가능성 높으며 추가적인 처치 불필요 함."""
]
query_feature = q_tokenizer(query, return_tensors='pt')
query_outputs = q_model(**query_feature, return_dict=True)
query_embeddings = query_outputs.pooler_output.detach().numpy().squeeze()
def cos_sim(A, B):
return np.dot(A, B) / (np.linalg.norm(A) * np.linalg.norm(B))
for idx, target in enumerate(targets):
target_feature = c_tokenizer(target, return_tensors='pt')
target_outputs = c_model(**target_feature, return_dict=True)
target_embeddings = target_outputs.pooler_output.detach().numpy().squeeze()
similarity = cos_sim(query_embeddings, target_embeddings)
print(f"Similarity between query and target {idx}: {similarity:.4f}")
```
```
Similarity between query and target 0: 0.2674
Similarity between query and target 1: 0.0416
Similarity between query and target 2: 0.0476
```
## Citing
```
@inproceedings{liu2021self,
title={Self-Alignment Pretraining for Biomedical Entity Representations},
author={Liu, Fangyu and Shareghi, Ehsan and Meng, Zaiqiao and Basaldella, Marco and Collier, Nigel},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
pages={4228--4238},
month = jun,
year={2021}
}
@article{karpukhin2020dense,
title={Dense Passage Retrieval for Open-Domain Question Answering},
author={Karpukhin, Vladimir and Oğuz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau},
journal={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year={2020}
}
```
| null |
BioNLP
|
# 🍊 Korean Medical DPR (Dense Passage Retrieval)
## 1. Intro
This is a retrieval model with a Bi-Encoder architecture for use in the **medical domain**.
**SapBERT-KO-EN** was used as the base model in order to handle medical records written in mixed Korean and English.
Questions are encoded with the Question Encoder, and passages with the Context Encoder.
- Question Encoder : [https://huggingface.co/snumin44/medical-biencoder-ko-bert-question](https://huggingface.co/snumin44/medical-biencoder-ko-bert-question)
(※ This model was trained on AI Hub's [large-scale AI healthcare Q&A dataset](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71762).)
## 2. Model
**(1) Self Alignment Pretraining (SAP)**
Korean medical records are written in a **mix of Korean and English**, so a model that can also recognize English terms is needed.
The model was trained with a Multi-Similarity Loss so that terms sharing the **same concept code** have high similarity.
```
e.g.) C3843080 || 고혈압 질환
C3843080 || Hypertension
C3843080 || High Blood Pressure
C3843080 || HTN
C3843080 || HBP
```
- SapBERT-KO-EN : [https://huggingface.co/snumin44/sap-bert-ko-en](https://huggingface.co/snumin44/sap-bert-ko-en)
- Github : [https://github.com/snumin44/SapBERT-KO-EN](https://github.com/millet04/SapBERT-KO-EN)
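A minimal sketch of this objective using `pytorch-metric-learning`'s Multi-Similarity Loss (an assumption; the linked SapBERT-KO-EN repository may implement the loss differently):
```python
import torch
from pytorch_metric_learning import losses

# Terms sharing a concept code get the same integer label.
loss_fn = losses.MultiSimilarityLoss(alpha=2, beta=50, base=0.5)

embeddings = torch.randn(6, 768)           # encoder outputs for 6 term strings
codes = torch.tensor([0, 0, 0, 1, 1, 1])   # e.g. three surface forms of C3843080
loss = loss_fn(embeddings, codes)          # pulls same-code terms together
```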
**(2) Dense Passage Retrieval (DPR)**
Additional fine-tuning is required to turn SapBERT-KO-EN into a retrieval model.
It was fine-tuned in the DPR style, computing query-passage similarity with a Bi-Encoder architecture.
As shown below, the training set was the original dataset **augmented with mixed Korean-English samples**.
```
e.g.) Korean disease name: 고혈압
      English disease name: Hypertension
      Query (original): 아버지가 고혈압인데 그게 뭔지 모르겠어. 고혈압이 뭔지 설명좀 해줘.
      Query (augmented): 아버지가 Hypertension 인데 그게 뭔지 모르겠어. Hypertension 이 뭔지 설명좀 해줘.
```
- Github : [https://github.com/millet04/DPR-KO](https://github.com/millet04/DPR-KO)
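A minimal sketch of the standard DPR objective with in-batch negatives (an assumption about the linked DPR-KO implementation):
```python
import torch
import torch.nn.functional as F

q = torch.randn(16, 768)      # question embeddings for a batch of 16
c = torch.randn(16, 768)      # positive context embeddings, aligned by index

scores = q @ c.T              # 16x16 similarity matrix; off-diagonal entries
labels = torch.arange(16)     # act as in-batch negatives for each query
loss = F.cross_entropy(scores, labels)
```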
## 3. Training
**(1) Self Alignment Pretraining (SAP)**
The base model and hyperparameters used to train SapBERT-KO-EN are as follows.
**KOSTOM**, a medical terminology dictionary covering both Korean and English terms, was used as the training data.
- Model : klue/bert-base
- Dataset : **KOSTOM**
- Epochs : 1
- Batch Size : 64
- Max Length : 64
- Dropout : 0.1
- Pooler : 'cls'
- Eval Step : 100
- Threshold : 0.8
- Scale Positive Sample : 1
- Scale Negative Sample : 60
**(2) Dense Passage Retrieval (DPR)**
The base model and hyperparameters used for fine-tuning are as follows.
- Model : SapBERT-KO-EN(klue/bert-base)
- Dataset : **Large-scale AI healthcare Q&A dataset (AI Hub)**
- Epochs : 10
- Batch Size : 64
- Dropout : 0.1
- Pooler : 'cls'
## 4. Example
This model encodes the context side and must be used together with the Question model.
It can be confirmed that a question and a passage about the same disease show high similarity.
(※ The medical texts in the code below were generated with ChatGPT.)
(※ Given the nature of the training data, the model works better on text that is more polished than these examples.)
```python
import numpy as np
from transformers import AutoModel, AutoTokenizer
# Question Model
q_model_path = 'snumin44/medical-biencoder-ko-bert-question'
q_model = AutoModel.from_pretrained(q_model_path)
q_tokenizer = AutoTokenizer.from_pretrained(q_model_path)
# Context Model
c_model_path = 'snumin44/medical-biencoder-ko-bert-context'
c_model = AutoModel.from_pretrained(c_model_path)
c_tokenizer = AutoTokenizer.from_pretrained(c_model_path)
query = 'high blood pressure 처방 사례'
targets = [
"""고혈압 진단.
환자 상담 및 생활습관 교정 권고. 저염식, 규칙적인 운동, 금연, 금주 지시.
환자 재방문. 혈압: 150/95mmHg. 약물치료 시작. Amlodipine 5mg 1일 1회 처방.""",
"""응급실 도착 후 위 내시경 진행.
소견: Gastric ulcer에서 Forrest IIb 관찰됨. 출혈은 소량의 삼출성 출혈 형태.
처치: 에피네프린 주사로 출혈 감소 확인. Hemoclip 2개로 출혈 부위 클리핑하여 지혈 완료.""",
"""혈중 높은 지방 수치 및 지방간 소견.
다발성 gallstones 확인. 증상 없을 경우 경과 관찰 권장.
우측 renal cyst, 양성 가능성 높으며 추가적인 처치 불필요 함."""
]
query_feature = q_tokenizer(query, return_tensors='pt')
query_outputs = q_model(**query_feature, return_dict=True)
query_embeddings = query_outputs.pooler_output.detach().numpy().squeeze()
def cos_sim(A, B):
return np.dot(A, B) / (np.linalg.norm(A) * np.linalg.norm(B))
for idx, target in enumerate(targets):
target_feature = c_tokenizer(target, return_tensors='pt')
target_outputs = c_model(**target_feature, return_dict=True)
target_embeddings = target_outputs.pooler_output.detach().numpy().squeeze()
similarity = cos_sim(query_embeddings, target_embeddings)
print(f"Similarity between query and target {idx}: {similarity:.4f}")
```
```
Similarity between query and target 0: 0.2674
Similarity between query and target 1: 0.0416
Similarity between query and target 2: 0.0476
```
## Citing
```
@inproceedings{liu2021self,
title={Self-Alignment Pretraining for Biomedical Entity Representations},
author={Liu, Fangyu and Shareghi, Ehsan and Meng, Zaiqiao and Basaldella, Marco and Collier, Nigel},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
pages={4228--4238},
month = jun,
year={2021}
}
@article{karpukhin2020dense,
title={Dense Passage Retrieval for Open-Domain Question Answering},
author={Karpukhin, Vladimir and Oğuz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau},
journal={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year={2020}
}
```
|
{"base_model": ["klue/bert-base"], "language": ["ko"], "license": "mit", "pipeline_tag": "feature-extraction", "tags": ["medical"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,414 |
rambodazimi/bert-base-uncased-finetuned-FFT-STSB
|
rambodazimi
| null |
[
"safetensors",
"bert",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-08-16T04:43:36Z |
2024-08-16T04:47:01+00:00
| 4 | 0 |
---
datasets:
- glue
license: apache-2.0
metrics:
- pearsonr
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-FFT-STSB
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: stsb
metrics:
- type: pearsonr
value: 0.884685594642135
name: Pearsonr
- type: spearmanr
value: 0.8793392857750939
name: Spearmanr
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-fft-stsb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Pearsonr: 0.8847
- Spearmanr: 0.8793
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- weight_decay: 0.01
- num_epochs: 2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-fft-stsb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Pearsonr: 0.8847
- Spearmanr: 0.8793
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- weight_decay: 0.01
- num_epochs: 2
|
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["pearsonr", "spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-FFT-STSB", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "stsb"}, "metrics": [{"type": "pearsonr", "value": 0.884685594642135, "name": "Pearsonr"}, {"type": "spearmanr", "value": 0.8793392857750939, "name": "Spearmanr"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,415 |
obss/mt5-base-3task-highlight-combined3
|
obss
|
text2text-generation
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question-generation",
"answer-extraction",
"question-answering",
"text-generation",
"tr",
"dataset:tquad1",
"dataset:tquad2",
"dataset:xquad",
"arxiv:2111.06476",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2022-12-14T08:31:50+00:00
| 176 | 0 |
---
datasets:
- tquad1
- tquad2
- xquad
language: tr
license: cc-by-4.0
pipeline_tag: text2text-generation
tags:
- text2text-generation
- question-generation
- answer-extraction
- question-answering
- text-generation
widget:
- text: 'generate question: Legendary Entertainment, 2016 yılında bilimkurgu romanı
Dune''un <hl> film ve TV haklarını <hl> satın aldı. Geliştirme kısa bir süre sonra
başladı. Villeneuve projeye olan ilgisini dile getirdi ve resmi olarak yönetmen
olarak imza attı. Roth ve Spaihts ile birlikte çalışarak senaryoyu iki bölüme
ayırdı ve 1965 romanının 21. yüzyıla güncellenmiş bir uyarlamasını ekledi.'
example_title: Question Generation (Movie)
- text: 'generate question: Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik
olarak soru ve cevap üretilebilir. Bu proje ile paylaşılan kaynak kodu ile <hl>
Türkçe Soru Üretme / Soru Cevaplama <hl> konularında yeni akademik çalışmalar
yapılabilir. Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.'
example_title: Question Generation (Open Domain)
- text: 'generate question: Cenevizlilerin önemli üslerinden <hl> Amasra’yı <hl> aldı.
1479’da bir antlaşma yaparak Venedik''le 16 yıllık savaşa sona verdi.'
example_title: Question Generation (History)
- text: 'extract answers: Cenevizlilerin önemli üslerinden Amasra’yı aldı. <hl> 1479’da
bir antlaşma yaparak Venedik''le 16 yıllık savaşa sona verdi. <hl>'
example_title: Answer Extraction (History)
- text: 'question: Bu model ne ise yarar? context: Çalışmada sunulan yöntemle, Türkçe
metinlerden otomatik olarak soru ve cevap üretilebilir. Bu proje ile paylaşılan
kaynak kodu ile Türkçe Soru Üretme / Soru Cevaplama konularında yeni akademik
çalışmalar yapılabilir. Projenin detaylarına paylaşılan Github ve Arxiv linklerinden
ulaşılabilir.'
example_title: Question Answering (Open Domain)
---
# mt5-base for Turkish Question Generation
Automated question generation and question answering using text-to-text transformers by OBSS AI.
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-base-3task-highlight-combined3')
```
## Citation 📜
```
@article{akyon2022questgen,
author = {Akyon, Fatih Cagatay and Cavusoglu, Ali Devrim Ekin and Cengiz, Cemil and Altinuc, Sinan Onur and Temizel, Alptekin},
doi = {10.3906/elk-1300-0632.3914},
journal = {Turkish Journal of Electrical Engineering and Computer Sciences},
title = {{Automated question generation and question answering from Turkish texts}},
url = {https://journals.tubitak.gov.tr/elektrik/vol30/iss5/17/},
year = {2022}
}
```
## Overview ✔️
**Language model:** mt5-base
**Language:** Turkish
**Downstream-task:** Extractive QA/QG, Answer Extraction
**Training data:** TQuADv2-train, TQuADv2-val, XQuAD.tr
**Code:** https://github.com/obss/turkish-question-generation
**Paper:** https://journals.tubitak.gov.tr/elektrik/vol30/iss5/17/
## Hyperparameters
```
batch_size = 256
n_epochs = 15
base_LM_model = "mt5-base"
max_source_length = 512
max_target_length = 64
learning_rate = 1.0e-3
task_list = ["qa", "qg", "ans_ext"]
qg_format = "highlight"
```
## Performance
Refer to [paper](https://arxiv.org/abs/2111.06476).
## Usage 🔥
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-base-3task-highlight-combined3')
context = """
Bu modelin eğitiminde, Türkçe soru cevap verileri kullanılmıştır.
Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap
üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme
/ Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir.
Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.
"""
# a) Fully Automated Question Generation
generation_api(task='question-generation', context=context)
# b) Question Answering
question = "Bu model ne işe yarar?"
generation_api(task='question-answering', context=context, question=question)
# c) Answer Extraction
generation_api(task='answer-extraction', context=context)
```
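If the project repository is unavailable, a plain `transformers` sketch along the same lines (the task prefix follows the widget examples above; the generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "obss/mt5-base-3task-highlight-combined3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Highlight the answer span with <hl> tokens and prepend the task prefix.
text = "generate question: Cenevizlilerin önemli üslerinden <hl> Amasra’yı <hl> aldı."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```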
| null |
Non_BioNLP
|
# mt5-base for Turkish Question Generation
Automated question generation and question answering using text-to-text transformers by OBSS AI.
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-base-3task-highlight-combined3')
```
## Citation 📜
```
@article{akyon2022questgen,
author = {Akyon, Fatih Cagatay and Cavusoglu, Ali Devrim Ekin and Cengiz, Cemil and Altinuc, Sinan Onur and Temizel, Alptekin},
doi = {10.3906/elk-1300-0632.3914},
journal = {Turkish Journal of Electrical Engineering and Computer Sciences},
title = {{Automated question generation and question answering from Turkish texts}},
url = {https://journals.tubitak.gov.tr/elektrik/vol30/iss5/17/},
year = {2022}
}
```
## Overview ✔️
**Language model:** mt5-base
**Language:** Turkish
**Downstream-task:** Extractive QA/QG, Answer Extraction
**Training data:** TQuADv2-train, TQuADv2-val, XQuAD.tr
**Code:** https://github.com/obss/turkish-question-generation
**Paper:** https://journals.tubitak.gov.tr/elektrik/vol30/iss5/17/
## Hyperparameters
```
batch_size = 256
n_epochs = 15
base_LM_model = "mt5-base"
max_source_length = 512
max_target_length = 64
learning_rate = 1.0e-3
task_list = ["qa", "qg", "ans_ext"]
qg_format = "highlight"
```
## Performance
Refer to [paper](https://arxiv.org/abs/2111.06476).
## Usage 🔥
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-base-3task-highlight-combined3')
context = """
Bu modelin eğitiminde, Türkçe soru cevap verileri kullanılmıştır.
Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap
üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme
/ Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir.
Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.
"""
# a) Fully Automated Question Generation
generation_api(task='question-generation', context=context)
# b) Question Answering
question = "Bu model ne işe yarar?"
generation_api(task='question-answering', context=context, question=question)
# b) Answer Extraction
generation_api(task='answer-extraction', context=context)
```
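If the project repository is unavailable, a plain `transformers` sketch along the same lines (the task prefix follows the widget examples above; the generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "obss/mt5-base-3task-highlight-combined3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Highlight the answer span with <hl> tokens and prepend the task prefix.
text = "generate question: Cenevizlilerin önemli üslerinden <hl> Amasra’yı <hl> aldı."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```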
|
{"datasets": ["tquad1", "tquad2", "xquad"], "language": "tr", "license": "cc-by-4.0", "pipeline_tag": "text2text-generation", "tags": ["text2text-generation", "question-generation", "answer-extraction", "question-answering", "text-generation"], "widget": [{"text": "generate question: Legendary Entertainment, 2016 yılında bilimkurgu romanı Dune'un <hl> film ve TV haklarını <hl> satın aldı. Geliştirme kısa bir süre sonra başladı. Villeneuve projeye olan ilgisini dile getirdi ve resmi olarak yönetmen olarak imza attı. Roth ve Spaihts ile birlikte çalışarak senaryoyu iki bölüme ayırdı ve 1965 romanının 21. yüzyıla güncellenmiş bir uyarlamasını ekledi.", "example_title": "Question Generation (Movie)"}, {"text": "generate question: Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap üretilebilir. Bu proje ile paylaşılan kaynak kodu ile <hl> Türkçe Soru Üretme / Soru Cevaplama <hl> konularında yeni akademik çalışmalar yapılabilir. Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.", "example_title": "Question Generation (Open Domain)"}, {"text": "generate question: Cenevizlilerin önemli üslerinden <hl> Amasra’yı <hl> aldı. 1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa sona verdi.", "example_title": "Question Generation (History)"}, {"text": "extract answers: Cenevizlilerin önemli üslerinden Amasra’yı aldı. <hl> 1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa sona verdi. <hl>", "example_title": "Answer Extraction (History)"}, {"text": "question: Bu model ne ise yarar? context: Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme / Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir. Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.", "example_title": "Question Answering (Open Domain)"}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,416 |
muhtasham/finetuned-self_mlm_mini
|
muhtasham
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-12-06T22:38:26Z |
2022-12-06T22:56:50+00:00
| 112 | 0 |
---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuned-self_mlm_mini
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- type: accuracy
value: 0.8224
name: Accuracy
- type: f1
value: 0.9025460930640913
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-self_mlm_mini
This model is a fine-tuned version of [muhtasham/bert-tiny-mlm-finetuned-imdb](https://huggingface.co/muhtasham/bert-tiny-mlm-finetuned-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6150
- Accuracy: 0.8224
- F1: 0.9025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4426 | 2.55 | 500 | 0.4673 | 0.7928 | 0.8844 |
| 0.2845 | 5.1 | 1000 | 0.3099 | 0.8697 | 0.9303 |
| 0.2282 | 7.65 | 1500 | 0.3432 | 0.8589 | 0.9241 |
| 0.1819 | 10.2 | 2000 | 0.2702 | 0.8998 | 0.9472 |
| 0.1461 | 12.76 | 2500 | 0.4852 | 0.8344 | 0.9097 |
| 0.111 | 15.31 | 3000 | 0.6807 | 0.7950 | 0.8858 |
| 0.0883 | 17.86 | 3500 | 0.6150 | 0.8224 | 0.9025 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-self_mlm_mini
This model is a fine-tuned version of [muhtasham/bert-tiny-mlm-finetuned-imdb](https://huggingface.co/muhtasham/bert-tiny-mlm-finetuned-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6150
- Accuracy: 0.8224
- F1: 0.9025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4426 | 2.55 | 500 | 0.4673 | 0.7928 | 0.8844 |
| 0.2845 | 5.1 | 1000 | 0.3099 | 0.8697 | 0.9303 |
| 0.2282 | 7.65 | 1500 | 0.3432 | 0.8589 | 0.9241 |
| 0.1819 | 10.2 | 2000 | 0.2702 | 0.8998 | 0.9472 |
| 0.1461 | 12.76 | 2500 | 0.4852 | 0.8344 | 0.9097 |
| 0.111 | 15.31 | 3000 | 0.6807 | 0.7950 | 0.8858 |
| 0.0883 | 17.86 | 3500 | 0.6150 | 0.8224 | 0.9025 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuned-self_mlm_mini", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "train", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.8224, "name": "Accuracy"}, {"type": "f1", "value": 0.9025460930640913, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,417 |
engineerkhan/marian-finetuned-kde4-en-to-fr
|
engineerkhan
|
translation
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-28T06:17:07Z |
2023-11-28T07:27:38+00:00
| 6 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-fr
datasets:
- kde4
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- type: bleu
value: 52.926765637094185
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8562
- Bleu: 52.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8562
- Bleu: 52.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
{"base_model": "Helsinki-NLP/opus-mt-en-fr", "datasets": ["kde4"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-fr", "split": "train", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 52.926765637094185, "name": "Bleu"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 45,418 |
facebook/fasttext-diq-vectors
|
facebook
|
feature-extraction
|
[
"fasttext",
"feature-extraction",
"diq",
"arxiv:1607.04606",
"arxiv:1802.06893",
"arxiv:1607.01759",
"arxiv:1612.03651",
"license:cc-by-sa-3.0",
"region:us"
] | 2023-03-24T08:40:32Z |
2023-06-03T22:10:43+00:00
| 2 | 0 |
---
language: diq
library_name: fasttext
license: cc-by-sa-3.0
tags:
- feature-extraction
widget:
- text: apple
example_title: apple
---
# fastText (Zazaki)
fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/).
## Model description
fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes.
It includes pre-trained models learned on Wikipedia and in over 157 different languages. fastText can be used as a command line, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production.
## Intended uses & limitations
You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you.
### How to use
Here is how to load and use the pre-trained vectors:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-diq-vectors", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.words
['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...]
>>> len(model.words)
145940
>>> model['bread']
array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01,
-1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...])
```
Here is how to use this model to query nearest neighbors of an English word vector:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.get_nearest_neighbors("bread", k=5)
[(0.5641006231307983, 'butter'),
(0.48875734210014343, 'loaf'),
(0.4491206705570221, 'eat'),
(0.42444291710853577, 'food'),
(0.4229326844215393, 'cheese')]
```
Here is how to use this model to detect the language of a given text:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.predict("Hello, world!")
(('__label__eng_Latn',), array([0.81148803]))
>>> model.predict("Hello, world!", k=5)
(('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'),
array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415]))
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions.
Cosine similarity can be used to measure the similarity between two different word vectors. If two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1.
```python
>>> import numpy as np
>>> def cosine_similarity(word1, word2):
>>> return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2]))
>>> cosine_similarity("man", "boy")
0.061653383
>>> cosine_similarity("man", "ceo")
0.11989131
>>> cosine_similarity("woman", "ceo")
-0.08834904
```
## Training data
Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish.
## Training procedure
### Tokenization
We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer.
More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893).
### License
The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/).
### Evaluation datasets
The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt).
### BibTeX entry and citation info
Please cite [1] if using this code for learning word representations or [2] if using for text classification.
[1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606)
```markup
@article{bojanowski2016enriching,
title={Enriching Word Vectors with Subword Information},
author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.04606},
year={2016}
}
```
[2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759)
```markup
@article{joulin2016bag,
title={Bag of Tricks for Efficient Text Classification},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.01759},
year={2016}
}
```
[3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651)
```markup
@article{joulin2016fasttext,
title={FastText.zip: Compressing text classification models},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, H{\'e}rve and Mikolov, Tomas},
journal={arXiv preprint arXiv:1612.03651},
year={2016}
}
```
If you use these word vectors, please cite the following paper:
[4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893)
```markup
@inproceedings{grave2018learning,
title={Learning Word Vectors for 157 Languages},
author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas},
booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
(\* These authors contributed equally.)
| null |
Non_BioNLP
|
# fastText (Zazaki)
fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/).
## Model description
fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes.
It includes pre-trained models learned on Wikipedia in over 157 different languages. fastText can be used as a command line, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production.
## Intended uses & limitations
You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you.
### How to use
Here is how to load and use the pre-trained vectors:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-diq-vectors", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.words
['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...]
>>> len(model.words)
145940
>>> model['bread']
array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01,
-1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...])
```
Here is how to use this model to query nearest neighbors of an English word vector:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.get_nearest_neighbors("bread", k=5)
[(0.5641006231307983, 'butter'),
(0.48875734210014343, 'loaf'),
(0.4491206705570221, 'eat'),
(0.42444291710853577, 'food'),
(0.4229326844215393, 'cheese')]
```
Here is how to use this model to detect the language of a given text:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.predict("Hello, world!")
(('__label__eng_Latn',), array([0.81148803]))
>>> model.predict("Hello, world!", k=5)
(('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'),
array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415]))
```
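Beyond pre-trained vectors, the library can also train supervised text classifiers. The following is a minimal, illustrative sketch: `train.txt` is an assumed local file in fastText's `__label__<class> <text>` line format, not something shipped with this repository.
```python
import fasttext

# Assumed training file, one example per line, e.g.:
# __label__positive this film was wonderful
model = fasttext.train_supervised("train.txt", epoch=5, lr=0.5, wordNgrams=2)

# Predict the most likely label (and its probability) for new text
print(model.predict("this film was wonderful"))

model.save_model("classifier.bin")
```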
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions.
Cosine similarity can be used to measure the similarity between two different word vectors. If two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors point in opposite directions, the value will be -1.
```python
>>> import numpy as np
>>> def cosine_similarity(word1, word2):
...     return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2]))
>>> cosine_similarity("man", "boy")
0.061653383
>>> cosine_similarity("man", "ceo")
0.11989131
>>> cosine_similarity("woman", "ceo")
-0.08834904
```
## Training data
Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish.
## Training procedure
### Tokenization
We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer.
More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893).
### License
The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/).
### Evaluation datasets
The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt).
### BibTeX entry and citation info
Please cite [1] if using this code for learning word representations or [2] if using for text classification.
[1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606)
```markup
@article{bojanowski2016enriching,
title={Enriching Word Vectors with Subword Information},
author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.04606},
year={2016}
}
```
[2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759)
```markup
@article{joulin2016bag,
title={Bag of Tricks for Efficient Text Classification},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.01759},
year={2016}
}
```
[3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651)
```markup
@article{joulin2016fasttext,
title={FastText.zip: Compressing text classification models},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, H{\'e}rve and Mikolov, Tomas},
journal={arXiv preprint arXiv:1612.03651},
year={2016}
}
```
If you use these word vectors, please cite the following paper:
[4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893)
```markup
@inproceedings{grave2018learning,
title={Learning Word Vectors for 157 Languages},
author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas},
booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
(\* These authors contributed equally.)
|
{"language": "diq", "library_name": "fasttext", "license": "cc-by-sa-3.0", "tags": ["feature-extraction"], "widget": [{"text": "apple", "example_title": "apple"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,419 |
vidhi0206/setfit-paraphrase-mpnet-emotionv
|
vidhi0206
|
text-classification
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | 2024-03-06T14:34:16Z |
2024-03-06T14:34:38+00:00
| 4 | 0 |
---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: i honestly thought impossible at this point i feel pretty
- text: i feel convinced that im going to shy away from whatever is really good for
me
- text: i feel guilt that i should be more caring and im not
- text: i found myself feeling nostalgic as i thought about the temporarily abandoned
little bishop chronicles
- text: i am feeling very indecisive and spontaneous
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.621
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer. (A minimal code sketch of this procedure follows below.)
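For illustration, here is a minimal sketch of that two-step procedure with the `setfit` library (version 1.0.x, as listed under Framework Versions below). The tiny inline dataset is a placeholder assumption and not the data this model was trained on.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot dataset: a handful of labeled examples per class
train_dataset = Dataset.from_dict({
    "text": ["i feel pretty happy today", "im feeling irritated by the delay"],
    "label": [1, 3],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(batch_size=8, num_epochs=1)

# trainer.train() performs both steps: contrastive fine-tuning of the
# Sentence Transformer body, then fitting the LogisticRegression head.
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

preds = model.predict(["i feel guilt that i should be more caring"])
```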
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 6 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'i don t feel so self assured i need to compete or to justify why i m so clearly not doing as well as someone else'</li><li>'i should do but i think it means that i should always be open to opportunities of inviting and involving others in ministries and that i should be creative in finding ways for others to participate in and feel welcomed into such ministries'</li><li>'i feel like im going to be way more successful a writer because of it'</li></ul> |
| 4 | <ul><li>'i feel so weird and scattered with all wonders about a million different things'</li><li>'i mean already as a parent from the moment the iolani left my body i can tell you i feel like im constantly fearful for something horrible happening to her thats out of my control'</li><li>'i think i was feeling vulnerable due to the stress of having to buy a new sewing machine and printer'</li></ul> |
| 5 | <ul><li>'i feel like this inside theres one thing i wanna know whats so funny bout peace love and understanding'</li><li>'i feel like itd be strange at the least and possibly offensive to tell a gay friend id like to experiment or something like that'</li><li>'i am not sure why in that moment that i thought i would be able to feel it hellip but it was pretty funny'</li></ul> |
| 2 | <ul><li>'i can feel that gentle rhythm imprinted on my skin i vibrates up my arm my stomach clenches my legs squeeze i forget his own leg has somehow ended up between mine'</li><li>'i feel specially fond of'</li><li>'i just feel like i dont like supporting walmart because maceys has such good family values and is closed on sundays and isnt trying to take over mom and pop stores but i have to be a smart consumer too'</li></ul> |
| 3 | <ul><li>'i am sure the vast majority of decent working class people feel insulted about being derided as unable to be respectful towards referees and are the parents who watch their child s match shouting abuse and swearing etc'</li><li>'im feeling irritated by her friggin name'</li><li>'i feel heartless now feeling bored and not believe in love anymore'</li></ul> |
| 0 | <ul><li>'i had just begun to feel like teaching was my metier but am now resigned to the fact that i likely wont teach at university ever again'</li><li>'i think the most common one that everyone has experienced is that doom and gloom feeling where you just feel like something tragic just happened'</li><li>'i feel a bit foolish now because in the last years they havent come back to my home town and i have had to travel to england to see them'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.621 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-emotionv")
# Run inference
preds = model("i am feeling very indecisive and spontaneous")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 5 | 20.4375 | 47 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
| 2 | 8 |
| 3 | 8 |
| 4 | 8 |
| 5 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.2804 | - |
| 0.2083 | 50 | 0.0724 | - |
| 0.4167 | 100 | 0.0512 | - |
| 0.625 | 150 | 0.0108 | - |
| 0.8333 | 200 | 0.0027 | - |
### Framework Versions
- Python: 3.8.10
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 6 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'i don t feel so self assured i need to compete or to justify why i m so clearly not doing as well as someone else'</li><li>'i should do but i think it means that i should always be open to opportunities of inviting and involving others in ministries and that i should be creative in finding ways for others to participate in and feel welcomed into such ministries'</li><li>'i feel like im going to be way more successful a writer because of it'</li></ul> |
| 4 | <ul><li>'i feel so weird and scattered with all wonders about a million different things'</li><li>'i mean already as a parent from the moment the iolani left my body i can tell you i feel like im constantly fearful for something horrible happening to her thats out of my control'</li><li>'i think i was feeling vulnerable due to the stress of having to buy a new sewing machine and printer'</li></ul> |
| 5 | <ul><li>'i feel like this inside theres one thing i wanna know whats so funny bout peace love and understanding'</li><li>'i feel like itd be strange at the least and possibly offensive to tell a gay friend id like to experiment or something like that'</li><li>'i am not sure why in that moment that i thought i would be able to feel it hellip but it was pretty funny'</li></ul> |
| 2 | <ul><li>'i can feel that gentle rhythm imprinted on my skin i vibrates up my arm my stomach clenches my legs squeeze i forget his own leg has somehow ended up between mine'</li><li>'i feel specially fond of'</li><li>'i just feel like i dont like supporting walmart because maceys has such good family values and is closed on sundays and isnt trying to take over mom and pop stores but i have to be a smart consumer too'</li></ul> |
| 3 | <ul><li>'i am sure the vast majority of decent working class people feel insulted about being derided as unable to be respectful towards referees and are the parents who watch their child s match shouting abuse and swearing etc'</li><li>'im feeling irritated by her friggin name'</li><li>'i feel heartless now feeling bored and not believe in love anymore'</li></ul> |
| 0 | <ul><li>'i had just begun to feel like teaching was my metier but am now resigned to the fact that i likely wont teach at university ever again'</li><li>'i think the most common one that everyone has experienced is that doom and gloom feeling where you just feel like something tragic just happened'</li><li>'i feel a bit foolish now because in the last years they havent come back to my home town and i have had to travel to england to see them'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.621 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-emotionv")
# Run inference
preds = model("i am feeling very indecisive and spontaneous")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 5 | 20.4375 | 47 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
| 2 | 8 |
| 3 | 8 |
| 4 | 8 |
| 5 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.2804 | - |
| 0.2083 | 50 | 0.0724 | - |
| 0.4167 | 100 | 0.0512 | - |
| 0.625 | 150 | 0.0108 | - |
| 0.8333 | 200 | 0.0027 | - |
### Framework Versions
- Python: 3.8.10
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "i honestly thought impossible at this point i feel pretty"}, {"text": "i feel convinced that im going to shy away from whatever is really good for me"}, {"text": "i feel guilt that i should be more caring and im not"}, {"text": "i found myself feeling nostalgic as i thought about the temporarily abandoned little bishop chronicles"}, {"text": "i am feeling very indecisive and spontaneous"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.621, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,420 |
TheBloke/chronos-13B-AWQ
|
TheBloke
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"pytorch",
"chatbot",
"storywriting",
"base_model:elinas/chronos-13b",
"base_model:quantized:elinas/chronos-13b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | 2023-09-19T22:17:43Z |
2023-11-09T18:18:58+00:00
| 8 | 0 |
---
base_model: elinas/chronos-13b
license: other
model_name: Chronos 13B
tags:
- llama
- pytorch
- chatbot
- storywriting
inference: false
model_creator: elinas
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chronos 13B - AWQ
- Model creator: [elinas](https://huggingface.co/elinas)
- Original model: [Chronos 13B](https://huggingface.co/elinas/chronos-13b)
<!-- description start -->
## Description
This repo contains AWQ model files for [elinas's Chronos 13B](https://huggingface.co/elinas/chronos-13b).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
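For context, AWQ files like the ones in this repo are typically produced with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ). The sketch below shows the general quantisation flow only; the output path is an assumption, and the actual parameters used here are listed in the provided-files table further down.
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "elinas/chronos-13b"   # original fp16 model
quant_path = "chronos-13b-awq"      # assumed output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the fp16 model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Calibrate on a small dataset and quantise the weights to 4-bit
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantised model and tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```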
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/chronos-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/Yhyu13/chronos-13b-gptq-4bit)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/chronos-13B-GGUF)
* [elinas's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/elinas/chronos-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/chronos-13B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/chronos-13B-AWQ --quantization awq
```
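Once the server is up you can send it generation requests over HTTP. A minimal sketch, assuming the demo `api_server` above is listening on its default port 8000:
```shell
curl http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Tell me about AI", "max_tokens": 128, "temperature": 0.7}'
```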
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/chronos-13B-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/chronos-13B-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: elinas's Chronos 13B
# chronos-13b
This is the fp16 PyTorch / HF version of **chronos-13b**
This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding.
Chronos generates very long outputs with coherent text, largely due to the human inputs it was trained on.
This model uses Alpaca formatting, so for optimal model performance, use:
```
### Instruction:
Your instruction or question here.
### Response:
```
[4bit Quantized version](https://huggingface.co/elinas/chronos-13b-4bit)
[GGML Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-13B-GGML)
<!--**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->
---
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that lower value is better indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model's output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
| null |
Non_BioNLP
|
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chronos 13B - AWQ
- Model creator: [elinas](https://huggingface.co/elinas)
- Original model: [Chronos 13B](https://huggingface.co/elinas/chronos-13b)
<!-- description start -->
## Description
This repo contains AWQ model files for [elinas's Chronos 13B](https://huggingface.co/elinas/chronos-13b).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/chronos-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/Yhyu13/chronos-13b-gptq-4bit)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/chronos-13B-GGUF)
* [elinas's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/elinas/chronos-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/chronos-13B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/chronos-13B-AWQ --quantization awq
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/chronos-13B-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/chronos-13B-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: elinas's Chronos 13B
# chronos-13b
This is the fp16 PyTorch / HF version of **chronos-13b**
This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding.
Chronos generates very long outputs with coherent text, largely due to the human inputs it was trained on.
This model uses Alpaca formatting, so for optimal model performance, use:
```
### Instruction:
Your instruction or question here.
### Response:
```
[4bit Quantized version](https://huggingface.co/elinas/chronos-13b-4bit)
[GGML Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-13B-GGML)
<!--**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->
---
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, and toxic and harmful content generations, including hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=6>Model hyperparameters</th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>4096</th><th>32</th><th>32</th><th>3.0E-04</th><th>4M</th><th>1T</th>
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T</th>
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T</th>
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T</th>
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
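As a rough sanity check, these nominal sizes follow from the dimensions in Table 1. The back-of-the-envelope estimate below assumes a SwiGLU feed-forward width of about (8/3)·d (as used in LLaMA) and a 32k vocabulary, and ignores norms and biases; it is an approximation, not the exact count:

```python
# Approximate decoder-only parameter count from Table 1.
def estimate_params(d, n_layers, vocab=32_000):
    attn = 4 * d * d                      # Q, K, V and output projections
    ffn = 3 * d * int(8 * d / 3)          # gate, up and down projections (SwiGLU)
    return n_layers * (attn + ffn) + 2 * vocab * d  # blocks + input/output embeddings

for name, d, n_layers in [("7B", 4096, 32), ("13B", 5120, 40), ("33B", 6656, 60), ("65B", 8192, 80)]:
    print(name, f"~{estimate_params(d, n_layers) / 1e9:.1f}B")
# 7B ~6.7B, 13B ~12.9B, 33B ~32.3B, 65B ~64.9B — close to the nominal sizes.
```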
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93</th>
</tr>
<tr>
<th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94</th>
</tr>
<tr>
<th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92</th>
</tr>
<tr>
<th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning Tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
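A minimal sketch of the fastText side of such a filter is shown below; the training-file name and label scheme are illustrative assumptions, and the actual CCNet pipeline is described in the paper:

```python
import fasttext

# quality_train.txt holds one labeled paragraph per line, e.g.
#   __label__wiki <paragraph sampled from Wikipedia>
#   __label__web  <random web paragraph>
clf = fasttext.train_supervised(input="quality_train.txt")

def looks_wikipedia_like(paragraph: str, threshold: float = 0.5) -> bool:
    # fastText's predict() rejects newlines, so flatten the paragraph first.
    labels, probs = clf.predict(paragraph.replace("\n", " "))
    return labels[0] == "__label__wiki" and probs[0] >= threshold
```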
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
{"base_model": "elinas/chronos-13b", "license": "other", "model_name": "Chronos 13B", "tags": ["llama", "pytorch", "chatbot", "storywriting"], "inference": false, "model_creator": "elinas", "model_type": "llama", "prompt_template": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:\n", "quantized_by": "TheBloke"}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,421 |
haoranxu/X-ALMA
|
haoranxu
| null |
[
"safetensors",
"xalma",
"en",
"da",
"nl",
"de",
"is",
"no",
"sc",
"af",
"ca",
"ro",
"gl",
"it",
"pt",
"es",
"bg",
"mk",
"sr",
"uk",
"ru",
"id",
"ms",
"th",
"vi",
"mg",
"fr",
"hu",
"el",
"cs",
"pl",
"lt",
"lv",
"ka",
"zh",
"ja",
"ko",
"fi",
"et",
"gu",
"hi",
"mr",
"ne",
"ur",
"az",
"kk",
"ky",
"tr",
"uz",
"ar",
"he",
"fa",
"dataset:oscar-corpus/OSCAR-2301",
"dataset:allenai/nllb",
"dataset:Helsinki-NLP/opus-100",
"arxiv:2410.03115",
"arxiv:2401.08417",
"base_model:haoranxu/X-ALMA-13B-Pretrain",
"base_model:finetune:haoranxu/X-ALMA-13B-Pretrain",
"license:mit",
"region:us"
] | 2024-09-30T00:49:48Z |
2024-11-18T07:08:44+00:00
| 60 | 7 |
---
base_model:
- haoranxu/X-ALMA-13B-Pretrain
datasets:
- oscar-corpus/OSCAR-2301
- allenai/nllb
- Helsinki-NLP/opus-100
language:
- en
- da
- nl
- de
- is
- 'no'
- sc
- af
- ca
- ro
- gl
- it
- pt
- es
- bg
- mk
- sr
- uk
- ru
- id
- ms
- th
- vi
- mg
- fr
- hu
- el
- cs
- pl
- lt
- lv
- ka
- zh
- ja
- ko
- fi
- et
- gu
- hi
- mr
- ne
- ur
- az
- kk
- ky
- tr
- uz
- ar
- he
- fa
license: mit
---
[X-ALMA](https://arxiv.org/pdf/2410.03115) builds upon [ALMA-R](https://arxiv.org/pdf/2401.08417) by expanding support from 6 to 50 languages. It utilizes a plug-and-play architecture with language-specific modules, complemented by a carefully designed training recipe. This release includes **the complete X-ALMA model, which contains the [X-ALMA pre-trained base model](https://huggingface.co/haoranxu/X-ALMA-13B-Pretrain) and all its language-specific modules**.
```
@misc{xu2024xalmaplugplay,
title={X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality Translation at Scale},
author={Haoran Xu and Kenton Murray and Philipp Koehn and Hieu Hoang and Akiko Eriguchi and Huda Khayrallah},
year={2024},
eprint={2410.03115},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.03115},
}
```
X-ALMA supports 50 languages: en,da,nl,de,is,no,sv,af,ca,ro,gl,it,pt,es,bg,mk,sr,uk,ru,id,ms,th,vi,mg,fr,hu,el,cs,pl,lt,lv,ka,zh,ja,ko,fi,et,gu,hi,mr,ne,ur,az,kk,ky,tr,uz,ar,he,fa, ensuring high performance in translation regardless of their resource level.
All X-ALMA checkpoints are released on Hugging Face:
| Models | Model Link | Description |
|:-------------:|:---------------:|:---------------:|
| X-ALMA | [haoranxu/X-ALMA](https://huggingface.co/haoranxu/X-ALMA) | X-ALMA model with all its modules |
| X-ALMA-13B-Pretrain | [haoranxu/X-ALMA-13B-Pretrain](https://huggingface.co/haoranxu/X-ALMA-13B-Pretrain) | X-ALMA 13B multilingual pre-trained base model |
| X-ALMA-Group1 | [haoranxu/X-ALMA-13B-Group1](https://huggingface.co/haoranxu/X-ALMA-13B-Group1) | X-ALMA group1 specific module and the merged model |
| X-ALMA-Group2 | [haoranxu/X-ALMA-13B-Group2](https://huggingface.co/haoranxu/X-ALMA-13B-Group2) | X-ALMA group2 specific module and the merged model |
| X-ALMA-Group3 | [haoranxu/X-ALMA-13B-Group3](https://huggingface.co/haoranxu/X-ALMA-13B-Group3) | X-ALMA group3 specific module and the merged model |
| X-ALMA-Group4 | [haoranxu/X-ALMA-13B-Group4](https://huggingface.co/haoranxu/X-ALMA-13B-Group4) | X-ALMA group4 specific module and the merged model |
| X-ALMA-Group5 | [haoranxu/X-ALMA-13B-Group5](https://huggingface.co/haoranxu/X-ALMA-13B-Group5) | X-ALMA group5 specific module and the merged model |
| X-ALMA-Group6 | [haoranxu/X-ALMA-13B-Group6](https://huggingface.co/haoranxu/X-ALMA-13B-Group6) | X-ALMA group6 specific module and the merged model |
| X-ALMA-Group7 | [haoranxu/X-ALMA-13B-Group7](https://huggingface.co/haoranxu/X-ALMA-13B-Group7) | X-ALMA group7 specific module and the merged model |
| X-ALMA-Group8 | [haoranxu/X-ALMA-13B-Group8](https://huggingface.co/haoranxu/X-ALMA-13B-Group8) | X-ALMA group8 specific module and the merged model |
## A quick start:
There are three ways to load X-ALMA for translation. Below is an example of translating "我爱机器翻译。" into English (X-ALMA should also be able to do multilingual open-ended QA).
**The first way**: loading the merged model where the language-specific module has been merged into the base model **(Recommended)**:
```
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
from peft import PeftModel
GROUP2LANG = {
1: ["da", "nl", "de", "is", "no", "sv", "af"],
2: ["ca", "ro", "gl", "it", "pt", "es"],
3: ["bg", "mk", "sr", "uk", "ru"],
4: ["id", "ms", "th", "vi", "mg", "fr"],
5: ["hu", "el", "cs", "pl", "lt", "lv"],
6: ["ka", "zh", "ja", "ko", "fi", "et"],
7: ["gu", "hi", "mr", "ne", "ur"],
8: ["az", "kk", "ky", "tr", "uz", "ar", "he", "fa"],
}
LANG2GROUP = {lang: str(group) for group, langs in GROUP2LANG.items() for lang in langs}
group_id = LANG2GROUP["zh"]
model = AutoModelForCausalLM.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", padding_side='left')
# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
# X-ALMA needs the chat template, but ALMA and ALMA-R don't.
chat_style_prompt = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(chat_style_prompt, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()
# Translation
with torch.no_grad():
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
**The second way**: loading the base model and language-specific module **(Recommended)**:
```
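# Reuses the imports and group_id from the first example above.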
model = AutoModelForCausalLM.from_pretrained("haoranxu/X-ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, f"haoranxu/X-ALMA-13B-Group{group_id}")
tokenizer = AutoTokenizer.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", padding_side='left')
```
**The third way**: loading the base model with all language-specific modules, like an MoE (requires large GPU memory):
```
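# modeling_xalma is the custom modeling file from the X-ALMA repository;
# input_ids is prepared exactly as in the first example above.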
from modeling_xalma import XALMAForCausalLM
model = XALMAForCausalLM.from_pretrained("haoranxu/X-ALMA", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/X-ALMA", padding_side='left')
# Add `lang="zh"`: specify the language to instruct the model on which group to use for the third loading method during generation.
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9, lang="zh")
```
| null |
Non_BioNLP
|
|
{"base_model": ["haoranxu/X-ALMA-13B-Pretrain"], "datasets": ["oscar-corpus/OSCAR-2301", "allenai/nllb", "Helsinki-NLP/opus-100"], "language": ["en", "da", "nl", "de", "is", "no", "sc", "af", "ca", "ro", "gl", "it", "pt", "es", "bg", "mk", "sr", "uk", "ru", "id", "ms", "th", "vi", "mg", "fr", "hu", "el", "cs", "pl", "lt", "lv", "ka", "zh", "ja", "ko", "fi", "et", "gu", "hi", "mr", "ne", "ur", "az", "kk", "ky", "tr", "uz", "ar", "he", "fa"], "license": "mit"}
|
task
|
[
"TRANSLATION"
] | 45,422 |
coffeepowered/arctic-l-enhanced
|
coffeepowered
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:Snowflake/snowflake-arctic-embed-l-v2.0",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l-v2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-03-05T10:01:12Z |
2025-03-05T10:02:32+00:00
| 62 | 0 |
---
base_model: Snowflake/snowflake-arctic-embed-l-v2.0
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- loss:CosineSimilarityLoss
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l-v2.0
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) <!-- at revision 7f311bb640ad3babc0a4e3a8873240dcba44c9d2 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
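Modules (1) and (2) above mean the sentence embedding is the L2-normalized CLS token of the XLM-RoBERTa encoder. A minimal sketch of what the pipeline does under the hood (illustrative only; the `SentenceTransformer` usage below is the supported path):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("coffeepowered/arctic-l-enhanced")
enc = AutoModel.from_pretrained("coffeepowered/arctic-l-enhanced")

batch = tok(["The weather is lovely today."], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = enc(**batch).last_hidden_state  # (batch, seq_len, 1024)
cls = hidden[:, 0]                           # CLS-token pooling, module (1)
emb = F.normalize(cls, p=2, dim=1)           # L2 normalization, module (2)
print(emb.shape)                             # torch.Size([1, 1024])
```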
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("coffeepowered/arctic-l-enhanced")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
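The same `encode`/`similarity` API extends naturally to retrieval-style ranking. A small sketch follows (the query and corpus are made up for illustration; the base Arctic model defines a query prompt, and since this card does not state whether the finetune expects it, the sketch omits prompts):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("coffeepowered/arctic-l-enhanced")
query = "How do I install the library?"
corpus = [
    "Run `pip install -U sentence-transformers` to install the package.",
    "The model outputs 1024-dimensional normalized embeddings.",
    "He drove to the stadium.",
]
scores = model.similarity(model.encode([query]), model.encode(corpus))  # shape [1, 3]
print(corpus[scores.argmax().item()])  # highest-scoring passage
```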
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Columns: <code>sentence1</code>, <code>sentence2</code>, <code>score</code>, and <code>split</code>
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### json
* Dataset: json
* Columns: <code>sentence1</code>, <code>sentence2</code>, <code>score</code>, and <code>split</code>
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
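For reference, here is a minimal fine-tuning sketch with this loss in Sentence Transformers v3; the two-row dataset is a stand-in whose columns mirror the `sentence1`/`sentence2`/`score` schema above (Sentence Transformers treats a `score` column as the label):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l-v2.0")
train_dataset = Dataset.from_dict({
    "sentence1": ["It's nice weather outside today.", "He drove to work."],
    "sentence2": ["It's so sunny.", "She walked to the store."],
    "score": [0.9, 0.2],
})
loss = losses.CosineSimilarityLoss(model)  # MSE between cosine similarity and the score
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```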
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `learning_rate`: 4.000000000000001e-06
- `max_steps`: 13938
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4.000000000000001e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: 13938
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:--------:|:-------------:|:---------------:|
| 0.0007 | 10 | 0.1359 | - |
| 0.0014 | 20 | 0.1202 | - |
| 0.0022 | 30 | 0.1314 | - |
| 0.0029 | 40 | 0.1302 | - |
| 0.0036 | 50 | 0.1158 | - |
| 0.0043 | 60 | 0.1158 | - |
| 0.0050 | 70 | 0.1114 | - |
| 0.0057 | 80 | 0.1316 | - |
| 0.0065 | 90 | 0.133 | - |
| 0.0072 | 100 | 0.0958 | - |
| 0.0079 | 110 | 0.0973 | - |
| 0.0086 | 120 | 0.0949 | - |
| 0.0093 | 130 | 0.0892 | - |
| 0.0100 | 140 | 0.0867 | - |
| 0.0108 | 150 | 0.0717 | 0.0712 |
| 0.0115 | 160 | 0.0762 | - |
| 0.0122 | 170 | 0.0828 | - |
| 0.0129 | 180 | 0.0775 | - |
| 0.0136 | 190 | 0.0614 | - |
| 0.0143 | 200 | 0.0748 | - |
| 0.0151 | 210 | 0.0545 | - |
| 0.0158 | 220 | 0.0725 | - |
| 0.0165 | 230 | 0.0627 | - |
| 0.0172 | 240 | 0.0612 | - |
| 0.0179 | 250 | 0.0508 | - |
| 0.0187 | 260 | 0.0592 | - |
| 0.0194 | 270 | 0.0489 | - |
| 0.0201 | 280 | 0.0545 | - |
| 0.0208 | 290 | 0.0598 | - |
| 0.0215 | 300 | 0.0641 | 0.0566 |
| 0.0222 | 310 | 0.0554 | - |
| 0.0230 | 320 | 0.0523 | - |
| 0.0237 | 330 | 0.0568 | - |
| 0.0244 | 340 | 0.0594 | - |
| 0.0251 | 350 | 0.0488 | - |
| 0.0258 | 360 | 0.06 | - |
| 0.0265 | 370 | 0.0737 | - |
| 0.0273 | 380 | 0.0565 | - |
| 0.0280 | 390 | 0.0456 | - |
| 0.0287 | 400 | 0.0489 | - |
| 0.0294 | 410 | 0.0478 | - |
| 0.0301 | 420 | 0.0481 | - |
| 0.0309 | 430 | 0.0541 | - |
| 0.0316 | 440 | 0.0581 | - |
| 0.0323 | 450 | 0.0549 | 0.0578 |
| 0.0330 | 460 | 0.042 | - |
| 0.0337 | 470 | 0.0451 | - |
| 0.0344 | 480 | 0.0537 | - |
| 0.0352 | 490 | 0.0487 | - |
| 0.0359 | 500 | 0.0444 | - |
| 0.0366 | 510 | 0.0443 | - |
| 0.0373 | 520 | 0.0418 | - |
| 0.0380 | 530 | 0.0447 | - |
| 0.0387 | 540 | 0.0453 | - |
| 0.0395 | 550 | 0.0465 | - |
| 0.0402 | 560 | 0.0623 | - |
| 0.0409 | 570 | 0.0507 | - |
| 0.0416 | 580 | 0.0428 | - |
| 0.0423 | 590 | 0.0494 | - |
| 0.0430 | 600 | 0.0407 | 0.0519 |
| 0.0438 | 610 | 0.0452 | - |
| 0.0445 | 620 | 0.0439 | - |
| 0.0452 | 630 | 0.0534 | - |
| 0.0459 | 640 | 0.0422 | - |
| 0.0466 | 650 | 0.0514 | - |
| 0.0474 | 660 | 0.0406 | - |
| 0.0481 | 670 | 0.0428 | - |
| 0.0488 | 680 | 0.0346 | - |
| 0.0495 | 690 | 0.0485 | - |
| 0.0502 | 700 | 0.0549 | - |
| 0.0509 | 710 | 0.0399 | - |
| 0.0517 | 720 | 0.0362 | - |
| 0.0524 | 730 | 0.0507 | - |
| 0.0531 | 740 | 0.0434 | - |
| 0.0538 | 750 | 0.0363 | 0.0553 |
| 0.0545 | 760 | 0.0426 | - |
| 0.0552 | 770 | 0.0375 | - |
| 0.0560 | 780 | 0.0501 | - |
| 0.0567 | 790 | 0.043 | - |
| 0.0574 | 800 | 0.0476 | - |
| 0.0581 | 810 | 0.037 | - |
| 0.0588 | 820 | 0.0317 | - |
| 0.0595 | 830 | 0.0387 | - |
| 0.0603 | 840 | 0.0348 | - |
| 0.0610 | 850 | 0.0379 | - |
| 0.0617 | 860 | 0.0506 | - |
| 0.0624 | 870 | 0.0419 | - |
| 0.0631 | 880 | 0.0431 | - |
| 0.0639 | 890 | 0.0478 | - |
| 0.0646 | 900 | 0.0394 | 0.0571 |
| 0.0653 | 910 | 0.0334 | - |
| 0.0660 | 920 | 0.0384 | - |
| 0.0667 | 930 | 0.0381 | - |
| 0.0674 | 940 | 0.0343 | - |
| 0.0682 | 950 | 0.0355 | - |
| 0.0689 | 960 | 0.0384 | - |
| 0.0696 | 970 | 0.0352 | - |
| 0.0703 | 980 | 0.0479 | - |
| 0.0710 | 990 | 0.0362 | - |
| 0.0717 | 1000 | 0.0357 | - |
| 0.0725 | 1010 | 0.0414 | - |
| 0.0732 | 1020 | 0.0346 | - |
| 0.0739 | 1030 | 0.039 | - |
| 0.0746 | 1040 | 0.0317 | - |
| 0.0753 | 1050 | 0.0502 | 0.0523 |
| 0.0761 | 1060 | 0.0407 | - |
| 0.0768 | 1070 | 0.0377 | - |
| 0.0775 | 1080 | 0.0404 | - |
| 0.0782 | 1090 | 0.0297 | - |
| 0.0789 | 1100 | 0.0344 | - |
| 0.0796 | 1110 | 0.0277 | - |
| 0.0804 | 1120 | 0.0453 | - |
| 0.0811 | 1130 | 0.0469 | - |
| 0.0818 | 1140 | 0.044 | - |
| 0.0825 | 1150 | 0.0339 | - |
| 0.0832 | 1160 | 0.0331 | - |
| 0.0839 | 1170 | 0.0327 | - |
| 0.0847 | 1180 | 0.0445 | - |
| 0.0854 | 1190 | 0.0392 | - |
| 0.0861 | 1200 | 0.048 | 0.0531 |
| 0.0868 | 1210 | 0.0375 | - |
| 0.0875 | 1220 | 0.0311 | - |
| 0.0882 | 1230 | 0.037 | - |
| 0.0890 | 1240 | 0.0369 | - |
| 0.0897 | 1250 | 0.0398 | - |
| 0.0904 | 1260 | 0.0363 | - |
| 0.0911 | 1270 | 0.0425 | - |
| 0.0918 | 1280 | 0.0355 | - |
| 0.0926 | 1290 | 0.0362 | - |
| 0.0933 | 1300 | 0.0354 | - |
| 0.0940 | 1310 | 0.0252 | - |
| 0.0947 | 1320 | 0.03 | - |
| 0.0954 | 1330 | 0.03 | - |
| 0.0961 | 1340 | 0.0218 | - |
| 0.0969 | 1350 | 0.0248 | 0.0448 |
| 0.0976 | 1360 | 0.0287 | - |
| 0.0983 | 1370 | 0.0322 | - |
| 0.0990 | 1380 | 0.0359 | - |
| 0.0997 | 1390 | 0.0287 | - |
| 0.1004 | 1400 | 0.034 | - |
| 0.1012 | 1410 | 0.0305 | - |
| 0.1019 | 1420 | 0.0241 | - |
| 0.1026 | 1430 | 0.0227 | - |
| 0.1033 | 1440 | 0.0279 | - |
| 0.1040 | 1450 | 0.0298 | - |
| 0.1047 | 1460 | 0.0382 | - |
| 0.1055 | 1470 | 0.0269 | - |
| 0.1062 | 1480 | 0.0243 | - |
| 0.1069 | 1490 | 0.0266 | - |
| 0.1076 | 1500 | 0.016 | 0.0456 |
| 0.1083 | 1510 | 0.0233 | - |
| 0.1091 | 1520 | 0.0247 | - |
| 0.1098 | 1530 | 0.0233 | - |
| 0.1105 | 1540 | 0.0214 | - |
| 0.1112 | 1550 | 0.0229 | - |
| 0.1119 | 1560 | 0.0145 | - |
| 0.1126 | 1570 | 0.0187 | - |
| 0.1134 | 1580 | 0.0231 | - |
| 0.1141 | 1590 | 0.0268 | - |
| 0.1148 | 1600 | 0.027 | - |
| 0.1155 | 1610 | 0.0354 | - |
| 0.1162 | 1620 | 0.0398 | - |
| 0.1169 | 1630 | 0.0431 | - |
| 0.1177 | 1640 | 0.0391 | - |
| 0.1184 | 1650 | 0.056 | 0.0422 |
| 0.1191 | 1660 | 0.0394 | - |
| 0.1198 | 1670 | 0.0338 | - |
| 0.1205 | 1680 | 0.0417 | - |
| 0.1213 | 1690 | 0.0479 | - |
| 0.1220 | 1700 | 0.0348 | - |
| 0.1227 | 1710 | 0.0366 | - |
| 0.1234 | 1720 | 0.0335 | - |
| 0.1241 | 1730 | 0.0367 | - |
| 0.1248 | 1740 | 0.0355 | - |
| 0.1256 | 1750 | 0.0319 | - |
| 0.1263 | 1760 | 0.0324 | - |
| 0.1270 | 1770 | 0.0345 | - |
| 0.1277 | 1780 | 0.0352 | - |
| 0.1284 | 1790 | 0.0362 | - |
| 0.1291 | 1800 | 0.0319 | 0.0388 |
| 0.1299 | 1810 | 0.0381 | - |
| 0.1306 | 1820 | 0.0368 | - |
| 0.1313 | 1830 | 0.0318 | - |
| 0.1320 | 1840 | 0.0296 | - |
| 0.1327 | 1850 | 0.0331 | - |
| 0.1334 | 1860 | 0.0381 | - |
| 0.1342 | 1870 | 0.0237 | - |
| 0.1349 | 1880 | 0.0284 | - |
| 0.1356 | 1890 | 0.0353 | - |
| 0.1363 | 1900 | 0.0408 | - |
| 0.1370 | 1910 | 0.0277 | - |
| 0.1378 | 1920 | 0.0268 | - |
| 0.1385 | 1930 | 0.0348 | - |
| 0.1392 | 1940 | 0.0348 | - |
| 0.1399 | 1950 | 0.0269 | 0.0351 |
| 0.1406 | 1960 | 0.0317 | - |
| 0.1413 | 1970 | 0.0375 | - |
| 0.1421 | 1980 | 0.0321 | - |
| 0.1428 | 1990 | 0.0302 | - |
| 0.1435 | 2000 | 0.0411 | - |
| 0.1442 | 2010 | 0.0329 | - |
| 0.1449 | 2020 | 0.0246 | - |
| 0.1456 | 2030 | 0.0288 | - |
| 0.1464 | 2040 | 0.0297 | - |
| 0.1471 | 2050 | 0.0285 | - |
| 0.1478 | 2060 | 0.0326 | - |
| 0.1485 | 2070 | 0.035 | - |
| 0.1492 | 2080 | 0.0208 | - |
| 0.1499 | 2090 | 0.0209 | - |
| 0.1507 | 2100 | 0.0303 | 0.0364 |
| 0.1514 | 2110 | 0.028 | - |
| 0.1521 | 2120 | 0.0366 | - |
| 0.1528 | 2130 | 0.033 | - |
| 0.1535 | 2140 | 0.0353 | - |
| 0.1543 | 2150 | 0.0292 | - |
| 0.1550 | 2160 | 0.0202 | - |
| 0.1557 | 2170 | 0.0233 | - |
| 0.1564 | 2180 | 0.0346 | - |
| 0.1571 | 2190 | 0.0309 | - |
| 0.1578 | 2200 | 0.0241 | - |
| 0.1586 | 2210 | 0.0317 | - |
| 0.1593 | 2220 | 0.0331 | - |
| 0.1600 | 2230 | 0.0273 | - |
| 0.1607 | 2240 | 0.0219 | - |
| 0.1614 | 2250 | 0.0256 | 0.0338 |
| 0.1621 | 2260 | 0.024 | - |
| 0.1629 | 2270 | 0.0216 | - |
| 0.1636 | 2280 | 0.0253 | - |
| 0.1643 | 2290 | 0.0271 | - |
| 0.1650 | 2300 | 0.0336 | - |
| 0.1657 | 2310 | 0.0273 | - |
| 0.1665 | 2320 | 0.0267 | - |
| 0.1672 | 2330 | 0.0292 | - |
| 0.1679 | 2340 | 0.0269 | - |
| 0.1686 | 2350 | 0.0342 | - |
| 0.1693 | 2360 | 0.0361 | - |
| 0.1700 | 2370 | 0.0224 | - |
| 0.1708 | 2380 | 0.016 | - |
| 0.1715 | 2390 | 0.0209 | - |
| **0.1722** | **2400** | **0.0162** | **0.0329** |
| 0.1729 | 2410 | 0.019 | - |
| 0.1736 | 2420 | 0.0181 | - |
| 0.1743 | 2430 | 0.0332 | - |
| 0.1751 | 2440 | 0.0189 | - |
| 0.1758 | 2450 | 0.0292 | - |
| 0.1765 | 2460 | 0.0193 | - |
| 0.1772 | 2470 | 0.024 | - |
| 0.1779 | 2480 | 0.0245 | - |
| 0.1786 | 2490 | 0.02 | - |
| 0.1794 | 2500 | 0.0187 | - |
| 0.1801 | 2510 | 0.0224 | - |
| 0.1808 | 2520 | 0.028 | - |
| 0.1815 | 2530 | 0.0219 | - |
| 0.1822 | 2540 | 0.0212 | - |
| 0.1830 | 2550 | 0.0208 | 0.0354 |
| 0.1837 | 2560 | 0.0299 | - |
| 0.1844 | 2570 | 0.0208 | - |
| 0.1851 | 2580 | 0.0271 | - |
| 0.1858 | 2590 | 0.0146 | - |
| 0.1865 | 2600 | 0.0163 | - |
| 0.1873 | 2610 | 0.0149 | - |
| 0.1880 | 2620 | 0.0145 | - |
| 0.1887 | 2630 | 0.0196 | - |
| 0.1894 | 2640 | 0.0252 | - |
| 0.1901 | 2650 | 0.0173 | - |
| 0.1908 | 2660 | 0.0202 | - |
| 0.1916 | 2670 | 0.0171 | - |
| 0.1923 | 2680 | 0.023 | - |
| 0.1930 | 2690 | 0.02 | - |
| 0.1937 | 2700 | 0.0171 | 0.0387 |
| 0.1944 | 2710 | 0.0239 | - |
| 0.1951 | 2720 | 0.0261 | - |
| 0.1959 | 2730 | 0.0197 | - |
| 0.1966 | 2740 | 0.0195 | - |
| 0.1973 | 2750 | 0.0132 | - |
| 0.1980 | 2760 | 0.0226 | - |
| 0.1987 | 2770 | 0.0172 | - |
| 0.1995 | 2780 | 0.0214 | - |
| 0.2002 | 2790 | 0.0173 | - |
| 0.2009 | 2800 | 0.0179 | - |
| 0.2016 | 2810 | 0.0184 | - |
| 0.2023 | 2820 | 0.0162 | - |
| 0.2030 | 2830 | 0.0126 | - |
| 0.2038 | 2840 | 0.0188 | - |
| 0.2045 | 2850 | 0.0177 | 0.0385 |
| 0.2052 | 2860 | 0.0153 | - |
| 0.2059 | 2870 | 0.0137 | - |
| 0.2066 | 2880 | 0.0174 | - |
| 0.2073 | 2890 | 0.0148 | - |
| 0.2081 | 2900 | 0.0194 | - |
| 0.2088 | 2910 | 0.0133 | - |
| 0.2095 | 2920 | 0.0152 | - |
| 0.2102 | 2930 | 0.0179 | - |
| 0.2109 | 2940 | 0.0187 | - |
| 0.2117 | 2950 | 0.0161 | - |
| 0.2124 | 2960 | 0.0185 | - |
| 0.2131 | 2970 | 0.0151 | - |
| 0.2138 | 2980 | 0.0194 | - |
| 0.2145 | 2990 | 0.0179 | - |
| 0.2152 | 3000 | 0.0115 | 0.0434 |
| 0.2160 | 3010 | 0.0183 | - |
| 0.2167 | 3020 | 0.0168 | - |
| 0.2174 | 3030 | 0.0137 | - |
| 0.2181 | 3040 | 0.0196 | - |
| 0.2188 | 3050 | 0.0145 | - |
| 0.2195 | 3060 | 0.0164 | - |
| 0.2203 | 3070 | 0.0132 | - |
| 0.2210 | 3080 | 0.0133 | - |
| 0.2217 | 3090 | 0.0154 | - |
| 0.2224 | 3100 | 0.0145 | - |
| 0.2231 | 3110 | 0.0135 | - |
| 0.2238 | 3120 | 0.0143 | - |
| 0.2246 | 3130 | 0.0197 | - |
| 0.2253 | 3140 | 0.0163 | - |
| 0.2260 | 3150 | 0.017 | 0.0409 |
| 0.2267 | 3160 | 0.0257 | - |
| 0.2274 | 3170 | 0.0165 | - |
| 0.2282 | 3180 | 0.0189 | - |
| 0.2289 | 3190 | 0.0207 | - |
| 0.2296 | 3200 | 0.0179 | - |
| 0.2303 | 3210 | 0.0152 | - |
| 0.2310 | 3220 | 0.0179 | - |
| 0.2317 | 3230 | 0.0187 | - |
| 0.2325 | 3240 | 0.0189 | - |
| 0.2332 | 3250 | 0.0151 | - |
| 0.2339 | 3260 | 0.0137 | - |
| 0.2346 | 3270 | 0.0122 | - |
| 0.2353 | 3280 | 0.0134 | - |
| 0.2360 | 3290 | 0.0134 | - |
| 0.2368 | 3300 | 0.0162 | 0.0455 |
| 0.2375 | 3310 | 0.0142 | - |
| 0.2382 | 3320 | 0.0203 | - |
| 0.2389 | 3330 | 0.0148 | - |
| 0.2396 | 3340 | 0.0181 | - |
| 0.2404 | 3350 | 0.0178 | - |
| 0.2411 | 3360 | 0.018 | - |
| 0.2418 | 3370 | 0.012 | - |
| 0.2425 | 3380 | 0.0201 | - |
| 0.2432 | 3390 | 0.0193 | - |
| 0.2439 | 3400 | 0.013 | - |
| 0.2447 | 3410 | 0.0114 | - |
| 0.2454 | 3420 | 0.0184 | - |
| 0.2461 | 3430 | 0.0153 | - |
| 0.2468 | 3440 | 0.0172 | - |
| 0.2475 | 3450 | 0.0141 | 0.0458 |
| 0.2482 | 3460 | 0.0165 | - |
| 0.2490 | 3470 | 0.0174 | - |
| 0.2497 | 3480 | 0.0109 | - |
| 0.2504 | 3490 | 0.0177 | - |
| 0.2511 | 3500 | 0.0178 | - |
| 0.2518 | 3510 | 0.0101 | - |
| 0.2525 | 3520 | 0.0154 | - |
| 0.2533 | 3530 | 0.0204 | - |
| 0.2540 | 3540 | 0.0119 | - |
| 0.2547 | 3550 | 0.0158 | - |
| 0.2554 | 3560 | 0.0161 | - |
| 0.2561 | 3570 | 0.0116 | - |
| 0.2569 | 3580 | 0.0183 | - |
| 0.2576 | 3590 | 0.0125 | - |
| 0.2583 | 3600 | 0.0148 | 0.0475 |
| 0.2590 | 3610 | 0.0184 | - |
| 0.2597 | 3620 | 0.0169 | - |
| 0.2604 | 3630 | 0.0135 | - |
| 0.2612 | 3640 | 0.0138 | - |
| 0.2619 | 3650 | 0.0133 | - |
| 0.2626 | 3660 | 0.0148 | - |
| 0.2633 | 3670 | 0.0121 | - |
| 0.2640 | 3680 | 0.0119 | - |
| 0.2647 | 3690 | 0.0183 | - |
| 0.2655 | 3700 | 0.0178 | - |
| 0.2662 | 3710 | 0.0139 | - |
| 0.2669 | 3720 | 0.0171 | - |
| 0.2676 | 3730 | 0.0155 | - |
| 0.2683 | 3740 | 0.0173 | - |
| 0.2690 | 3750 | 0.0158 | 0.0488 |
| 0.2698 | 3760 | 0.0125 | - |
| 0.2705 | 3770 | 0.0137 | - |
| 0.2712 | 3780 | 0.0184 | - |
| 0.2719 | 3790 | 0.0117 | - |
| 0.2726 | 3800 | 0.0189 | - |
| 0.2734 | 3810 | 0.0163 | - |
| 0.2741 | 3820 | 0.0163 | - |
| 0.2748 | 3830 | 0.0212 | - |
| 0.2755 | 3840 | 0.0133 | - |
| 0.2762 | 3850 | 0.0168 | - |
| 0.2769 | 3860 | 0.0197 | - |
| 0.2777 | 3870 | 0.0184 | - |
| 0.2784 | 3880 | 0.017 | - |
| 0.2791 | 3890 | 0.0173 | - |
| 0.2798 | 3900 | 0.0165 | 0.0511 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.2.2
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
| 0.2511 | 3500 | 0.0178 | - |
| 0.2518 | 3510 | 0.0101 | - |
| 0.2525 | 3520 | 0.0154 | - |
| 0.2533 | 3530 | 0.0204 | - |
| 0.2540 | 3540 | 0.0119 | - |
| 0.2547 | 3550 | 0.0158 | - |
| 0.2554 | 3560 | 0.0161 | - |
| 0.2561 | 3570 | 0.0116 | - |
| 0.2569 | 3580 | 0.0183 | - |
| 0.2576 | 3590 | 0.0125 | - |
| 0.2583 | 3600 | 0.0148 | 0.0475 |
| 0.2590 | 3610 | 0.0184 | - |
| 0.2597 | 3620 | 0.0169 | - |
| 0.2604 | 3630 | 0.0135 | - |
| 0.2612 | 3640 | 0.0138 | - |
| 0.2619 | 3650 | 0.0133 | - |
| 0.2626 | 3660 | 0.0148 | - |
| 0.2633 | 3670 | 0.0121 | - |
| 0.2640 | 3680 | 0.0119 | - |
| 0.2647 | 3690 | 0.0183 | - |
| 0.2655 | 3700 | 0.0178 | - |
| 0.2662 | 3710 | 0.0139 | - |
| 0.2669 | 3720 | 0.0171 | - |
| 0.2676 | 3730 | 0.0155 | - |
| 0.2683 | 3740 | 0.0173 | - |
| 0.2690 | 3750 | 0.0158 | 0.0488 |
| 0.2698 | 3760 | 0.0125 | - |
| 0.2705 | 3770 | 0.0137 | - |
| 0.2712 | 3780 | 0.0184 | - |
| 0.2719 | 3790 | 0.0117 | - |
| 0.2726 | 3800 | 0.0189 | - |
| 0.2734 | 3810 | 0.0163 | - |
| 0.2741 | 3820 | 0.0163 | - |
| 0.2748 | 3830 | 0.0212 | - |
| 0.2755 | 3840 | 0.0133 | - |
| 0.2762 | 3850 | 0.0168 | - |
| 0.2769 | 3860 | 0.0197 | - |
| 0.2777 | 3870 | 0.0184 | - |
| 0.2784 | 3880 | 0.017 | - |
| 0.2791 | 3890 | 0.0173 | - |
| 0.2798 | 3900 | 0.0165 | 0.0511 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.2.2
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
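As a quick sanity check of the checkpoint summarized above, the sketch below shows how a fine-tuned Sentence Transformers model like this one could be loaded and used for similarity scoring. The repository id is a placeholder (not this model's actual id) and the sentences are invented; this is a minimal illustration, not the card's published usage.
```python
from sentence_transformers import SentenceTransformer, util

# Placeholder repo id -- substitute the actual fine-tuned checkpoint.
model = SentenceTransformer("your-username/your-finetuned-model")

sentences = [
    "How do I reset my password?",
    "I forgot my password and need to change it.",
]
embeddings = model.encode(sentences)

# Cosine similarity, matching the CosineSimilarityLoss training objective.
print(float(util.cos_sim(embeddings[0], embeddings[1])))
```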
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "Snowflake/snowflake-arctic-embed-l-v2.0", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "loss:CosineSimilarityLoss"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,423 |
HussienAhmad/SFT_GradProject
|
HussienAhmad
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"region:us"
] | 2024-11-21T00:27:39Z |
2025-03-01T17:47:18+00:00
| 81 | 0 |
---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: The chatbot understood my queries perfectly.
- text: who are you
- text: I love the interface of the website.
- text: au revoir
- text: I like how personalized the recommendations feel.
inference: true
model-index:
- name: SetFit with sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9047619047619048
name: Accuracy
---
# SetFit with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
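For intuition, here is a minimal sketch of those two stages using plain sentence-transformers and scikit-learn: texts are embedded with the (contrastively fine-tuned) body, then a logistic-regression head is fit on the embeddings. The texts and labels below are toy examples, and the sketch skips stage 1 itself (the real SetFit trainer performs the contrastive fine-tuning too); it only illustrates the idea.
```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Stage 1 output: an embedding body (here the untouched base model).
body = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

train_texts = ["hello there", "gotta go", "how to contact customer service"]
train_labels = ["greet-hi", "greet-good_bye", "website-information"]

# Stage 2: fit a logistic-regression head on the sentence embeddings.
head = LogisticRegression()
head.fit(body.encode(train_texts), train_labels)

print(head.predict(body.encode(["catch you later"])))
```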
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| greet-who_are_you | <ul><li>'can you answer any question?'</li><li>'how smart are you?'</li><li>'are you a chatbot?'</li></ul> |
| general-questions | <ul><li>'how does adaptive learning personalize my experience'</li><li>'what is the significance of feedback in online learning'</li><li>'what is the role of certifications in career development'</li></ul> |
| feedback | <ul><li>'Excellent design and usability!'</li><li>'The course variety is amazing!'</li><li>'I experienced technical issues during live sessions.'</li></ul> |
| greet-good_bye | <ul><li>'gotta go'</li><li>'catch you later'</li><li>'take it easy'</li></ul> |
| greet-hi | <ul><li>'Coucou!'</li><li>'Salutations!'</li><li>'Salut'</li></ul> |
| recommendations | <ul><li>'what courses help with productivity skills'</li><li>'can you recommend some quick short-term courses'</li><li>'can you recommend some career-oriented courses'</li></ul> |
| website-information | <ul><li>'how to enable or disable notifications'</li><li>'where can I update my payment information'</li><li>'how to contact customer service'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9048 |
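Assuming access to a labelled held-out split (the one used for this card is not published), the accuracy above could be re-checked along these lines; the test examples below are hypothetical stand-ins.
```python
from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("HussienAhmad/SFT_GradProject")

# Hypothetical held-out examples standing in for the real test split.
test_texts = ["au revoir", "I love the interface of the website."]
test_labels = ["greet-good_bye", "feedback"]

print(accuracy_score(test_labels, model.predict(test_texts)))
```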
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("HussienAhmad/SFT_GradProject")
# Run inference
preds = model("au revoir")
```
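Continuing from the snippet above, inference also works on batches, and recent `setfit` releases expose per-class probabilities from the classification head (check that your installed version supports `predict_proba`):
```python
texts = ["catch you later", "where can I update my payment information"]
print(model.predict(texts))        # one predicted label per input
print(model.predict_proba(texts))  # per-class probabilities, if available in your setfit version
```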
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 5.1950 | 11 |
| Label | Training Sample Count |
|:--------------------|:----------------------|
| greet-hi | 39 |
| greet-who_are_you | 37 |
| greet-good_bye | 38 |
| general-questions | 43 |
| recommendations | 45 |
| website-information | 43 |
| feedback | 37 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
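These names map directly onto `setfit.TrainingArguments`, so the run could plausibly be reconstructed as sketched below. The training data here is a placeholder (the card's actual dataset is not published), and `load_best_model_at_end` is omitted because it requires an evaluation split.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder data -- the card's actual training set is not published.
train_ds = Dataset.from_dict({
    "text": ["Salut", "gotta go", "how to contact customer service"],
    "label": ["greet-hi", "greet-good_bye", "website-information"],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

args = TrainingArguments(
    batch_size=(4, 4),
    num_epochs=(4, 4),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```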
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.552 | - |
| 0.0029 | 50 | 0.2704 | - |
| 0.0059 | 100 | 0.2701 | - |
| 0.0088 | 150 | 0.2926 | - |
| 0.0117 | 200 | 0.2647 | - |
| 0.0147 | 250 | 0.2741 | - |
| 0.0176 | 300 | 0.2343 | - |
| 0.0206 | 350 | 0.2427 | - |
| 0.0235 | 400 | 0.241 | - |
| 0.0264 | 450 | 0.2624 | - |
| 0.0294 | 500 | 0.2097 | - |
| 0.0323 | 550 | 0.2112 | - |
| 0.0352 | 600 | 0.1741 | - |
| 0.0382 | 650 | 0.1871 | - |
| 0.0411 | 700 | 0.1554 | - |
| 0.0441 | 750 | 0.175 | - |
| 0.0470 | 800 | 0.1497 | - |
| 0.0499 | 850 | 0.1465 | - |
| 0.0529 | 900 | 0.1362 | - |
| 0.0558 | 950 | 0.1324 | - |
| 0.0587 | 1000 | 0.1349 | - |
| 0.0617 | 1050 | 0.1222 | - |
| 0.0646 | 1100 | 0.1129 | - |
| 0.0675 | 1150 | 0.1072 | - |
| 0.0705 | 1200 | 0.105 | - |
| 0.0734 | 1250 | 0.0929 | - |
| 0.0764 | 1300 | 0.0975 | - |
| 0.0793 | 1350 | 0.0849 | - |
| 0.0822 | 1400 | 0.0764 | - |
| 0.0852 | 1450 | 0.0579 | - |
| 0.0881 | 1500 | 0.0719 | - |
| 0.0910 | 1550 | 0.0567 | - |
| 0.0940 | 1600 | 0.0431 | - |
| 0.0969 | 1650 | 0.0564 | - |
| 0.0999 | 1700 | 0.0439 | - |
| 0.1028 | 1750 | 0.045 | - |
| 0.1057 | 1800 | 0.0584 | - |
| 0.1087 | 1850 | 0.0315 | - |
| 0.1116 | 1900 | 0.045 | - |
| 0.1145 | 1950 | 0.0247 | - |
| 0.1175 | 2000 | 0.0329 | - |
| 0.1204 | 2050 | 0.0261 | - |
| 0.1233 | 2100 | 0.0332 | - |
| 0.1263 | 2150 | 0.0319 | - |
| 0.1292 | 2200 | 0.0195 | - |
| 0.1322 | 2250 | 0.0344 | - |
| 0.1351 | 2300 | 0.0147 | - |
| 0.1380 | 2350 | 0.0157 | - |
| 0.1410 | 2400 | 0.0147 | - |
| 0.1439 | 2450 | 0.0186 | - |
| 0.1468 | 2500 | 0.0099 | - |
| 0.1498 | 2550 | 0.0146 | - |
| 0.1527 | 2600 | 0.0103 | - |
| 0.1557 | 2650 | 0.0221 | - |
| 0.1586 | 2700 | 0.0135 | - |
| 0.1615 | 2750 | 0.0145 | - |
| 0.1645 | 2800 | 0.0169 | - |
| 0.1674 | 2850 | 0.0133 | - |
| 0.1703 | 2900 | 0.0109 | - |
| 0.1733 | 2950 | 0.0096 | - |
| 0.1762 | 3000 | 0.0197 | - |
| 0.1791 | 3050 | 0.0086 | - |
| 0.1821 | 3100 | 0.0074 | - |
| 0.1850 | 3150 | 0.008 | - |
| 0.1880 | 3200 | 0.0077 | - |
| 0.1909 | 3250 | 0.0033 | - |
| 0.1938 | 3300 | 0.004 | - |
| 0.1968 | 3350 | 0.0029 | - |
| 0.1997 | 3400 | 0.0032 | - |
| 0.2026 | 3450 | 0.0123 | - |
| 0.2056 | 3500 | 0.0092 | - |
| 0.2085 | 3550 | 0.0028 | - |
| 0.2115 | 3600 | 0.005 | - |
| 0.2144 | 3650 | 0.0042 | - |
| 0.2173 | 3700 | 0.004 | - |
| 0.2203 | 3750 | 0.003 | - |
| 0.2232 | 3800 | 0.0093 | - |
| 0.2261 | 3850 | 0.0016 | - |
| 0.2291 | 3900 | 0.0075 | - |
| 0.2320 | 3950 | 0.0018 | - |
| 0.2349 | 4000 | 0.0019 | - |
| 0.2379 | 4050 | 0.0013 | - |
| 0.2408 | 4100 | 0.0054 | - |
| 0.2438 | 4150 | 0.0023 | - |
| 0.2467 | 4200 | 0.0015 | - |
| 0.2496 | 4250 | 0.006 | - |
| 0.2526 | 4300 | 0.0093 | - |
| 0.2555 | 4350 | 0.0026 | - |
| 0.2584 | 4400 | 0.0009 | - |
| 0.2614 | 4450 | 0.0055 | - |
| 0.2643 | 4500 | 0.0032 | - |
| 0.2673 | 4550 | 0.0043 | - |
| 0.2702 | 4600 | 0.005 | - |
| 0.2731 | 4650 | 0.0043 | - |
| 0.2761 | 4700 | 0.001 | - |
| 0.2790 | 4750 | 0.0081 | - |
| 0.2819 | 4800 | 0.0039 | - |
| 0.2849 | 4850 | 0.0037 | - |
| 0.2878 | 4900 | 0.0029 | - |
| 0.2907 | 4950 | 0.0019 | - |
| 0.2937 | 5000 | 0.0085 | - |
| 0.2966 | 5050 | 0.0024 | - |
| 0.2996 | 5100 | 0.0008 | - |
| 0.3025 | 5150 | 0.0006 | - |
| 0.3054 | 5200 | 0.0008 | - |
| 0.3084 | 5250 | 0.0047 | - |
| 0.3113 | 5300 | 0.0045 | - |
| 0.3142 | 5350 | 0.0051 | - |
| 0.3172 | 5400 | 0.0019 | - |
| 0.3201 | 5450 | 0.0005 | - |
| 0.3231 | 5500 | 0.0052 | - |
| 0.3260 | 5550 | 0.0095 | - |
| 0.3289 | 5600 | 0.0027 | - |
| 0.3319 | 5650 | 0.0089 | - |
| 0.3348 | 5700 | 0.0014 | - |
| 0.3377 | 5750 | 0.0005 | - |
| 0.3407 | 5800 | 0.0092 | - |
| 0.3436 | 5850 | 0.0005 | - |
| 0.3465 | 5900 | 0.0054 | - |
| 0.3495 | 5950 | 0.0072 | - |
| 0.3524 | 6000 | 0.012 | - |
| 0.3554 | 6050 | 0.0029 | - |
| 0.3583 | 6100 | 0.0016 | - |
| 0.3612 | 6150 | 0.0005 | - |
| 0.3642 | 6200 | 0.0006 | - |
| 0.3671 | 6250 | 0.0061 | - |
| 0.3700 | 6300 | 0.0007 | - |
| 0.3730 | 6350 | 0.0039 | - |
| 0.3759 | 6400 | 0.0061 | - |
| 0.3789 | 6450 | 0.004 | - |
| 0.3818 | 6500 | 0.0005 | - |
| 0.3847 | 6550 | 0.0027 | - |
| 0.3877 | 6600 | 0.0007 | - |
| 0.3906 | 6650 | 0.0051 | - |
| 0.3935 | 6700 | 0.0005 | - |
| 0.3965 | 6750 | 0.0049 | - |
| 0.3994 | 6800 | 0.007 | - |
| 0.4023 | 6850 | 0.0022 | - |
| 0.4053 | 6900 | 0.0033 | - |
| 0.4082 | 6950 | 0.0027 | - |
| 0.4112 | 7000 | 0.0004 | - |
| 0.4141 | 7050 | 0.0049 | - |
| 0.4170 | 7100 | 0.0031 | - |
| 0.4200 | 7150 | 0.0014 | - |
| 0.4229 | 7200 | 0.0007 | - |
| 0.4258 | 7250 | 0.0021 | - |
| 0.4288 | 7300 | 0.002 | - |
| 0.4317 | 7350 | 0.0015 | - |
| 0.4347 | 7400 | 0.0004 | - |
| 0.4376 | 7450 | 0.0028 | - |
| 0.4405 | 7500 | 0.0073 | - |
| 0.4435 | 7550 | 0.0006 | - |
| 0.4464 | 7600 | 0.0061 | - |
| 0.4493 | 7650 | 0.0062 | - |
| 0.4523 | 7700 | 0.0017 | - |
| 0.4552 | 7750 | 0.0012 | - |
| 0.4581 | 7800 | 0.0017 | - |
| 0.4611 | 7850 | 0.0013 | - |
| 0.4640 | 7900 | 0.0004 | - |
| 0.4670 | 7950 | 0.0013 | - |
| 0.4699 | 8000 | 0.0004 | - |
| 0.4728 | 8050 | 0.0021 | - |
| 0.4758 | 8100 | 0.0011 | - |
| 0.4787 | 8150 | 0.0012 | - |
| 0.4816 | 8200 | 0.0023 | - |
| 0.4846 | 8250 | 0.0027 | - |
| 0.4875 | 8300 | 0.001 | - |
| 0.4905 | 8350 | 0.0047 | - |
| 0.4934 | 8400 | 0.0011 | - |
| 0.4963 | 8450 | 0.0005 | - |
| 0.4993 | 8500 | 0.0008 | - |
| 0.5022 | 8550 | 0.0018 | - |
| 0.5051 | 8600 | 0.0003 | - |
| 0.5081 | 8650 | 0.0005 | - |
| 0.5110 | 8700 | 0.0018 | - |
| 0.5140 | 8750 | 0.0024 | - |
| 0.5169 | 8800 | 0.0006 | - |
| 0.5198 | 8850 | 0.001 | - |
| 0.5228 | 8900 | 0.0015 | - |
| 0.5257 | 8950 | 0.0003 | - |
| 0.5286 | 9000 | 0.0007 | - |
| 0.5316 | 9050 | 0.0003 | - |
| 0.5345 | 9100 | 0.0024 | - |
| 0.5374 | 9150 | 0.0005 | - |
| 0.5404 | 9200 | 0.0012 | - |
| 0.5433 | 9250 | 0.0152 | - |
| 0.5463 | 9300 | 0.0012 | - |
| 0.5492 | 9350 | 0.0023 | - |
| 0.5521 | 9400 | 0.0013 | - |
| 0.5551 | 9450 | 0.0006 | - |
| 0.5580 | 9500 | 0.0024 | - |
| 0.5609 | 9550 | 0.0026 | - |
| 0.5639 | 9600 | 0.0006 | - |
| 0.5668 | 9650 | 0.0021 | - |
| 0.5698 | 9700 | 0.0008 | - |
| 0.5727 | 9750 | 0.0003 | - |
| 0.5756 | 9800 | 0.0016 | - |
| 0.5786 | 9850 | 0.0004 | - |
| 0.5815 | 9900 | 0.0018 | - |
| 0.5844 | 9950 | 0.0039 | - |
| 0.5874 | 10000 | 0.002 | - |
| 0.5903 | 10050 | 0.001 | - |
| 0.5932 | 10100 | 0.0005 | - |
| 0.5962 | 10150 | 0.0034 | - |
| 0.5991 | 10200 | 0.0003 | - |
| 0.6021 | 10250 | 0.0002 | - |
| 0.6050 | 10300 | 0.0022 | - |
| 0.6079 | 10350 | 0.0016 | - |
| 0.6109 | 10400 | 0.0015 | - |
| 0.6138 | 10450 | 0.0016 | - |
| 0.6167 | 10500 | 0.0023 | - |
| 0.6197 | 10550 | 0.0004 | - |
| 0.6226 | 10600 | 0.0022 | - |
| 0.6256 | 10650 | 0.0018 | - |
| 0.6285 | 10700 | 0.0003 | - |
| 0.6314 | 10750 | 0.0002 | - |
| 0.6344 | 10800 | 0.0009 | - |
| 0.6373 | 10850 | 0.0003 | - |
| 0.6402 | 10900 | 0.0016 | - |
| 0.6432 | 10950 | 0.0035 | - |
| 0.6461 | 11000 | 0.0012 | - |
| 0.6490 | 11050 | 0.0013 | - |
| 0.6520 | 11100 | 0.0036 | - |
| 0.6549 | 11150 | 0.0003 | - |
| 0.6579 | 11200 | 0.0008 | - |
| 0.6608 | 11250 | 0.0006 | - |
| 0.6637 | 11300 | 0.0005 | - |
| 0.6667 | 11350 | 0.0003 | - |
| 0.6696 | 11400 | 0.0039 | - |
| 0.6725 | 11450 | 0.0061 | - |
| 0.6755 | 11500 | 0.0003 | - |
| 0.6784 | 11550 | 0.0002 | - |
| 0.6814 | 11600 | 0.002 | - |
| 0.6843 | 11650 | 0.0045 | - |
| 0.6872 | 11700 | 0.0003 | - |
| 0.6902 | 11750 | 0.0003 | - |
| 0.6931 | 11800 | 0.0019 | - |
| 0.6960 | 11850 | 0.0042 | - |
| 0.6990 | 11900 | 0.0009 | - |
| 0.7019 | 11950 | 0.0034 | - |
| 0.7048 | 12000 | 0.0009 | - |
| 0.7078 | 12050 | 0.0018 | - |
| 0.7107 | 12100 | 0.0013 | - |
| 0.7137 | 12150 | 0.0007 | - |
| 0.7166 | 12200 | 0.0002 | - |
| 0.7195 | 12250 | 0.0012 | - |
| 0.7225 | 12300 | 0.0024 | - |
| 0.7254 | 12350 | 0.0007 | - |
| 0.7283 | 12400 | 0.0001 | - |
| 0.7313 | 12450 | 0.0002 | - |
| 0.7342 | 12500 | 0.0002 | - |
| 0.7372 | 12550 | 0.0021 | - |
| 0.7401 | 12600 | 0.0017 | - |
| 0.7430 | 12650 | 0.0004 | - |
| 0.7460 | 12700 | 0.0007 | - |
| 0.7489 | 12750 | 0.0002 | - |
| 0.7518 | 12800 | 0.0033 | - |
| 0.7548 | 12850 | 0.0004 | - |
| 0.7577 | 12900 | 0.0011 | - |
| 0.7606 | 12950 | 0.0039 | - |
| 0.7636 | 13000 | 0.0005 | - |
| 0.7665 | 13050 | 0.0009 | - |
| 0.7695 | 13100 | 0.0021 | - |
| 0.7724 | 13150 | 0.0035 | - |
| 0.7753 | 13200 | 0.0036 | - |
| 0.7783 | 13250 | 0.0005 | - |
| 0.7812 | 13300 | 0.0002 | - |
| 0.7841 | 13350 | 0.0006 | - |
| 0.7871 | 13400 | 0.0002 | - |
| 0.7900 | 13450 | 0.0002 | - |
| 0.7930 | 13500 | 0.004 | - |
| 0.7959 | 13550 | 0.0008 | - |
| 0.7988 | 13600 | 0.0009 | - |
| 0.8018 | 13650 | 0.0032 | - |
| 0.8047 | 13700 | 0.001 | - |
| 0.8076 | 13750 | 0.0013 | - |
| 0.8106 | 13800 | 0.0021 | - |
| 0.8135 | 13850 | 0.0006 | - |
| 0.8164 | 13900 | 0.0023 | - |
| 0.8194 | 13950 | 0.0004 | - |
| 0.8223 | 14000 | 0.0002 | - |
| 0.8253 | 14050 | 0.0051 | - |
| 0.8282 | 14100 | 0.0014 | - |
| 0.8311 | 14150 | 0.0008 | - |
| 0.8341 | 14200 | 0.0027 | - |
| 0.8370 | 14250 | 0.0013 | - |
| 0.8399 | 14300 | 0.0017 | - |
| 0.8429 | 14350 | 0.0015 | - |
| 0.8458 | 14400 | 0.0002 | - |
| 0.8488 | 14450 | 0.0002 | - |
| 0.8517 | 14500 | 0.0033 | - |
| 0.8546 | 14550 | 0.0005 | - |
| 0.8576 | 14600 | 0.0005 | - |
| 0.8605 | 14650 | 0.0047 | - |
| 0.8634 | 14700 | 0.0012 | - |
| 0.8664 | 14750 | 0.0013 | - |
| 0.8693 | 14800 | 0.0034 | - |
| 0.8722 | 14850 | 0.0009 | - |
| 0.8752 | 14900 | 0.0054 | - |
| 0.8781 | 14950 | 0.0006 | - |
| 0.8811 | 15000 | 0.0003 | - |
| 0.8840 | 15050 | 0.002 | - |
| 0.8869 | 15100 | 0.0017 | - |
| 0.8899 | 15150 | 0.0002 | - |
| 0.8928 | 15200 | 0.0043 | - |
| 0.8957 | 15250 | 0.0012 | - |
| 0.8987 | 15300 | 0.0005 | - |
| 0.9016 | 15350 | 0.0012 | - |
| 0.9046 | 15400 | 0.0007 | - |
| 0.9075 | 15450 | 0.0002 | - |
| 0.9104 | 15500 | 0.0027 | - |
| 0.9134 | 15550 | 0.0002 | - |
| 0.9163 | 15600 | 0.0012 | - |
| 0.9192 | 15650 | 0.0039 | - |
| 0.9222 | 15700 | 0.001 | - |
| 0.9251 | 15750 | 0.0038 | - |
| 0.9280 | 15800 | 0.0019 | - |
| 0.9310 | 15850 | 0.0028 | - |
| 0.9339 | 15900 | 0.0001 | - |
| 0.9369 | 15950 | 0.0011 | - |
| 0.9398 | 16000 | 0.001 | - |
| 0.9427 | 16050 | 0.0004 | - |
| 0.9457 | 16100 | 0.0012 | - |
| 0.9486 | 16150 | 0.0019 | - |
| 0.9515 | 16200 | 0.0035 | - |
| 0.9545 | 16250 | 0.005 | - |
| 0.9574 | 16300 | 0.001 | - |
| 0.9604 | 16350 | 0.0021 | - |
| 0.9633 | 16400 | 0.0002 | - |
| 0.9662 | 16450 | 0.0013 | - |
| 0.9692 | 16500 | 0.0002 | - |
| 0.9721 | 16550 | 0.0005 | - |
| 0.9750 | 16600 | 0.0012 | - |
| 0.9780 | 16650 | 0.0009 | - |
| 0.9809 | 16700 | 0.0012 | - |
| 0.9838 | 16750 | 0.0028 | - |
| 0.9868 | 16800 | 0.0046 | - |
| 0.9897 | 16850 | 0.0029 | - |
| 0.9927 | 16900 | 0.0001 | - |
| 0.9956 | 16950 | 0.0043 | - |
| 0.9985 | 17000 | 0.0038 | - |
| 1.0 | 17025 | - | 0.0476 |
| 1.0015 | 17050 | 0.0007 | - |
| 1.0044 | 17100 | 0.0001 | - |
| 1.0073 | 17150 | 0.0001 | - |
| 1.0103 | 17200 | 0.0011 | - |
| 1.0132 | 17250 | 0.0023 | - |
| 1.0162 | 17300 | 0.0013 | - |
| 1.0191 | 17350 | 0.0007 | - |
| 1.0220 | 17400 | 0.0004 | - |
| 1.0250 | 17450 | 0.0001 | - |
| 1.0279 | 17500 | 0.0021 | - |
| 1.0308 | 17550 | 0.0001 | - |
| 1.0338 | 17600 | 0.0005 | - |
| 1.0367 | 17650 | 0.0009 | - |
| 1.0396 | 17700 | 0.0001 | - |
| 1.0426 | 17750 | 0.0018 | - |
| 1.0455 | 17800 | 0.0012 | - |
| 1.0485 | 17850 | 0.0005 | - |
| 1.0514 | 17900 | 0.0006 | - |
| 1.0543 | 17950 | 0.0005 | - |
| 1.0573 | 18000 | 0.0003 | - |
| 1.0602 | 18050 | 0.0041 | - |
| 1.0631 | 18100 | 0.0004 | - |
| 1.0661 | 18150 | 0.0001 | - |
| 1.0690 | 18200 | 0.0008 | - |
| 1.0720 | 18250 | 0.0002 | - |
| 1.0749 | 18300 | 0.0007 | - |
| 1.0778 | 18350 | 0.0016 | - |
| 1.0808 | 18400 | 0.002 | - |
| 1.0837 | 18450 | 0.0004 | - |
| 1.0866 | 18500 | 0.0016 | - |
| 1.0896 | 18550 | 0.0015 | - |
| 1.0925 | 18600 | 0.0003 | - |
| 1.0954 | 18650 | 0.003 | - |
| 1.0984 | 18700 | 0.0011 | - |
| 1.1013 | 18750 | 0.0019 | - |
| 1.1043 | 18800 | 0.0032 | - |
| 1.1072 | 18850 | 0.0001 | - |
| 1.1101 | 18900 | 0.0005 | - |
| 1.1131 | 18950 | 0.0012 | - |
| 1.1160 | 19000 | 0.0019 | - |
| 1.1189 | 19050 | 0.0001 | - |
| 1.1219 | 19100 | 0.0029 | - |
| 1.1248 | 19150 | 0.005 | - |
| 1.1278 | 19200 | 0.0001 | - |
| 1.1307 | 19250 | 0.0001 | - |
| 1.1336 | 19300 | 0.005 | - |
| 1.1366 | 19350 | 0.0021 | - |
| 1.1395 | 19400 | 0.0014 | - |
| 1.1424 | 19450 | 0.0002 | - |
| 1.1454 | 19500 | 0.0049 | - |
| 1.1483 | 19550 | 0.0002 | - |
| 1.1512 | 19600 | 0.001 | - |
| 1.1542 | 19650 | 0.0027 | - |
| 1.1571 | 19700 | 0.0033 | - |
| 1.1601 | 19750 | 0.0023 | - |
| 1.1630 | 19800 | 0.0009 | - |
| 1.1659 | 19850 | 0.0021 | - |
| 1.1689 | 19900 | 0.0003 | - |
| 1.1718 | 19950 | 0.0009 | - |
| 1.1747 | 20000 | 0.0001 | - |
| 1.1777 | 20050 | 0.0001 | - |
| 1.1806 | 20100 | 0.0007 | - |
| 1.1836 | 20150 | 0.0002 | - |
| 1.1865 | 20200 | 0.0001 | - |
| 1.1894 | 20250 | 0.0013 | - |
| 1.1924 | 20300 | 0.0001 | - |
| 1.1953 | 20350 | 0.0001 | - |
| 1.1982 | 20400 | 0.0016 | - |
| 1.2012 | 20450 | 0.0046 | - |
| 1.2041 | 20500 | 0.0007 | - |
| 1.2070 | 20550 | 0.0035 | - |
| 1.2100 | 20600 | 0.0006 | - |
| 1.2129 | 20650 | 0.0001 | - |
| 1.2159 | 20700 | 0.0007 | - |
| 1.2188 | 20750 | 0.0004 | - |
| 1.2217 | 20800 | 0.0054 | - |
| 1.2247 | 20850 | 0.0012 | - |
| 1.2276 | 20900 | 0.0002 | - |
| 1.2305 | 20950 | 0.0008 | - |
| 1.2335 | 21000 | 0.0001 | - |
| 1.2364 | 21050 | 0.0008 | - |
| 1.2394 | 21100 | 0.0001 | - |
| 1.2423 | 21150 | 0.0039 | - |
| 1.2452 | 21200 | 0.0001 | - |
| 1.2482 | 21250 | 0.0038 | - |
| 1.2511 | 21300 | 0.0012 | - |
| 1.2540 | 21350 | 0.0019 | - |
| 1.2570 | 21400 | 0.0012 | - |
| 1.2599 | 21450 | 0.0001 | - |
| 1.2628 | 21500 | 0.0006 | - |
| 1.2658 | 21550 | 0.0016 | - |
| 1.2687 | 21600 | 0.0001 | - |
| 1.2717 | 21650 | 0.001 | - |
| 1.2746 | 21700 | 0.0005 | - |
| 1.2775 | 21750 | 0.0021 | - |
| 1.2805 | 21800 | 0.0001 | - |
| 1.2834 | 21850 | 0.0003 | - |
| 1.2863 | 21900 | 0.0004 | - |
| 1.2893 | 21950 | 0.0001 | - |
| 1.2922 | 22000 | 0.0003 | - |
| 1.2952 | 22050 | 0.0003 | - |
| 1.2981 | 22100 | 0.0017 | - |
| 1.3010 | 22150 | 0.0004 | - |
| 1.3040 | 22200 | 0.0016 | - |
| 1.3069 | 22250 | 0.0009 | - |
| 1.3098 | 22300 | 0.0006 | - |
| 1.3128 | 22350 | 0.0002 | - |
| 1.3157 | 22400 | 0.0002 | - |
| 1.3186 | 22450 | 0.0006 | - |
| 1.3216 | 22500 | 0.0015 | - |
| 1.3245 | 22550 | 0.0019 | - |
| 1.3275 | 22600 | 0.0004 | - |
| 1.3304 | 22650 | 0.0017 | - |
| 1.3333 | 22700 | 0.0042 | - |
| 1.3363 | 22750 | 0.0011 | - |
| 1.3392 | 22800 | 0.0026 | - |
| 1.3421 | 22850 | 0.0036 | - |
| 1.3451 | 22900 | 0.0014 | - |
| 1.3480 | 22950 | 0.0002 | - |
| 1.3510 | 23000 | 0.0001 | - |
| 1.3539 | 23050 | 0.0001 | - |
| 1.3568 | 23100 | 0.0016 | - |
| 1.3598 | 23150 | 0.0002 | - |
| 1.3627 | 23200 | 0.0022 | - |
| 1.3656 | 23250 | 0.0001 | - |
| 1.3686 | 23300 | 0.0001 | - |
| 1.3715 | 23350 | 0.0011 | - |
| 1.3744 | 23400 | 0.0013 | - |
| 1.3774 | 23450 | 0.0004 | - |
| 1.3803 | 23500 | 0.0013 | - |
| 1.3833 | 23550 | 0.0015 | - |
| 1.3862 | 23600 | 0.0003 | - |
| 1.3891 | 23650 | 0.004 | - |
| 1.3921 | 23700 | 0.0035 | - |
| 1.3950 | 23750 | 0.0019 | - |
| 1.3979 | 23800 | 0.0018 | - |
| 1.4009 | 23850 | 0.0017 | - |
| 1.4038 | 23900 | 0.0001 | - |
| 1.4068 | 23950 | 0.0008 | - |
| 1.4097 | 24000 | 0.0012 | - |
| 1.4126 | 24050 | 0.0001 | - |
| 1.4156 | 24100 | 0.0023 | - |
| 1.4185 | 24150 | 0.0004 | - |
| 1.4214 | 24200 | 0.0001 | - |
| 1.4244 | 24250 | 0.0001 | - |
| 1.4273 | 24300 | 0.0015 | - |
| 1.4302 | 24350 | 0.0005 | - |
| 1.4332 | 24400 | 0.0027 | - |
| 1.4361 | 24450 | 0.0017 | - |
| 1.4391 | 24500 | 0.0001 | - |
| 1.4420 | 24550 | 0.0012 | - |
| 1.4449 | 24600 | 0.0011 | - |
| 1.4479 | 24650 | 0.0035 | - |
| 1.4508 | 24700 | 0.0011 | - |
| 1.4537 | 24750 | 0.0008 | - |
| 1.4567 | 24800 | 0.0062 | - |
| 1.4596 | 24850 | 0.0006 | - |
| 1.4626 | 24900 | 0.0029 | - |
| 1.4655 | 24950 | 0.0001 | - |
| 1.4684 | 25000 | 0.0001 | - |
| 1.4714 | 25050 | 0.0006 | - |
| 1.4743 | 25100 | 0.0008 | - |
| 1.4772 | 25150 | 0.0035 | - |
| 1.4802 | 25200 | 0.0025 | - |
| 1.4831 | 25250 | 0.0001 | - |
| 1.4860 | 25300 | 0.0005 | - |
| 1.4890 | 25350 | 0.001 | - |
| 1.4919 | 25400 | 0.0036 | - |
| 1.4949 | 25450 | 0.0006 | - |
| 1.4978 | 25500 | 0.0001 | - |
| 1.5007 | 25550 | 0.0006 | - |
| 1.5037 | 25600 | 0.0015 | - |
| 1.5066 | 25650 | 0.0033 | - |
| 1.5095 | 25700 | 0.0037 | - |
| 1.5125 | 25750 | 0.0049 | - |
| 1.5154 | 25800 | 0.0007 | - |
| 1.5184 | 25850 | 0.0016 | - |
| 1.5213 | 25900 | 0.0002 | - |
| 1.5242 | 25950 | 0.0001 | - |
| 1.5272 | 26000 | 0.002 | - |
| 1.5301 | 26050 | 0.0006 | - |
| 1.5330 | 26100 | 0.0011 | - |
| 1.5360 | 26150 | 0.0016 | - |
| 1.5389 | 26200 | 0.004 | - |
| 1.5419 | 26250 | 0.0016 | - |
| 1.5448 | 26300 | 0.0066 | - |
| 1.5477 | 26350 | 0.0022 | - |
| 1.5507 | 26400 | 0.0008 | - |
| 1.5536 | 26450 | 0.0047 | - |
| 1.5565 | 26500 | 0.0006 | - |
| 1.5595 | 26550 | 0.0018 | - |
| 1.5624 | 26600 | 0.006 | - |
| 1.5653 | 26650 | 0.0002 | - |
| 1.5683 | 26700 | 0.0004 | - |
| 1.5712 | 26750 | 0.0007 | - |
| 1.5742 | 26800 | 0.0013 | - |
| 1.5771 | 26850 | 0.0013 | - |
| 1.5800 | 26900 | 0.0027 | - |
| 1.5830 | 26950 | 0.0008 | - |
| 1.5859 | 27000 | 0.0004 | - |
| 1.5888 | 27050 | 0.0011 | - |
| 1.5918 | 27100 | 0.0007 | - |
| 1.5947 | 27150 | 0.0035 | - |
| 1.5977 | 27200 | 0.005 | - |
| 1.6006 | 27250 | 0.0018 | - |
| 1.6035 | 27300 | 0.0001 | - |
| 1.6065 | 27350 | 0.0025 | - |
| 1.6094 | 27400 | 0.005 | - |
| 1.6123 | 27450 | 0.0009 | - |
| 1.6153 | 27500 | 0.0009 | - |
| 1.6182 | 27550 | 0.0039 | - |
| 1.6211 | 27600 | 0.0027 | - |
| 1.6241 | 27650 | 0.0018 | - |
| 1.6270 | 27700 | 0.0047 | - |
| 1.6300 | 27750 | 0.0001 | - |
| 1.6329 | 27800 | 0.0001 | - |
| 1.6358 | 27850 | 0.0001 | - |
| 1.6388 | 27900 | 0.0021 | - |
| 1.6417 | 27950 | 0.0002 | - |
| 1.6446 | 28000 | 0.0001 | - |
| 1.6476 | 28050 | 0.0012 | - |
| 1.6505 | 28100 | 0.0018 | - |
| 1.6535 | 28150 | 0.0005 | - |
| 1.6564 | 28200 | 0.0012 | - |
| 1.6593 | 28250 | 0.0049 | - |
| 1.6623 | 28300 | 0.0005 | - |
| 1.6652 | 28350 | 0.0009 | - |
| 1.6681 | 28400 | 0.0009 | - |
| 1.6711 | 28450 | 0.003 | - |
| 1.6740 | 28500 | 0.0007 | - |
| 1.6769 | 28550 | 0.0014 | - |
| 1.6799 | 28600 | 0.0005 | - |
| 1.6828 | 28650 | 0.0033 | - |
| 1.6858 | 28700 | 0.0018 | - |
| 1.6887 | 28750 | 0.0004 | - |
| 1.6916 | 28800 | 0.0014 | - |
| 1.6946 | 28850 | 0.0032 | - |
| 1.6975 | 28900 | 0.0037 | - |
| 1.7004 | 28950 | 0.006 | - |
| 1.7034 | 29000 | 0.0007 | - |
| 1.7063 | 29050 | 0.0004 | - |
| 1.7093 | 29100 | 0.0001 | - |
| 1.7122 | 29150 | 0.0001 | - |
| 1.7151 | 29200 | 0.001 | - |
| 1.7181 | 29250 | 0.001 | - |
| 1.7210 | 29300 | 0.0001 | - |
| 1.7239 | 29350 | 0.0014 | - |
| 1.7269 | 29400 | 0.0011 | - |
| 1.7298 | 29450 | 0.001 | - |
| 1.7327 | 29500 | 0.0005 | - |
| 1.7357 | 29550 | 0.001 | - |
| 1.7386 | 29600 | 0.0001 | - |
| 1.7416 | 29650 | 0.0001 | - |
| 1.7445 | 29700 | 0.0001 | - |
| 1.7474 | 29750 | 0.0006 | - |
| 1.7504 | 29800 | 0.0005 | - |
| 1.7533 | 29850 | 0.0005 | - |
| 1.7562 | 29900 | 0.001 | - |
| 1.7592 | 29950 | 0.0009 | - |
| 1.7621 | 30000 | 0.0008 | - |
| 1.7651 | 30050 | 0.0036 | - |
| 1.7680 | 30100 | 0.0078 | - |
| 1.7709 | 30150 | 0.0044 | - |
| 1.7739 | 30200 | 0.0006 | - |
| 1.7768 | 30250 | 0.0006 | - |
| 1.7797 | 30300 | 0.0007 | - |
| 1.7827 | 30350 | 0.003 | - |
| 1.7856 | 30400 | 0.0007 | - |
| 1.7885 | 30450 | 0.0006 | - |
| 1.7915 | 30500 | 0.0005 | - |
| 1.7944 | 30550 | 0.0041 | - |
| 1.7974 | 30600 | 0.0006 | - |
| 1.8003 | 30650 | 0.0005 | - |
| 1.8032 | 30700 | 0.0017 | - |
| 1.8062 | 30750 | 0.0006 | - |
| 1.8091 | 30800 | 0.0009 | - |
| 1.8120 | 30850 | 0.001 | - |
| 1.8150 | 30900 | 0.004 | - |
| 1.8179 | 30950 | 0.0014 | - |
| 1.8209 | 31000 | 0.0022 | - |
| 1.8238 | 31050 | 0.0012 | - |
| 1.8267 | 31100 | 0.0001 | - |
| 1.8297 | 31150 | 0.001 | - |
| 1.8326 | 31200 | 0.0014 | - |
| 1.8355 | 31250 | 0.0045 | - |
| 1.8385 | 31300 | 0.0012 | - |
| 1.8414 | 31350 | 0.0024 | - |
| 1.8443 | 31400 | 0.0007 | - |
| 1.8473 | 31450 | 0.0011 | - |
| 1.8502 | 31500 | 0.0012 | - |
| 1.8532 | 31550 | 0.0001 | - |
| 1.8561 | 31600 | 0.0006 | - |
| 1.8590 | 31650 | 0.0001 | - |
| 1.8620 | 31700 | 0.0031 | - |
| 1.8649 | 31750 | 0.0005 | - |
| 1.8678 | 31800 | 0.0015 | - |
| 1.8708 | 31850 | 0.0001 | - |
| 1.8737 | 31900 | 0.0009 | - |
| 1.8767 | 31950 | 0.0031 | - |
| 1.8796 | 32000 | 0.0001 | - |
| 1.8825 | 32050 | 0.0008 | - |
| 1.8855 | 32100 | 0.0001 | - |
| 1.8884 | 32150 | 0.0009 | - |
| 1.8913 | 32200 | 0.0006 | - |
| 1.8943 | 32250 | 0.0034 | - |
| 1.8972 | 32300 | 0.0006 | - |
| 1.9001 | 32350 | 0.0025 | - |
| 1.9031 | 32400 | 0.0006 | - |
| 1.9060 | 32450 | 0.0005 | - |
| 1.9090 | 32500 | 0.001 | - |
| 1.9119 | 32550 | 0.0009 | - |
| 1.9148 | 32600 | 0.0011 | - |
| 1.9178 | 32650 | 0.0001 | - |
| 1.9207 | 32700 | 0.0018 | - |
| 1.9236 | 32750 | 0.0001 | - |
| 1.9266 | 32800 | 0.0006 | - |
| 1.9295 | 32850 | 0.0007 | - |
| 1.9325 | 32900 | 0.0001 | - |
| 1.9354 | 32950 | 0.001 | - |
| 1.9383 | 33000 | 0.0024 | - |
| 1.9413 | 33050 | 0.0001 | - |
| 1.9442 | 33100 | 0.0001 | - |
| 1.9471 | 33150 | 0.0014 | - |
| 1.9501 | 33200 | 0.001 | - |
| 1.9530 | 33250 | 0.0007 | - |
| 1.9559 | 33300 | 0.0009 | - |
| 1.9589 | 33350 | 0.0001 | - |
| 1.9618 | 33400 | 0.0007 | - |
| 1.9648 | 33450 | 0.001 | - |
| 1.9677 | 33500 | 0.0001 | - |
| 1.9706 | 33550 | 0.0031 | - |
| 1.9736 | 33600 | 0.0005 | - |
| 1.9765 | 33650 | 0.0001 | - |
| 1.9794 | 33700 | 0.0001 | - |
| 1.9824 | 33750 | 0.0009 | - |
| 1.9853 | 33800 | 0.0008 | - |
| 1.9883 | 33850 | 0.0001 | - |
| 1.9912 | 33900 | 0.0027 | - |
| 1.9941 | 33950 | 0.0039 | - |
| 1.9971 | 34000 | 0.0063 | - |
| 2.0 | 34050 | 0.003 | 0.1022 |
| 2.0029 | 34100 | 0.0001 | - |
| 2.0059 | 34150 | 0.0021 | - |
| 2.0088 | 34200 | 0.0014 | - |
| 2.0117 | 34250 | 0.0001 | - |
| 2.0147 | 34300 | 0.0015 | - |
| 2.0176 | 34350 | 0.0001 | - |
| 2.0206 | 34400 | 0.003 | - |
| 2.0235 | 34450 | 0.0016 | - |
| 2.0264 | 34500 | 0.0022 | - |
| 2.0294 | 34550 | 0.0001 | - |
| 2.0323 | 34600 | 0.0001 | - |
| 2.0352 | 34650 | 0.001 | - |
| 2.0382 | 34700 | 0.0007 | - |
| 2.0411 | 34750 | 0.0009 | - |
| 2.0441 | 34800 | 0.0024 | - |
| 2.0470 | 34850 | 0.0009 | - |
| 2.0499 | 34900 | 0.0007 | - |
| 2.0529 | 34950 | 0.0011 | - |
| 2.0558 | 35000 | 0.0008 | - |
| 2.0587 | 35050 | 0.0007 | - |
| 2.0617 | 35100 | 0.0005 | - |
| 2.0646 | 35150 | 0.0001 | - |
| 2.0675 | 35200 | 0.0019 | - |
| 2.0705 | 35250 | 0.001 | - |
| 2.0734 | 35300 | 0.0048 | - |
| 2.0764 | 35350 | 0.0034 | - |
| 2.0793 | 35400 | 0.0012 | - |
| 2.0822 | 35450 | 0.0001 | - |
| 2.0852 | 35500 | 0.0001 | - |
| 2.0881 | 35550 | 0.0006 | - |
| 2.0910 | 35600 | 0.0013 | - |
| 2.0940 | 35650 | 0.0015 | - |
| 2.0969 | 35700 | 0.0004 | - |
| 2.0999 | 35750 | 0.0008 | - |
| 2.1028 | 35800 | 0.0004 | - |
| 2.1057 | 35850 | 0.0007 | - |
| 2.1087 | 35900 | 0.0001 | - |
| 2.1116 | 35950 | 0.0027 | - |
| 2.1145 | 36000 | 0.0064 | - |
| 2.1175 | 36050 | 0.0014 | - |
| 2.1204 | 36100 | 0.0012 | - |
| 2.1233 | 36150 | 0.0006 | - |
| 2.1263 | 36200 | 0.0008 | - |
| 2.1292 | 36250 | 0.0013 | - |
| 2.1322 | 36300 | 0.0028 | - |
| 2.1351 | 36350 | 0.0022 | - |
| 2.1380 | 36400 | 0.0015 | - |
| 2.1410 | 36450 | 0.0012 | - |
| 2.1439 | 36500 | 0.0012 | - |
| 2.1468 | 36550 | 0.0052 | - |
| 2.1498 | 36600 | 0.0022 | - |
| 2.1527 | 36650 | 0.0022 | - |
| 2.1557 | 36700 | 0.0004 | - |
| 2.1586 | 36750 | 0.0003 | - |
| 2.1615 | 36800 | 0.0003 | - |
| 2.1645 | 36850 | 0.0017 | - |
| 2.1674 | 36900 | 0.0007 | - |
| 2.1703 | 36950 | 0.0009 | - |
| 2.1733 | 37000 | 0.0011 | - |
| 2.1762 | 37050 | 0.0005 | - |
| 2.1791 | 37100 | 0.0022 | - |
| 2.1821 | 37150 | 0.002 | - |
| 2.1850 | 37200 | 0.0038 | - |
| 2.1880 | 37250 | 0.0042 | - |
| 2.1909 | 37300 | 0.0006 | - |
| 2.1938 | 37350 | 0.0005 | - |
| 2.1968 | 37400 | 0.0005 | - |
| 2.1997 | 37450 | 0.0005 | - |
| 2.2026 | 37500 | 0.0036 | - |
| 2.2056 | 37550 | 0.0001 | - |
| 2.2085 | 37600 | 0.0005 | - |
| 2.2115 | 37650 | 0.0011 | - |
| 2.2144 | 37700 | 0.0011 | - |
| 2.2173 | 37750 | 0.0001 | - |
| 2.2203 | 37800 | 0.0005 | - |
| 2.2232 | 37850 | 0.0009 | - |
| 2.2261 | 37900 | 0.0001 | - |
| 2.2291 | 37950 | 0.0001 | - |
| 2.2320 | 38000 | 0.0012 | - |
| 2.2349 | 38050 | 0.0004 | - |
| 2.2379 | 38100 | 0.0021 | - |
| 2.2408 | 38150 | 0.0001 | - |
| 2.2438 | 38200 | 0.0031 | - |
| 2.2467 | 38250 | 0.003 | - |
| 2.2496 | 38300 | 0.003 | - |
| 2.2526 | 38350 | 0.0005 | - |
| 2.2555 | 38400 | 0.0009 | - |
| 2.2584 | 38450 | 0.0001 | - |
| 2.2614 | 38500 | 0.0001 | - |
| 2.2643 | 38550 | 0.0006 | - |
| 2.2673 | 38600 | 0.0005 | - |
| 2.2702 | 38650 | 0.0013 | - |
| 2.2731 | 38700 | 0.0007 | - |
| 2.2761 | 38750 | 0.0008 | - |
| 2.2790 | 38800 | 0.0005 | - |
| 2.2819 | 38850 | 0.0015 | - |
| 2.2849 | 38900 | 0.0009 | - |
| 2.2878 | 38950 | 0.0006 | - |
| 2.2907 | 39000 | 0.0002 | - |
| 2.2937 | 39050 | 0.0001 | - |
| 2.2966 | 39100 | 0.0011 | - |
| 2.2996 | 39150 | 0.0005 | - |
| 2.3025 | 39200 | 0.001 | - |
| 2.3054 | 39250 | 0.0001 | - |
| 2.3084 | 39300 | 0.0013 | - |
| 2.3113 | 39350 | 0.001 | - |
| 2.3142 | 39400 | 0.0044 | - |
| 2.3172 | 39450 | 0.0011 | - |
| 2.3201 | 39500 | 0.0015 | - |
| 2.3231 | 39550 | 0.0005 | - |
| 2.3260 | 39600 | 0.0001 | - |
| 2.3289 | 39650 | 0.0001 | - |
| 2.3319 | 39700 | 0.0004 | - |
| 2.3348 | 39750 | 0.0001 | - |
| 2.3377 | 39800 | 0.001 | - |
| 2.3407 | 39850 | 0.0049 | - |
| 2.3436 | 39900 | 0.0006 | - |
| 2.3465 | 39950 | 0.0028 | - |
| 2.3495 | 40000 | 0.0001 | - |
| 2.3524 | 40050 | 0.0063 | - |
| 2.3554 | 40100 | 0.0001 | - |
| 2.3583 | 40150 | 0.0013 | - |
| 2.3612 | 40200 | 0.0024 | - |
| 2.3642 | 40250 | 0.0028 | - |
| 2.3671 | 40300 | 0.0039 | - |
| 2.3700 | 40350 | 0.0023 | - |
| 2.3730 | 40400 | 0.0009 | - |
| 2.3759 | 40450 | 0.0002 | - |
| 2.3789 | 40500 | 0.0007 | - |
| 2.3818 | 40550 | 0.0004 | - |
| 2.3847 | 40600 | 0.001 | - |
| 2.3877 | 40650 | 0.0001 | - |
| 2.3906 | 40700 | 0.0026 | - |
| 2.3935 | 40750 | 0.0001 | - |
| 2.3965 | 40800 | 0.0003 | - |
| 2.3994 | 40850 | 0.0034 | - |
| 2.4023 | 40900 | 0.0022 | - |
| 2.4053 | 40950 | 0.0006 | - |
| 2.4082 | 41000 | 0.0015 | - |
| 2.4112 | 41050 | 0.0012 | - |
| 2.4141 | 41100 | 0.0006 | - |
| 2.4170 | 41150 | 0.0003 | - |
| 2.4200 | 41200 | 0.0001 | - |
| 2.4229 | 41250 | 0.0013 | - |
| 2.4258 | 41300 | 0.0001 | - |
| 2.4288 | 41350 | 0.0021 | - |
| 2.4317 | 41400 | 0.0001 | - |
| 2.4347 | 41450 | 0.0003 | - |
| 2.4376 | 41500 | 0.001 | - |
| 2.4405 | 41550 | 0.0012 | - |
| 2.4435 | 41600 | 0.0032 | - |
| 2.4464 | 41650 | 0.0036 | - |
| 2.4493 | 41700 | 0.0008 | - |
| 2.4523 | 41750 | 0.0002 | - |
| 2.4552 | 41800 | 0.0032 | - |
| 2.4581 | 41850 | 0.0004 | - |
| 2.4611 | 41900 | 0.0014 | - |
| 2.4640 | 41950 | 0.001 | - |
| 2.4670 | 42000 | 0.0007 | - |
| 2.4699 | 42050 | 0.0011 | - |
| 2.4728 | 42100 | 0.0057 | - |
| 2.4758 | 42150 | 0.0014 | - |
| 2.4787 | 42200 | 0.0011 | - |
| 2.4816 | 42250 | 0.0007 | - |
| 2.4846 | 42300 | 0.0036 | - |
| 2.4875 | 42350 | 0.0001 | - |
| 2.4905 | 42400 | 0.0005 | - |
| 2.4934 | 42450 | 0.0037 | - |
| 2.4963 | 42500 | 0.0005 | - |
| 2.4993 | 42550 | 0.0006 | - |
| 2.5022 | 42600 | 0.0032 | - |
| 2.5051 | 42650 | 0.0001 | - |
| 2.5081 | 42700 | 0.0001 | - |
| 2.5110 | 42750 | 0.001 | - |
| 2.5140 | 42800 | 0.001 | - |
| 2.5169 | 42850 | 0.0007 | - |
| 2.5198 | 42900 | 0.0014 | - |
| 2.5228 | 42950 | 0.0002 | - |
| 2.5257 | 43000 | 0.0001 | - |
| 2.5286 | 43050 | 0.0006 | - |
| 2.5316 | 43100 | 0.0001 | - |
| 2.5345 | 43150 | 0.0001 | - |
| 2.5374 | 43200 | 0.0001 | - |
| 2.5404 | 43250 | 0.0033 | - |
| 2.5433 | 43300 | 0.0021 | - |
| 2.5463 | 43350 | 0.0008 | - |
| 2.5492 | 43400 | 0.0028 | - |
| 2.5521 | 43450 | 0.0001 | - |
| 2.5551 | 43500 | 0.0007 | - |
| 2.5580 | 43550 | 0.0005 | - |
| 2.5609 | 43600 | 0.0014 | - |
| 2.5639 | 43650 | 0.0001 | - |
| 2.5668 | 43700 | 0.0008 | - |
| 2.5698 | 43750 | 0.0006 | - |
| 2.5727 | 43800 | 0.0044 | - |
| 2.5756 | 43850 | 0.0033 | - |
| 2.5786 | 43900 | 0.001 | - |
| 2.5815 | 43950 | 0.0047 | - |
| 2.5844 | 44000 | 0.0005 | - |
| 2.5874 | 44050 | 0.0006 | - |
| 2.5903 | 44100 | 0.0004 | - |
| 2.5932 | 44150 | 0.0027 | - |
| 2.5962 | 44200 | 0.0021 | - |
| 2.5991 | 44250 | 0.0009 | - |
| 2.6021 | 44300 | 0.0013 | - |
| 2.6050 | 44350 | 0.0021 | - |
| 2.6079 | 44400 | 0.0007 | - |
| 2.6109 | 44450 | 0.0018 | - |
| 2.6138 | 44500 | 0.0006 | - |
| 2.6167 | 44550 | 0.0006 | - |
| 2.6197 | 44600 | 0.0015 | - |
| 2.6226 | 44650 | 0.0027 | - |
| 2.6256 | 44700 | 0.0001 | - |
| 2.6285 | 44750 | 0.0005 | - |
| 2.6314 | 44800 | 0.001 | - |
| 2.6344 | 44850 | 0.0029 | - |
| 2.6373 | 44900 | 0.0014 | - |
| 2.6402 | 44950 | 0.0007 | - |
| 2.6432 | 45000 | 0.0011 | - |
| 2.6461 | 45050 | 0.0016 | - |
| 2.6490 | 45100 | 0.0017 | - |
| 2.6520 | 45150 | 0.0001 | - |
| 2.6549 | 45200 | 0.0026 | - |
| 2.6579 | 45250 | 0.0011 | - |
| 2.6608 | 45300 | 0.0063 | - |
| 2.6637 | 45350 | 0.0035 | - |
| 2.6667 | 45400 | 0.0001 | - |
| 2.6696 | 45450 | 0.0001 | - |
| 2.6725 | 45500 | 0.0027 | - |
| 2.6755 | 45550 | 0.0004 | - |
| 2.6784 | 45600 | 0.0041 | - |
| 2.6814 | 45650 | 0.0024 | - |
| 2.6843 | 45700 | 0.0001 | - |
| 2.6872 | 45750 | 0.0037 | - |
| 2.6902 | 45800 | 0.0005 | - |
| 2.6931 | 45850 | 0.0001 | - |
| 2.6960 | 45900 | 0.0015 | - |
| 2.6990 | 45950 | 0.0001 | - |
| 2.7019 | 46000 | 0.0048 | - |
| 2.7048 | 46050 | 0.0007 | - |
| 2.7078 | 46100 | 0.0026 | - |
| 2.7107 | 46150 | 0.0005 | - |
| 2.7137 | 46200 | 0.0009 | - |
| 2.7166 | 46250 | 0.001 | - |
| 2.7195 | 46300 | 0.0001 | - |
| 2.7225 | 46350 | 0.0004 | - |
| 2.7254 | 46400 | 0.0011 | - |
| 2.7283 | 46450 | 0.0018 | - |
| 2.7313 | 46500 | 0.0005 | - |
| 2.7342 | 46550 | 0.0006 | - |
| 2.7372 | 46600 | 0.0001 | - |
| 2.7401 | 46650 | 0.0001 | - |
| 2.7430 | 46700 | 0.0031 | - |
| 2.7460 | 46750 | 0.0001 | - |
| 2.7489 | 46800 | 0.0005 | - |
| 2.7518 | 46850 | 0.0032 | - |
| 2.7548 | 46900 | 0.0006 | - |
| 2.7577 | 46950 | 0.0001 | - |
| 2.7606 | 47000 | 0.0006 | - |
| 2.7636 | 47050 | 0.0004 | - |
| 2.7665 | 47100 | 0.0001 | - |
| 2.7695 | 47150 | 0.0001 | - |
| 2.7724 | 47200 | 0.0001 | - |
| 2.7753 | 47250 | 0.0001 | - |
| 2.7783 | 47300 | 0.0005 | - |
| 2.7812 | 47350 | 0.0005 | - |
| 2.7841 | 47400 | 0.0011 | - |
| 2.7871 | 47450 | 0.0005 | - |
| 2.7900 | 47500 | 0.0001 | - |
| 2.7930 | 47550 | 0.0001 | - |
| 2.7959 | 47600 | 0.0031 | - |
| 2.7988 | 47650 | 0.0006 | - |
| 2.8018 | 47700 | 0.0001 | - |
| 2.8047 | 47750 | 0.0012 | - |
| 2.8076 | 47800 | 0.0005 | - |
| 2.8106 | 47850 | 0.0014 | - |
| 2.8135 | 47900 | 0.0004 | - |
| 2.8164 | 47950 | 0.0007 | - |
| 2.8194 | 48000 | 0.001 | - |
| 2.8223 | 48050 | 0.0006 | - |
| 2.8253 | 48100 | 0.0005 | - |
| 2.8282 | 48150 | 0.0009 | - |
| 2.8311 | 48200 | 0.0024 | - |
| 2.8341 | 48250 | 0.0033 | - |
| 2.8370 | 48300 | 0.0001 | - |
| 2.8399 | 48350 | 0.001 | - |
| 2.8429 | 48400 | 0.0006 | - |
| 2.8458 | 48450 | 0.0004 | - |
| 2.8488 | 48500 | 0.001 | - |
| 2.8517 | 48550 | 0.0011 | - |
| 2.8546 | 48600 | 0.0065 | - |
| 2.8576 | 48650 | 0.0008 | - |
| 2.8605 | 48700 | 0.0029 | - |
| 2.8634 | 48750 | 0.002 | - |
| 2.8664 | 48800 | 0.0001 | - |
| 2.8693 | 48850 | 0.0001 | - |
| 2.8722 | 48900 | 0.0084 | - |
| 2.8752 | 48950 | 0.0025 | - |
| 2.8781 | 49000 | 0.0027 | - |
| 2.8811 | 49050 | 0.0005 | - |
| 2.8840 | 49100 | 0.0011 | - |
| 2.8869 | 49150 | 0.0005 | - |
| 2.8899 | 49200 | 0.0041 | - |
| 2.8928 | 49250 | 0.0015 | - |
| 2.8957 | 49300 | 0.0032 | - |
| 2.8987 | 49350 | 0.0028 | - |
| 2.9016 | 49400 | 0.0004 | - |
| 2.9046 | 49450 | 0.001 | - |
| 2.9075 | 49500 | 0.0001 | - |
| 2.9104 | 49550 | 0.0001 | - |
| 2.9134 | 49600 | 0.0041 | - |
| 2.9163 | 49650 | 0.0005 | - |
| 2.9192 | 49700 | 0.0005 | - |
| 2.9222 | 49750 | 0.0001 | - |
| 2.9251 | 49800 | 0.0005 | - |
| 2.9280 | 49850 | 0.0001 | - |
| 2.9310 | 49900 | 0.0016 | - |
| 2.9339 | 49950 | 0.0013 | - |
| 2.9369 | 50000 | 0.0008 | - |
| 2.9398 | 50050 | 0.0005 | - |
| 2.9427 | 50100 | 0.0005 | - |
| 2.9457 | 50150 | 0.0004 | - |
| 2.9486 | 50200 | 0.0001 | - |
| 2.9515 | 50250 | 0.0006 | - |
| 2.9545 | 50300 | 0.0007 | - |
| 2.9574 | 50350 | 0.0039 | - |
| 2.9604 | 50400 | 0.0014 | - |
| 2.9633 | 50450 | 0.0004 | - |
| 2.9662 | 50500 | 0.0001 | - |
| 2.9692 | 50550 | 0.0007 | - |
| 2.9721 | 50600 | 0.0001 | - |
| 2.9750 | 50650 | 0.0001 | - |
| 2.9780 | 50700 | 0.0024 | - |
| 2.9809 | 50750 | 0.0052 | - |
| 2.9838 | 50800 | 0.0032 | - |
| 2.9868 | 50850 | 0.0011 | - |
| 2.9897 | 50900 | 0.0021 | - |
| 2.9927 | 50950 | 0.0008 | - |
| 2.9956 | 51000 | 0.0004 | - |
| 2.9985 | 51050 | 0.0001 | - |
| 3.0 | 51075 | - | 0.0483 |
| 3.0015 | 51100 | 0.0013 | - |
| 3.0044 | 51150 | 0.0001 | - |
| 3.0073 | 51200 | 0.0004 | - |
| 3.0103 | 51250 | 0.0013 | - |
| 3.0132 | 51300 | 0.0029 | - |
| 3.0162 | 51350 | 0.0001 | - |
| 3.0191 | 51400 | 0.0008 | - |
| 3.0220 | 51450 | 0.0001 | - |
| 3.0250 | 51500 | 0.0001 | - |
| 3.0279 | 51550 | 0.0008 | - |
| 3.0308 | 51600 | 0.0037 | - |
| 3.0338 | 51650 | 0.0033 | - |
| 3.0367 | 51700 | 0.0001 | - |
| 3.0396 | 51750 | 0.0006 | - |
| 3.0426 | 51800 | 0.0007 | - |
| 3.0455 | 51850 | 0.0005 | - |
| 3.0485 | 51900 | 0.0008 | - |
| 3.0514 | 51950 | 0.0006 | - |
| 3.0543 | 52000 | 0.0006 | - |
| 3.0573 | 52050 | 0.0005 | - |
| 3.0602 | 52100 | 0.001 | - |
| 3.0631 | 52150 | 0.0009 | - |
| 3.0661 | 52200 | 0.0009 | - |
| 3.0690 | 52250 | 0.0012 | - |
| 3.0720 | 52300 | 0.0025 | - |
| 3.0749 | 52350 | 0.0001 | - |
| 3.0778 | 52400 | 0.0005 | - |
| 3.0808 | 52450 | 0.0006 | - |
| 3.0837 | 52500 | 0.0011 | - |
| 3.0866 | 52550 | 0.001 | - |
| 3.0896 | 52600 | 0.0011 | - |
| 3.0925 | 52650 | 0.0011 | - |
| 3.0954 | 52700 | 0.0006 | - |
| 3.0984 | 52750 | 0.0009 | - |
| 3.1013 | 52800 | 0.001 | - |
| 3.1043 | 52850 | 0.0008 | - |
| 3.1072 | 52900 | 0.0006 | - |
| 3.1101 | 52950 | 0.0033 | - |
| 3.1131 | 53000 | 0.0005 | - |
| 3.1160 | 53050 | 0.001 | - |
| 3.1189 | 53100 | 0.006 | - |
| 3.1219 | 53150 | 0.0005 | - |
| 3.1248 | 53200 | 0.0023 | - |
| 3.1278 | 53250 | 0.0017 | - |
| 3.1307 | 53300 | 0.0001 | - |
| 3.1336 | 53350 | 0.0025 | - |
| 3.1366 | 53400 | 0.0001 | - |
| 3.1395 | 53450 | 0.0006 | - |
| 3.1424 | 53500 | 0.0014 | - |
| 3.1454 | 53550 | 0.0006 | - |
| 3.1483 | 53600 | 0.0004 | - |
| 3.1512 | 53650 | 0.0026 | - |
| 3.1542 | 53700 | 0.0015 | - |
| 3.1571 | 53750 | 0.001 | - |
| 3.1601 | 53800 | 0.0009 | - |
| 3.1630 | 53850 | 0.001 | - |
| 3.1659 | 53900 | 0.0009 | - |
| 3.1689 | 53950 | 0.0006 | - |
| 3.1718 | 54000 | 0.0005 | - |
| 3.1747 | 54050 | 0.0041 | - |
| 3.1777 | 54100 | 0.0006 | - |
| 3.1806 | 54150 | 0.0008 | - |
| 3.1836 | 54200 | 0.0011 | - |
| 3.1865 | 54250 | 0.0001 | - |
| 3.1894 | 54300 | 0.0005 | - |
| 3.1924 | 54350 | 0.0004 | - |
| 3.1953 | 54400 | 0.0005 | - |
| 3.1982 | 54450 | 0.0056 | - |
| 3.2012 | 54500 | 0.0024 | - |
| 3.2041 | 54550 | 0.0014 | - |
| 3.2070 | 54600 | 0.0016 | - |
| 3.2100 | 54650 | 0.0009 | - |
| 3.2129 | 54700 | 0.0005 | - |
| 3.2159 | 54750 | 0.0004 | - |
| 3.2188 | 54800 | 0.0006 | - |
| 3.2217 | 54850 | 0.0008 | - |
| 3.2247 | 54900 | 0.0006 | - |
| 3.2276 | 54950 | 0.001 | - |
| 3.2305 | 55000 | 0.0004 | - |
| 3.2335 | 55050 | 0.001 | - |
| 3.2364 | 55100 | 0.0008 | - |
| 3.2394 | 55150 | 0.0044 | - |
| 3.2423 | 55200 | 0.0029 | - |
| 3.2452 | 55250 | 0.0001 | - |
| 3.2482 | 55300 | 0.0012 | - |
| 3.2511 | 55350 | 0.0006 | - |
| 3.2540 | 55400 | 0.0001 | - |
| 3.2570 | 55450 | 0.0011 | - |
| 3.2599 | 55500 | 0.0005 | - |
| 3.2628 | 55550 | 0.0005 | - |
| 3.2658 | 55600 | 0.0001 | - |
| 3.2687 | 55650 | 0.0001 | - |
| 3.2717 | 55700 | 0.0008 | - |
| 3.2746 | 55750 | 0.001 | - |
| 3.2775 | 55800 | 0.0009 | - |
| 3.2805 | 55850 | 0.003 | - |
| 3.2834 | 55900 | 0.0019 | - |
| 3.2863 | 55950 | 0.0031 | - |
| 3.2893 | 56000 | 0.0012 | - |
| 3.2922 | 56050 | 0.0055 | - |
| 3.2952 | 56100 | 0.0007 | - |
| 3.2981 | 56150 | 0.0036 | - |
| 3.3010 | 56200 | 0.0001 | - |
| 3.3040 | 56250 | 0.0013 | - |
| 3.3069 | 56300 | 0.0004 | - |
| 3.3098 | 56350 | 0.0033 | - |
| 3.3128 | 56400 | 0.0001 | - |
| 3.3157 | 56450 | 0.0 | - |
| 3.3186 | 56500 | 0.0025 | - |
| 3.3216 | 56550 | 0.0021 | - |
| 3.3245 | 56600 | 0.0026 | - |
| 3.3275 | 56650 | 0.0011 | - |
| 3.3304 | 56700 | 0.0035 | - |
| 3.3333 | 56750 | 0.0006 | - |
| 3.3363 | 56800 | 0.0045 | - |
| 3.3392 | 56850 | 0.0006 | - |
| 3.3421 | 56900 | 0.0005 | - |
| 3.3451 | 56950 | 0.0007 | - |
| 3.3480 | 57000 | 0.0001 | - |
| 3.3510 | 57050 | 0.0039 | - |
| 3.3539 | 57100 | 0.0011 | - |
| 3.3568 | 57150 | 0.0006 | - |
| 3.3598 | 57200 | 0.0009 | - |
| 3.3627 | 57250 | 0.0009 | - |
| 3.3656 | 57300 | 0.0012 | - |
| 3.3686 | 57350 | 0.0013 | - |
| 3.3715 | 57400 | 0.0006 | - |
| 3.3744 | 57450 | 0.0001 | - |
| 3.3774 | 57500 | 0.0031 | - |
| 3.3803 | 57550 | 0.0004 | - |
| 3.3833 | 57600 | 0.0056 | - |
| 3.3862 | 57650 | 0.001 | - |
| 3.3891 | 57700 | 0.0 | - |
| 3.3921 | 57750 | 0.0006 | - |
| 3.3950 | 57800 | 0.0017 | - |
| 3.3979 | 57850 | 0.0014 | - |
| 3.4009 | 57900 | 0.0031 | - |
| 3.4038 | 57950 | 0.0001 | - |
| 3.4068 | 58000 | 0.0004 | - |
| 3.4097 | 58050 | 0.0001 | - |
| 3.4126 | 58100 | 0.0001 | - |
| 3.4156 | 58150 | 0.0007 | - |
| 3.4185 | 58200 | 0.0015 | - |
| 3.4214 | 58250 | 0.0015 | - |
| 3.4244 | 58300 | 0.0008 | - |
| 3.4273 | 58350 | 0.0007 | - |
| 3.4302 | 58400 | 0.0014 | - |
| 3.4332 | 58450 | 0.0006 | - |
| 3.4361 | 58500 | 0.0017 | - |
| 3.4391 | 58550 | 0.0005 | - |
| 3.4420 | 58600 | 0.0033 | - |
| 3.4449 | 58650 | 0.0001 | - |
| 3.4479 | 58700 | 0.0027 | - |
| 3.4508 | 58750 | 0.0009 | - |
| 3.4537 | 58800 | 0.0001 | - |
| 3.4567 | 58850 | 0.0001 | - |
| 3.4596 | 58900 | 0.0 | - |
| 3.4626 | 58950 | 0.0001 | - |
| 3.4655 | 59000 | 0.0 | - |
| 3.4684 | 59050 | 0.0003 | - |
| 3.4714 | 59100 | 0.0009 | - |
| 3.4743 | 59150 | 0.0001 | - |
| 3.4772 | 59200 | 0.0007 | - |
| 3.4802 | 59250 | 0.0027 | - |
| 3.4831 | 59300 | 0.001 | - |
| 3.4860 | 59350 | 0.0 | - |
| 3.4890 | 59400 | 0.0013 | - |
| 3.4919 | 59450 | 0.0001 | - |
| 3.4949 | 59500 | 0.0015 | - |
| 3.4978 | 59550 | 0.0008 | - |
| 3.5007 | 59600 | 0.0029 | - |
| 3.5037 | 59650 | 0.0 | - |
| 3.5066 | 59700 | 0.0 | - |
| 3.5095 | 59750 | 0.0008 | - |
| 3.5125 | 59800 | 0.0004 | - |
| 3.5154 | 59850 | 0.0 | - |
| 3.5184 | 59900 | 0.003 | - |
| 3.5213 | 59950 | 0.0014 | - |
| 3.5242 | 60000 | 0.001 | - |
| 3.5272 | 60050 | 0.0001 | - |
| 3.5301 | 60100 | 0.0015 | - |
| 3.5330 | 60150 | 0.0005 | - |
| 3.5360 | 60200 | 0.0007 | - |
| 3.5389 | 60250 | 0.0007 | - |
| 3.5419 | 60300 | 0.0026 | - |
| 3.5448 | 60350 | 0.0001 | - |
| 3.5477 | 60400 | 0.0008 | - |
| 3.5507 | 60450 | 0.0028 | - |
| 3.5536 | 60500 | 0.0004 | - |
| 3.5565 | 60550 | 0.0007 | - |
| 3.5595 | 60600 | 0.0013 | - |
| 3.5624 | 60650 | 0.0 | - |
| 3.5653 | 60700 | 0.0008 | - |
| 3.5683 | 60750 | 0.0007 | - |
| 3.5712 | 60800 | 0.0023 | - |
| 3.5742 | 60850 | 0.0004 | - |
| 3.5771 | 60900 | 0.0012 | - |
| 3.5800 | 60950 | 0.0027 | - |
| 3.5830 | 61000 | 0.0008 | - |
| 3.5859 | 61050 | 0.0009 | - |
| 3.5888 | 61100 | 0.0007 | - |
| 3.5918 | 61150 | 0.0017 | - |
| 3.5947 | 61200 | 0.0001 | - |
| 3.5977 | 61250 | 0.003 | - |
| 3.6006 | 61300 | 0.0044 | - |
| 3.6035 | 61350 | 0.0009 | - |
| 3.6065 | 61400 | 0.0043 | - |
| 3.6094 | 61450 | 0.0025 | - |
| 3.6123 | 61500 | 0.0032 | - |
| 3.6153 | 61550 | 0.0001 | - |
| 3.6182 | 61600 | 0.0004 | - |
| 3.6211 | 61650 | 0.0051 | - |
| 3.6241 | 61700 | 0.0031 | - |
| 3.6270 | 61750 | 0.0 | - |
| 3.6300 | 61800 | 0.0001 | - |
| 3.6329 | 61850 | 0.0001 | - |
| 3.6358 | 61900 | 0.0005 | - |
| 3.6388 | 61950 | 0.0 | - |
| 3.6417 | 62000 | 0.0011 | - |
| 3.6446 | 62050 | 0.0006 | - |
| 3.6476 | 62100 | 0.0008 | - |
| 3.6505 | 62150 | 0.0006 | - |
| 3.6535 | 62200 | 0.0006 | - |
| 3.6564 | 62250 | 0.0 | - |
| 3.6593 | 62300 | 0.001 | - |
| 3.6623 | 62350 | 0.0028 | - |
| 3.6652 | 62400 | 0.0001 | - |
| 3.6681 | 62450 | 0.0004 | - |
| 3.6711 | 62500 | 0.0044 | - |
| 3.6740 | 62550 | 0.0015 | - |
| 3.6769 | 62600 | 0.0001 | - |
| 3.6799 | 62650 | 0.0006 | - |
| 3.6828 | 62700 | 0.0046 | - |
| 3.6858 | 62750 | 0.0005 | - |
| 3.6887 | 62800 | 0.0 | - |
| 3.6916 | 62850 | 0.0005 | - |
| 3.6946 | 62900 | 0.001 | - |
| 3.6975 | 62950 | 0.0006 | - |
| 3.7004 | 63000 | 0.0026 | - |
| 3.7034 | 63050 | 0.0 | - |
| 3.7063 | 63100 | 0.0001 | - |
| 3.7093 | 63150 | 0.0001 | - |
| 3.7122 | 63200 | 0.0001 | - |
| 3.7151 | 63250 | 0.0038 | - |
| 3.7181 | 63300 | 0.0001 | - |
| 3.7210 | 63350 | 0.0029 | - |
| 3.7239 | 63400 | 0.0006 | - |
| 3.7269 | 63450 | 0.0035 | - |
| 3.7298 | 63500 | 0.0005 | - |
| 3.7327 | 63550 | 0.0 | - |
| 3.7357 | 63600 | 0.0009 | - |
| 3.7386 | 63650 | 0.0014 | - |
| 3.7416 | 63700 | 0.003 | - |
| 3.7445 | 63750 | 0.0001 | - |
| 3.7474 | 63800 | 0.0 | - |
| 3.7504 | 63850 | 0.0001 | - |
| 3.7533 | 63900 | 0.001 | - |
| 3.7562 | 63950 | 0.0045 | - |
| 3.7592 | 64000 | 0.004 | - |
| 3.7621 | 64050 | 0.0005 | - |
| 3.7651 | 64100 | 0.001 | - |
| 3.7680 | 64150 | 0.0005 | - |
| 3.7709 | 64200 | 0.0009 | - |
| 3.7739 | 64250 | 0.0001 | - |
| 3.7768 | 64300 | 0.0001 | - |
| 3.7797 | 64350 | 0.0015 | - |
| 3.7827 | 64400 | 0.0005 | - |
| 3.7856 | 64450 | 0.001 | - |
| 3.7885 | 64500 | 0.0001 | - |
| 3.7915 | 64550 | 0.001 | - |
| 3.7944 | 64600 | 0.0005 | - |
| 3.7974 | 64650 | 0.0001 | - |
| 3.8003 | 64700 | 0.001 | - |
| 3.8032 | 64750 | 0.0001 | - |
| 3.8062 | 64800 | 0.001 | - |
| 3.8091 | 64850 | 0.003 | - |
| 3.8120 | 64900 | 0.0027 | - |
| 3.8150 | 64950 | 0.0014 | - |
| 3.8179 | 65000 | 0.0005 | - |
| 3.8209 | 65050 | 0.0005 | - |
| 3.8238 | 65100 | 0.0009 | - |
| 3.8267 | 65150 | 0.0 | - |
| 3.8297 | 65200 | 0.0034 | - |
| 3.8326 | 65250 | 0.0005 | - |
| 3.8355 | 65300 | 0.0036 | - |
| 3.8385 | 65350 | 0.0014 | - |
| 3.8414 | 65400 | 0.0005 | - |
| 3.8443 | 65450 | 0.0004 | - |
| 3.8473 | 65500 | 0.0011 | - |
| 3.8502 | 65550 | 0.0001 | - |
| 3.8532 | 65600 | 0.0035 | - |
| 3.8561 | 65650 | 0.0006 | - |
| 3.8590 | 65700 | 0.0 | - |
| 3.8620 | 65750 | 0.0005 | - |
| 3.8649 | 65800 | 0.003 | - |
| 3.8678 | 65850 | 0.0005 | - |
| 3.8708 | 65900 | 0.0029 | - |
| 3.8737 | 65950 | 0.0011 | - |
| 3.8767 | 66000 | 0.0005 | - |
| 3.8796 | 66050 | 0.0005 | - |
| 3.8825 | 66100 | 0.001 | - |
| 3.8855 | 66150 | 0.0001 | - |
| 3.8884 | 66200 | 0.0001 | - |
| 3.8913 | 66250 | 0.0025 | - |
| 3.8943 | 66300 | 0.001 | - |
| 3.8972 | 66350 | 0.0036 | - |
| 3.9001 | 66400 | 0.003 | - |
| 3.9031 | 66450 | 0.0019 | - |
| 3.9060 | 66500 | 0.003 | - |
| 3.9090 | 66550 | 0.0006 | - |
| 3.9119 | 66600 | 0.0 | - |
| 3.9148 | 66650 | 0.0 | - |
| 3.9178 | 66700 | 0.0018 | - |
| 3.9207 | 66750 | 0.0005 | - |
| 3.9236 | 66800 | 0.001 | - |
| 3.9266 | 66850 | 0.0001 | - |
| 3.9295 | 66900 | 0.0005 | - |
| 3.9325 | 66950 | 0.0032 | - |
| 3.9354 | 67000 | 0.0033 | - |
| 3.9383 | 67050 | 0.001 | - |
| 3.9413 | 67100 | 0.0014 | - |
| 3.9442 | 67150 | 0.0 | - |
| 3.9471 | 67200 | 0.0005 | - |
| 3.9501 | 67250 | 0.0005 | - |
| 3.9530 | 67300 | 0.0035 | - |
| 3.9559 | 67350 | 0.0001 | - |
| 3.9589 | 67400 | 0.0015 | - |
| 3.9618 | 67450 | 0.001 | - |
| 3.9648 | 67500 | 0.0031 | - |
| 3.9677 | 67550 | 0.0001 | - |
| 3.9706 | 67600 | 0.0001 | - |
| 3.9736 | 67650 | 0.001 | - |
| 3.9765 | 67700 | 0.0059 | - |
| 3.9794 | 67750 | 0.0005 | - |
| 3.9824 | 67800 | 0.0005 | - |
| 3.9853 | 67850 | 0.0028 | - |
| 3.9883 | 67900 | 0.0015 | - |
| 3.9912 | 67950 | 0.0029 | - |
| 3.9941 | 68000 | 0.0033 | - |
| 3.9971 | 68050 | 0.001 | - |
| 4.0 | 68100 | 0.0005 | 0.0464 |
### Framework Versions
- Python: 3.11.11
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
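To make these two steps concrete, here is a hypothetical training sketch; the texts, labels, and sizes below are placeholders (the values actually used for this model are listed under Training Details).
```python
# Hypothetical SetFit training sketch; texts, labels, and sizes are placeholders.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

train_dataset = Dataset.from_dict({
    "text": ["Salut", "how to contact customer service", "gotta go"],
    "label": ["greet-hi", "website-information", "greet-good_bye"],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
args = TrainingArguments(batch_size=4, num_epochs=4)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning of the embedding body,
                 # then step 2: fitting the classification head on its features
```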
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| greet-who_are_you | <ul><li>'can you answer any question?'</li><li>'how smart are you?'</li><li>'are you a chatbot?'</li></ul> |
| general-questions | <ul><li>'how does adaptive learning personalize my experience'</li><li>'what is the significance of feedback in online learning'</li><li>'what is the role of certifications in career development'</li></ul> |
| feedback | <ul><li>'Excellent design and usability!'</li><li>'The course variety is amazing!'</li><li>'I experienced technical issues during live sessions.'</li></ul> |
| greet-good_bye | <ul><li>'gotta go'</li><li>'catch you later'</li><li>'take it easy'</li></ul> |
| greet-hi | <ul><li>'Coucou!'</li><li>'Salutations!'</li><li>'Salut'</li></ul> |
| recommendations | <ul><li>'what courses help with productivity skills'</li><li>'can you recommend some quick short-term courses'</li><li>'can you recommend some career-oriented courses'</li></ul> |
| website-information | <ul><li>'how to enable or disable notifications'</li><li>'where can I update my payment information'</li><li>'how to contact customer service'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9048 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("HussienAhmad/SFT_GradProject")
# Run inference
preds = model("au revoir")
```
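Assuming the standard SetFit API, the `LogisticRegression` head also exposes class probabilities; a brief hedged example:
```python
# Hedged example: per-class probabilities from the classification head.
probs = model.predict_proba(["au revoir", "how to contact customer service"])
print(probs.shape)  # (2, 7): one row per input, one column per class
```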
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 5.1950 | 11 |
| Label | Training Sample Count |
|:--------------------|:----------------------|
| greet-hi | 39 |
| greet-who_are_you | 37 |
| greet-good_bye | 38 |
| general-questions | 43 |
| recommendations | 45 |
| website-information | 43 |
| feedback | 37 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
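As a rough illustration (not the original training script), these hyperparameters map onto setfit's `TrainingArguments` as follows; tuple values give separate settings for the embedding phase and the classifier phase.
```python
# Hedged sketch: the hyperparameters above expressed as setfit TrainingArguments.
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(4, 4),
    num_epochs=(4, 4),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    margin=0.25,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    load_best_model_at_end=True,
)
```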
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.552 | - |
| 0.0029 | 50 | 0.2704 | - |
| 0.0059 | 100 | 0.2701 | - |
| 0.0088 | 150 | 0.2926 | - |
| 0.0117 | 200 | 0.2647 | - |
| 0.0147 | 250 | 0.2741 | - |
| 0.0176 | 300 | 0.2343 | - |
| 0.0206 | 350 | 0.2427 | - |
| 0.0235 | 400 | 0.241 | - |
| 0.0264 | 450 | 0.2624 | - |
| 0.0294 | 500 | 0.2097 | - |
| 0.0323 | 550 | 0.2112 | - |
| 0.0352 | 600 | 0.1741 | - |
| 0.0382 | 650 | 0.1871 | - |
| 0.0411 | 700 | 0.1554 | - |
| 0.0441 | 750 | 0.175 | - |
| 0.0470 | 800 | 0.1497 | - |
| 0.0499 | 850 | 0.1465 | - |
| 0.0529 | 900 | 0.1362 | - |
| 0.0558 | 950 | 0.1324 | - |
| 0.0587 | 1000 | 0.1349 | - |
| 0.0617 | 1050 | 0.1222 | - |
| 0.0646 | 1100 | 0.1129 | - |
| 0.0675 | 1150 | 0.1072 | - |
| 0.0705 | 1200 | 0.105 | - |
| 0.0734 | 1250 | 0.0929 | - |
| 0.0764 | 1300 | 0.0975 | - |
| 0.0793 | 1350 | 0.0849 | - |
| 0.0822 | 1400 | 0.0764 | - |
| 0.0852 | 1450 | 0.0579 | - |
| 0.0881 | 1500 | 0.0719 | - |
| 0.0910 | 1550 | 0.0567 | - |
| 0.0940 | 1600 | 0.0431 | - |
| 0.0969 | 1650 | 0.0564 | - |
| 0.0999 | 1700 | 0.0439 | - |
| 0.1028 | 1750 | 0.045 | - |
| 0.1057 | 1800 | 0.0584 | - |
| 0.1087 | 1850 | 0.0315 | - |
| 0.1116 | 1900 | 0.045 | - |
| 0.1145 | 1950 | 0.0247 | - |
| 0.1175 | 2000 | 0.0329 | - |
| 0.1204 | 2050 | 0.0261 | - |
| 0.1233 | 2100 | 0.0332 | - |
| 0.1263 | 2150 | 0.0319 | - |
| 0.1292 | 2200 | 0.0195 | - |
| 0.1322 | 2250 | 0.0344 | - |
| 0.1351 | 2300 | 0.0147 | - |
| 0.1380 | 2350 | 0.0157 | - |
| 0.1410 | 2400 | 0.0147 | - |
| 0.1439 | 2450 | 0.0186 | - |
| 0.1468 | 2500 | 0.0099 | - |
| 0.1498 | 2550 | 0.0146 | - |
| 0.1527 | 2600 | 0.0103 | - |
| 0.1557 | 2650 | 0.0221 | - |
| 0.1586 | 2700 | 0.0135 | - |
| 0.1615 | 2750 | 0.0145 | - |
| 0.1645 | 2800 | 0.0169 | - |
| 0.1674 | 2850 | 0.0133 | - |
| 0.1703 | 2900 | 0.0109 | - |
| 0.1733 | 2950 | 0.0096 | - |
| 0.1762 | 3000 | 0.0197 | - |
| 0.1791 | 3050 | 0.0086 | - |
| 0.1821 | 3100 | 0.0074 | - |
| 0.1850 | 3150 | 0.008 | - |
| 0.1880 | 3200 | 0.0077 | - |
| 0.1909 | 3250 | 0.0033 | - |
| 0.1938 | 3300 | 0.004 | - |
| 0.1968 | 3350 | 0.0029 | - |
| 0.1997 | 3400 | 0.0032 | - |
| 0.2026 | 3450 | 0.0123 | - |
| 0.2056 | 3500 | 0.0092 | - |
| 0.2085 | 3550 | 0.0028 | - |
| 0.2115 | 3600 | 0.005 | - |
| 0.2144 | 3650 | 0.0042 | - |
| 0.2173 | 3700 | 0.004 | - |
| 0.2203 | 3750 | 0.003 | - |
| 0.2232 | 3800 | 0.0093 | - |
| 0.2261 | 3850 | 0.0016 | - |
| 0.2291 | 3900 | 0.0075 | - |
| 0.2320 | 3950 | 0.0018 | - |
| 0.2349 | 4000 | 0.0019 | - |
| 0.2379 | 4050 | 0.0013 | - |
| 0.2408 | 4100 | 0.0054 | - |
| 0.2438 | 4150 | 0.0023 | - |
| 0.2467 | 4200 | 0.0015 | - |
| 0.2496 | 4250 | 0.006 | - |
| 0.2526 | 4300 | 0.0093 | - |
| 0.2555 | 4350 | 0.0026 | - |
| 0.2584 | 4400 | 0.0009 | - |
| 0.2614 | 4450 | 0.0055 | - |
| 0.2643 | 4500 | 0.0032 | - |
| 0.2673 | 4550 | 0.0043 | - |
| 0.2702 | 4600 | 0.005 | - |
| 0.2731 | 4650 | 0.0043 | - |
| 0.2761 | 4700 | 0.001 | - |
| 0.2790 | 4750 | 0.0081 | - |
| 0.2819 | 4800 | 0.0039 | - |
| 0.2849 | 4850 | 0.0037 | - |
| 0.2878 | 4900 | 0.0029 | - |
| 0.2907 | 4950 | 0.0019 | - |
| 0.2937 | 5000 | 0.0085 | - |
| 0.2966 | 5050 | 0.0024 | - |
| 0.2996 | 5100 | 0.0008 | - |
| 0.3025 | 5150 | 0.0006 | - |
| 0.3054 | 5200 | 0.0008 | - |
| 0.3084 | 5250 | 0.0047 | - |
| 0.3113 | 5300 | 0.0045 | - |
| 0.3142 | 5350 | 0.0051 | - |
| 0.3172 | 5400 | 0.0019 | - |
| 0.3201 | 5450 | 0.0005 | - |
| 0.3231 | 5500 | 0.0052 | - |
| 0.3260 | 5550 | 0.0095 | - |
| 0.3289 | 5600 | 0.0027 | - |
| 0.3319 | 5650 | 0.0089 | - |
| 0.3348 | 5700 | 0.0014 | - |
| 0.3377 | 5750 | 0.0005 | - |
| 0.3407 | 5800 | 0.0092 | - |
| 0.3436 | 5850 | 0.0005 | - |
| 0.3465 | 5900 | 0.0054 | - |
| 0.3495 | 5950 | 0.0072 | - |
| 0.3524 | 6000 | 0.012 | - |
| 0.3554 | 6050 | 0.0029 | - |
| 0.3583 | 6100 | 0.0016 | - |
| 0.3612 | 6150 | 0.0005 | - |
| 0.3642 | 6200 | 0.0006 | - |
| 0.3671 | 6250 | 0.0061 | - |
| 0.3700 | 6300 | 0.0007 | - |
| 0.3730 | 6350 | 0.0039 | - |
| 0.3759 | 6400 | 0.0061 | - |
| 0.3789 | 6450 | 0.004 | - |
| 0.3818 | 6500 | 0.0005 | - |
| 0.3847 | 6550 | 0.0027 | - |
| 0.3877 | 6600 | 0.0007 | - |
| 0.3906 | 6650 | 0.0051 | - |
| 0.3935 | 6700 | 0.0005 | - |
| 0.3965 | 6750 | 0.0049 | - |
| 0.3994 | 6800 | 0.007 | - |
| 0.4023 | 6850 | 0.0022 | - |
| 0.4053 | 6900 | 0.0033 | - |
| 0.4082 | 6950 | 0.0027 | - |
| 0.4112 | 7000 | 0.0004 | - |
| 0.4141 | 7050 | 0.0049 | - |
| 0.4170 | 7100 | 0.0031 | - |
| 0.4200 | 7150 | 0.0014 | - |
| 0.4229 | 7200 | 0.0007 | - |
| 0.4258 | 7250 | 0.0021 | - |
| 0.4288 | 7300 | 0.002 | - |
| 0.4317 | 7350 | 0.0015 | - |
| 0.4347 | 7400 | 0.0004 | - |
| 0.4376 | 7450 | 0.0028 | - |
| 0.4405 | 7500 | 0.0073 | - |
| 0.4435 | 7550 | 0.0006 | - |
| 0.4464 | 7600 | 0.0061 | - |
| 0.4493 | 7650 | 0.0062 | - |
| 0.4523 | 7700 | 0.0017 | - |
| 0.4552 | 7750 | 0.0012 | - |
| 0.4581 | 7800 | 0.0017 | - |
| 0.4611 | 7850 | 0.0013 | - |
| 0.4640 | 7900 | 0.0004 | - |
| 0.4670 | 7950 | 0.0013 | - |
| 0.4699 | 8000 | 0.0004 | - |
| 0.4728 | 8050 | 0.0021 | - |
| 0.4758 | 8100 | 0.0011 | - |
| 0.4787 | 8150 | 0.0012 | - |
| 0.4816 | 8200 | 0.0023 | - |
| 0.4846 | 8250 | 0.0027 | - |
| 0.4875 | 8300 | 0.001 | - |
| 0.4905 | 8350 | 0.0047 | - |
| 0.4934 | 8400 | 0.0011 | - |
| 0.4963 | 8450 | 0.0005 | - |
| 0.4993 | 8500 | 0.0008 | - |
| 0.5022 | 8550 | 0.0018 | - |
| 0.5051 | 8600 | 0.0003 | - |
| 0.5081 | 8650 | 0.0005 | - |
| 0.5110 | 8700 | 0.0018 | - |
| 0.5140 | 8750 | 0.0024 | - |
| 0.5169 | 8800 | 0.0006 | - |
| 0.5198 | 8850 | 0.001 | - |
| 0.5228 | 8900 | 0.0015 | - |
| 0.5257 | 8950 | 0.0003 | - |
| 0.5286 | 9000 | 0.0007 | - |
| 0.5316 | 9050 | 0.0003 | - |
| 0.5345 | 9100 | 0.0024 | - |
| 0.5374 | 9150 | 0.0005 | - |
| 0.5404 | 9200 | 0.0012 | - |
| 0.5433 | 9250 | 0.0152 | - |
| 0.5463 | 9300 | 0.0012 | - |
| 0.5492 | 9350 | 0.0023 | - |
| 0.5521 | 9400 | 0.0013 | - |
| 0.5551 | 9450 | 0.0006 | - |
| 0.5580 | 9500 | 0.0024 | - |
| 0.5609 | 9550 | 0.0026 | - |
| 0.5639 | 9600 | 0.0006 | - |
| 0.5668 | 9650 | 0.0021 | - |
| 0.5698 | 9700 | 0.0008 | - |
| 0.5727 | 9750 | 0.0003 | - |
| 0.5756 | 9800 | 0.0016 | - |
| 0.5786 | 9850 | 0.0004 | - |
| 0.5815 | 9900 | 0.0018 | - |
| 0.5844 | 9950 | 0.0039 | - |
| 0.5874 | 10000 | 0.002 | - |
| 0.5903 | 10050 | 0.001 | - |
| 0.5932 | 10100 | 0.0005 | - |
| 0.5962 | 10150 | 0.0034 | - |
| 0.5991 | 10200 | 0.0003 | - |
| 0.6021 | 10250 | 0.0002 | - |
| 0.6050 | 10300 | 0.0022 | - |
| 0.6079 | 10350 | 0.0016 | - |
| 0.6109 | 10400 | 0.0015 | - |
| 0.6138 | 10450 | 0.0016 | - |
| 0.6167 | 10500 | 0.0023 | - |
| 0.6197 | 10550 | 0.0004 | - |
| 0.6226 | 10600 | 0.0022 | - |
| 0.6256 | 10650 | 0.0018 | - |
| 0.6285 | 10700 | 0.0003 | - |
| 0.6314 | 10750 | 0.0002 | - |
| 0.6344 | 10800 | 0.0009 | - |
| 0.6373 | 10850 | 0.0003 | - |
| 0.6402 | 10900 | 0.0016 | - |
| 0.6432 | 10950 | 0.0035 | - |
| 0.6461 | 11000 | 0.0012 | - |
| 0.6490 | 11050 | 0.0013 | - |
| 0.6520 | 11100 | 0.0036 | - |
| 0.6549 | 11150 | 0.0003 | - |
| 0.6579 | 11200 | 0.0008 | - |
| 0.6608 | 11250 | 0.0006 | - |
| 0.6637 | 11300 | 0.0005 | - |
| 0.6667 | 11350 | 0.0003 | - |
| 0.6696 | 11400 | 0.0039 | - |
| 0.6725 | 11450 | 0.0061 | - |
| 0.6755 | 11500 | 0.0003 | - |
| 0.6784 | 11550 | 0.0002 | - |
| 0.6814 | 11600 | 0.002 | - |
| 0.6843 | 11650 | 0.0045 | - |
| 0.6872 | 11700 | 0.0003 | - |
| 0.6902 | 11750 | 0.0003 | - |
| 0.6931 | 11800 | 0.0019 | - |
| 0.6960 | 11850 | 0.0042 | - |
| 0.6990 | 11900 | 0.0009 | - |
| 0.7019 | 11950 | 0.0034 | - |
| 0.7048 | 12000 | 0.0009 | - |
| 0.7078 | 12050 | 0.0018 | - |
| 0.7107 | 12100 | 0.0013 | - |
| 0.7137 | 12150 | 0.0007 | - |
| 0.7166 | 12200 | 0.0002 | - |
| 0.7195 | 12250 | 0.0012 | - |
| 0.7225 | 12300 | 0.0024 | - |
| 0.7254 | 12350 | 0.0007 | - |
| 0.7283 | 12400 | 0.0001 | - |
| 0.7313 | 12450 | 0.0002 | - |
| 0.7342 | 12500 | 0.0002 | - |
| 0.7372 | 12550 | 0.0021 | - |
| 0.7401 | 12600 | 0.0017 | - |
| 0.7430 | 12650 | 0.0004 | - |
| 0.7460 | 12700 | 0.0007 | - |
| 0.7489 | 12750 | 0.0002 | - |
| 0.7518 | 12800 | 0.0033 | - |
| 0.7548 | 12850 | 0.0004 | - |
| 0.7577 | 12900 | 0.0011 | - |
| 0.7606 | 12950 | 0.0039 | - |
| 0.7636 | 13000 | 0.0005 | - |
| 0.7665 | 13050 | 0.0009 | - |
| 0.7695 | 13100 | 0.0021 | - |
| 0.7724 | 13150 | 0.0035 | - |
| 0.7753 | 13200 | 0.0036 | - |
| 0.7783 | 13250 | 0.0005 | - |
| 0.7812 | 13300 | 0.0002 | - |
| 0.7841 | 13350 | 0.0006 | - |
| 0.7871 | 13400 | 0.0002 | - |
| 0.7900 | 13450 | 0.0002 | - |
| 0.7930 | 13500 | 0.004 | - |
| 0.7959 | 13550 | 0.0008 | - |
| 0.7988 | 13600 | 0.0009 | - |
| 0.8018 | 13650 | 0.0032 | - |
| 0.8047 | 13700 | 0.001 | - |
| 0.8076 | 13750 | 0.0013 | - |
| 0.8106 | 13800 | 0.0021 | - |
| 0.8135 | 13850 | 0.0006 | - |
| 0.8164 | 13900 | 0.0023 | - |
| 0.8194 | 13950 | 0.0004 | - |
| 0.8223 | 14000 | 0.0002 | - |
| 0.8253 | 14050 | 0.0051 | - |
| 0.8282 | 14100 | 0.0014 | - |
| 0.8311 | 14150 | 0.0008 | - |
| 0.8341 | 14200 | 0.0027 | - |
| 0.8370 | 14250 | 0.0013 | - |
| 0.8399 | 14300 | 0.0017 | - |
| 0.8429 | 14350 | 0.0015 | - |
| 0.8458 | 14400 | 0.0002 | - |
| 0.8488 | 14450 | 0.0002 | - |
| 0.8517 | 14500 | 0.0033 | - |
| 0.8546 | 14550 | 0.0005 | - |
| 0.8576 | 14600 | 0.0005 | - |
| 0.8605 | 14650 | 0.0047 | - |
| 0.8634 | 14700 | 0.0012 | - |
| 0.8664 | 14750 | 0.0013 | - |
| 0.8693 | 14800 | 0.0034 | - |
| 0.8722 | 14850 | 0.0009 | - |
| 0.8752 | 14900 | 0.0054 | - |
| 0.8781 | 14950 | 0.0006 | - |
| 0.8811 | 15000 | 0.0003 | - |
| 0.8840 | 15050 | 0.002 | - |
| 0.8869 | 15100 | 0.0017 | - |
| 0.8899 | 15150 | 0.0002 | - |
| 0.8928 | 15200 | 0.0043 | - |
| 0.8957 | 15250 | 0.0012 | - |
| 0.8987 | 15300 | 0.0005 | - |
| 0.9016 | 15350 | 0.0012 | - |
| 0.9046 | 15400 | 0.0007 | - |
| 0.9075 | 15450 | 0.0002 | - |
| 0.9104 | 15500 | 0.0027 | - |
| 0.9134 | 15550 | 0.0002 | - |
| 0.9163 | 15600 | 0.0012 | - |
| 0.9192 | 15650 | 0.0039 | - |
| 0.9222 | 15700 | 0.001 | - |
| 0.9251 | 15750 | 0.0038 | - |
| 0.9280 | 15800 | 0.0019 | - |
| 0.9310 | 15850 | 0.0028 | - |
| 0.9339 | 15900 | 0.0001 | - |
| 0.9369 | 15950 | 0.0011 | - |
| 0.9398 | 16000 | 0.001 | - |
| 0.9427 | 16050 | 0.0004 | - |
| 0.9457 | 16100 | 0.0012 | - |
| 0.9486 | 16150 | 0.0019 | - |
| 0.9515 | 16200 | 0.0035 | - |
| 0.9545 | 16250 | 0.005 | - |
| 0.9574 | 16300 | 0.001 | - |
| 0.9604 | 16350 | 0.0021 | - |
| 0.9633 | 16400 | 0.0002 | - |
| 0.9662 | 16450 | 0.0013 | - |
| 0.9692 | 16500 | 0.0002 | - |
| 0.9721 | 16550 | 0.0005 | - |
| 0.9750 | 16600 | 0.0012 | - |
| 0.9780 | 16650 | 0.0009 | - |
| 0.9809 | 16700 | 0.0012 | - |
| 0.9838 | 16750 | 0.0028 | - |
| 0.9868 | 16800 | 0.0046 | - |
| 0.9897 | 16850 | 0.0029 | - |
| 0.9927 | 16900 | 0.0001 | - |
| 0.9956 | 16950 | 0.0043 | - |
| 0.9985 | 17000 | 0.0038 | - |
| 1.0 | 17025 | - | 0.0476 |
| 1.0015 | 17050 | 0.0007 | - |
| 1.0044 | 17100 | 0.0001 | - |
| 1.0073 | 17150 | 0.0001 | - |
| 1.0103 | 17200 | 0.0011 | - |
| 1.0132 | 17250 | 0.0023 | - |
| 1.0162 | 17300 | 0.0013 | - |
| 1.0191 | 17350 | 0.0007 | - |
| 1.0220 | 17400 | 0.0004 | - |
| 1.0250 | 17450 | 0.0001 | - |
| 1.0279 | 17500 | 0.0021 | - |
| 1.0308 | 17550 | 0.0001 | - |
| 1.0338 | 17600 | 0.0005 | - |
| 1.0367 | 17650 | 0.0009 | - |
| 1.0396 | 17700 | 0.0001 | - |
| 1.0426 | 17750 | 0.0018 | - |
| 1.0455 | 17800 | 0.0012 | - |
| 1.0485 | 17850 | 0.0005 | - |
| 1.0514 | 17900 | 0.0006 | - |
| 1.0543 | 17950 | 0.0005 | - |
| 1.0573 | 18000 | 0.0003 | - |
| 1.0602 | 18050 | 0.0041 | - |
| 1.0631 | 18100 | 0.0004 | - |
| 1.0661 | 18150 | 0.0001 | - |
| 1.0690 | 18200 | 0.0008 | - |
| 1.0720 | 18250 | 0.0002 | - |
| 1.0749 | 18300 | 0.0007 | - |
| 1.0778 | 18350 | 0.0016 | - |
| 1.0808 | 18400 | 0.002 | - |
| 1.0837 | 18450 | 0.0004 | - |
| 1.0866 | 18500 | 0.0016 | - |
| 1.0896 | 18550 | 0.0015 | - |
| 1.0925 | 18600 | 0.0003 | - |
| 1.0954 | 18650 | 0.003 | - |
| 1.0984 | 18700 | 0.0011 | - |
| 1.1013 | 18750 | 0.0019 | - |
| 1.1043 | 18800 | 0.0032 | - |
| 1.1072 | 18850 | 0.0001 | - |
| 1.1101 | 18900 | 0.0005 | - |
| 1.1131 | 18950 | 0.0012 | - |
| 1.1160 | 19000 | 0.0019 | - |
| 1.1189 | 19050 | 0.0001 | - |
| 1.1219 | 19100 | 0.0029 | - |
| 1.1248 | 19150 | 0.005 | - |
| 1.1278 | 19200 | 0.0001 | - |
| 1.1307 | 19250 | 0.0001 | - |
| 1.1336 | 19300 | 0.005 | - |
| 1.1366 | 19350 | 0.0021 | - |
| 1.1395 | 19400 | 0.0014 | - |
| 1.1424 | 19450 | 0.0002 | - |
| 1.1454 | 19500 | 0.0049 | - |
| 1.1483 | 19550 | 0.0002 | - |
| 1.1512 | 19600 | 0.001 | - |
| 1.1542 | 19650 | 0.0027 | - |
| 1.1571 | 19700 | 0.0033 | - |
| 1.1601 | 19750 | 0.0023 | - |
| 1.1630 | 19800 | 0.0009 | - |
| 1.1659 | 19850 | 0.0021 | - |
| 1.1689 | 19900 | 0.0003 | - |
| 1.1718 | 19950 | 0.0009 | - |
| 1.1747 | 20000 | 0.0001 | - |
| 1.1777 | 20050 | 0.0001 | - |
| 1.1806 | 20100 | 0.0007 | - |
| 1.1836 | 20150 | 0.0002 | - |
| 1.1865 | 20200 | 0.0001 | - |
| 1.1894 | 20250 | 0.0013 | - |
| 1.1924 | 20300 | 0.0001 | - |
| 1.1953 | 20350 | 0.0001 | - |
| 1.1982 | 20400 | 0.0016 | - |
| 1.2012 | 20450 | 0.0046 | - |
| 1.2041 | 20500 | 0.0007 | - |
| 1.2070 | 20550 | 0.0035 | - |
| 1.2100 | 20600 | 0.0006 | - |
| 1.2129 | 20650 | 0.0001 | - |
| 1.2159 | 20700 | 0.0007 | - |
| 1.2188 | 20750 | 0.0004 | - |
| 1.2217 | 20800 | 0.0054 | - |
| 1.2247 | 20850 | 0.0012 | - |
| 1.2276 | 20900 | 0.0002 | - |
| 1.2305 | 20950 | 0.0008 | - |
| 1.2335 | 21000 | 0.0001 | - |
| 1.2364 | 21050 | 0.0008 | - |
| 1.2394 | 21100 | 0.0001 | - |
| 1.2423 | 21150 | 0.0039 | - |
| 1.2452 | 21200 | 0.0001 | - |
| 1.2482 | 21250 | 0.0038 | - |
| 1.2511 | 21300 | 0.0012 | - |
| 1.2540 | 21350 | 0.0019 | - |
| 1.2570 | 21400 | 0.0012 | - |
| 1.2599 | 21450 | 0.0001 | - |
| 1.2628 | 21500 | 0.0006 | - |
| 1.2658 | 21550 | 0.0016 | - |
| 1.2687 | 21600 | 0.0001 | - |
| 1.2717 | 21650 | 0.001 | - |
| 1.2746 | 21700 | 0.0005 | - |
| 1.2775 | 21750 | 0.0021 | - |
| 1.2805 | 21800 | 0.0001 | - |
| 1.2834 | 21850 | 0.0003 | - |
| 1.2863 | 21900 | 0.0004 | - |
| 1.2893 | 21950 | 0.0001 | - |
| 1.2922 | 22000 | 0.0003 | - |
| 1.2952 | 22050 | 0.0003 | - |
| 1.2981 | 22100 | 0.0017 | - |
| 1.3010 | 22150 | 0.0004 | - |
| 1.3040 | 22200 | 0.0016 | - |
| 1.3069 | 22250 | 0.0009 | - |
| 1.3098 | 22300 | 0.0006 | - |
| 1.3128 | 22350 | 0.0002 | - |
| 1.3157 | 22400 | 0.0002 | - |
| 1.3186 | 22450 | 0.0006 | - |
| 1.3216 | 22500 | 0.0015 | - |
| 1.3245 | 22550 | 0.0019 | - |
| 1.3275 | 22600 | 0.0004 | - |
| 1.3304 | 22650 | 0.0017 | - |
| 1.3333 | 22700 | 0.0042 | - |
| 1.3363 | 22750 | 0.0011 | - |
| 1.3392 | 22800 | 0.0026 | - |
| 1.3421 | 22850 | 0.0036 | - |
| 1.3451 | 22900 | 0.0014 | - |
| 1.3480 | 22950 | 0.0002 | - |
| 1.3510 | 23000 | 0.0001 | - |
| 1.3539 | 23050 | 0.0001 | - |
| 1.3568 | 23100 | 0.0016 | - |
| 1.3598 | 23150 | 0.0002 | - |
| 1.3627 | 23200 | 0.0022 | - |
| 1.3656 | 23250 | 0.0001 | - |
| 1.3686 | 23300 | 0.0001 | - |
| 1.3715 | 23350 | 0.0011 | - |
| 1.3744 | 23400 | 0.0013 | - |
| 1.3774 | 23450 | 0.0004 | - |
| 1.3803 | 23500 | 0.0013 | - |
| 1.3833 | 23550 | 0.0015 | - |
| 1.3862 | 23600 | 0.0003 | - |
| 1.3891 | 23650 | 0.004 | - |
| 1.3921 | 23700 | 0.0035 | - |
| 1.3950 | 23750 | 0.0019 | - |
| 1.3979 | 23800 | 0.0018 | - |
| 1.4009 | 23850 | 0.0017 | - |
| 1.4038 | 23900 | 0.0001 | - |
| 1.4068 | 23950 | 0.0008 | - |
| 1.4097 | 24000 | 0.0012 | - |
| 1.4126 | 24050 | 0.0001 | - |
| 1.4156 | 24100 | 0.0023 | - |
| 1.4185 | 24150 | 0.0004 | - |
| 1.4214 | 24200 | 0.0001 | - |
| 1.4244 | 24250 | 0.0001 | - |
| 1.4273 | 24300 | 0.0015 | - |
| 1.4302 | 24350 | 0.0005 | - |
| 1.4332 | 24400 | 0.0027 | - |
| 1.4361 | 24450 | 0.0017 | - |
| 1.4391 | 24500 | 0.0001 | - |
| 1.4420 | 24550 | 0.0012 | - |
| 1.4449 | 24600 | 0.0011 | - |
| 1.4479 | 24650 | 0.0035 | - |
| 1.4508 | 24700 | 0.0011 | - |
| 1.4537 | 24750 | 0.0008 | - |
| 1.4567 | 24800 | 0.0062 | - |
| 1.4596 | 24850 | 0.0006 | - |
| 1.4626 | 24900 | 0.0029 | - |
| 1.4655 | 24950 | 0.0001 | - |
| 1.4684 | 25000 | 0.0001 | - |
| 1.4714 | 25050 | 0.0006 | - |
| 1.4743 | 25100 | 0.0008 | - |
| 1.4772 | 25150 | 0.0035 | - |
| 1.4802 | 25200 | 0.0025 | - |
| 1.4831 | 25250 | 0.0001 | - |
| 1.4860 | 25300 | 0.0005 | - |
| 1.4890 | 25350 | 0.001 | - |
| 1.4919 | 25400 | 0.0036 | - |
| 1.4949 | 25450 | 0.0006 | - |
| 1.4978 | 25500 | 0.0001 | - |
| 1.5007 | 25550 | 0.0006 | - |
| 1.5037 | 25600 | 0.0015 | - |
| 1.5066 | 25650 | 0.0033 | - |
| 1.5095 | 25700 | 0.0037 | - |
| 1.5125 | 25750 | 0.0049 | - |
| 1.5154 | 25800 | 0.0007 | - |
| 1.5184 | 25850 | 0.0016 | - |
| 1.5213 | 25900 | 0.0002 | - |
| 1.5242 | 25950 | 0.0001 | - |
| 1.5272 | 26000 | 0.002 | - |
| 1.5301 | 26050 | 0.0006 | - |
| 1.5330 | 26100 | 0.0011 | - |
| 1.5360 | 26150 | 0.0016 | - |
| 1.5389 | 26200 | 0.004 | - |
| 1.5419 | 26250 | 0.0016 | - |
| 1.5448 | 26300 | 0.0066 | - |
| 1.5477 | 26350 | 0.0022 | - |
| 1.5507 | 26400 | 0.0008 | - |
| 1.5536 | 26450 | 0.0047 | - |
| 1.5565 | 26500 | 0.0006 | - |
| 1.5595 | 26550 | 0.0018 | - |
| 1.5624 | 26600 | 0.006 | - |
| 1.5653 | 26650 | 0.0002 | - |
| 1.5683 | 26700 | 0.0004 | - |
| 1.5712 | 26750 | 0.0007 | - |
| 1.5742 | 26800 | 0.0013 | - |
| 1.5771 | 26850 | 0.0013 | - |
| 1.5800 | 26900 | 0.0027 | - |
| 1.5830 | 26950 | 0.0008 | - |
| 1.5859 | 27000 | 0.0004 | - |
| 1.5888 | 27050 | 0.0011 | - |
| 1.5918 | 27100 | 0.0007 | - |
| 1.5947 | 27150 | 0.0035 | - |
| 1.5977 | 27200 | 0.005 | - |
| 1.6006 | 27250 | 0.0018 | - |
| 1.6035 | 27300 | 0.0001 | - |
| 1.6065 | 27350 | 0.0025 | - |
| 1.6094 | 27400 | 0.005 | - |
| 1.6123 | 27450 | 0.0009 | - |
| 1.6153 | 27500 | 0.0009 | - |
| 1.6182 | 27550 | 0.0039 | - |
| 1.6211 | 27600 | 0.0027 | - |
| 1.6241 | 27650 | 0.0018 | - |
| 1.6270 | 27700 | 0.0047 | - |
| 1.6300 | 27750 | 0.0001 | - |
| 1.6329 | 27800 | 0.0001 | - |
| 1.6358 | 27850 | 0.0001 | - |
| 1.6388 | 27900 | 0.0021 | - |
| 1.6417 | 27950 | 0.0002 | - |
| 1.6446 | 28000 | 0.0001 | - |
| 1.6476 | 28050 | 0.0012 | - |
| 1.6505 | 28100 | 0.0018 | - |
| 1.6535 | 28150 | 0.0005 | - |
| 1.6564 | 28200 | 0.0012 | - |
| 1.6593 | 28250 | 0.0049 | - |
| 1.6623 | 28300 | 0.0005 | - |
| 1.6652 | 28350 | 0.0009 | - |
| 1.6681 | 28400 | 0.0009 | - |
| 1.6711 | 28450 | 0.003 | - |
| 1.6740 | 28500 | 0.0007 | - |
| 1.6769 | 28550 | 0.0014 | - |
| 1.6799 | 28600 | 0.0005 | - |
| 1.6828 | 28650 | 0.0033 | - |
| 1.6858 | 28700 | 0.0018 | - |
| 1.6887 | 28750 | 0.0004 | - |
| 1.6916 | 28800 | 0.0014 | - |
| 1.6946 | 28850 | 0.0032 | - |
| 1.6975 | 28900 | 0.0037 | - |
| 1.7004 | 28950 | 0.006 | - |
| 1.7034 | 29000 | 0.0007 | - |
| 1.7063 | 29050 | 0.0004 | - |
| 1.7093 | 29100 | 0.0001 | - |
| 1.7122 | 29150 | 0.0001 | - |
| 1.7151 | 29200 | 0.001 | - |
| 1.7181 | 29250 | 0.001 | - |
| 1.7210 | 29300 | 0.0001 | - |
| 1.7239 | 29350 | 0.0014 | - |
| 1.7269 | 29400 | 0.0011 | - |
| 1.7298 | 29450 | 0.001 | - |
| 1.7327 | 29500 | 0.0005 | - |
| 1.7357 | 29550 | 0.001 | - |
| 1.7386 | 29600 | 0.0001 | - |
| 1.7416 | 29650 | 0.0001 | - |
| 1.7445 | 29700 | 0.0001 | - |
| 1.7474 | 29750 | 0.0006 | - |
| 1.7504 | 29800 | 0.0005 | - |
| 1.7533 | 29850 | 0.0005 | - |
| 1.7562 | 29900 | 0.001 | - |
| 1.7592 | 29950 | 0.0009 | - |
| 1.7621 | 30000 | 0.0008 | - |
| 1.7651 | 30050 | 0.0036 | - |
| 1.7680 | 30100 | 0.0078 | - |
| 1.7709 | 30150 | 0.0044 | - |
| 1.7739 | 30200 | 0.0006 | - |
| 1.7768 | 30250 | 0.0006 | - |
| 1.7797 | 30300 | 0.0007 | - |
| 1.7827 | 30350 | 0.003 | - |
| 1.7856 | 30400 | 0.0007 | - |
| 1.7885 | 30450 | 0.0006 | - |
| 1.7915 | 30500 | 0.0005 | - |
| 1.7944 | 30550 | 0.0041 | - |
| 1.7974 | 30600 | 0.0006 | - |
| 1.8003 | 30650 | 0.0005 | - |
| 1.8032 | 30700 | 0.0017 | - |
| 1.8062 | 30750 | 0.0006 | - |
| 1.8091 | 30800 | 0.0009 | - |
| 1.8120 | 30850 | 0.001 | - |
| 1.8150 | 30900 | 0.004 | - |
| 1.8179 | 30950 | 0.0014 | - |
| 1.8209 | 31000 | 0.0022 | - |
| 1.8238 | 31050 | 0.0012 | - |
| 1.8267 | 31100 | 0.0001 | - |
| 1.8297 | 31150 | 0.001 | - |
| 1.8326 | 31200 | 0.0014 | - |
| 1.8355 | 31250 | 0.0045 | - |
| 1.8385 | 31300 | 0.0012 | - |
| 1.8414 | 31350 | 0.0024 | - |
| 1.8443 | 31400 | 0.0007 | - |
| 1.8473 | 31450 | 0.0011 | - |
| 1.8502 | 31500 | 0.0012 | - |
| 1.8532 | 31550 | 0.0001 | - |
| 1.8561 | 31600 | 0.0006 | - |
| 1.8590 | 31650 | 0.0001 | - |
| 1.8620 | 31700 | 0.0031 | - |
| 1.8649 | 31750 | 0.0005 | - |
| 1.8678 | 31800 | 0.0015 | - |
| 1.8708 | 31850 | 0.0001 | - |
| 1.8737 | 31900 | 0.0009 | - |
| 1.8767 | 31950 | 0.0031 | - |
| 1.8796 | 32000 | 0.0001 | - |
| 1.8825 | 32050 | 0.0008 | - |
| 1.8855 | 32100 | 0.0001 | - |
| 1.8884 | 32150 | 0.0009 | - |
| 1.8913 | 32200 | 0.0006 | - |
| 1.8943 | 32250 | 0.0034 | - |
| 1.8972 | 32300 | 0.0006 | - |
| 1.9001 | 32350 | 0.0025 | - |
| 1.9031 | 32400 | 0.0006 | - |
| 1.9060 | 32450 | 0.0005 | - |
| 1.9090 | 32500 | 0.001 | - |
| 1.9119 | 32550 | 0.0009 | - |
| 1.9148 | 32600 | 0.0011 | - |
| 1.9178 | 32650 | 0.0001 | - |
| 1.9207 | 32700 | 0.0018 | - |
| 1.9236 | 32750 | 0.0001 | - |
| 1.9266 | 32800 | 0.0006 | - |
| 1.9295 | 32850 | 0.0007 | - |
| 1.9325 | 32900 | 0.0001 | - |
| 1.9354 | 32950 | 0.001 | - |
| 1.9383 | 33000 | 0.0024 | - |
| 1.9413 | 33050 | 0.0001 | - |
| 1.9442 | 33100 | 0.0001 | - |
| 1.9471 | 33150 | 0.0014 | - |
| 1.9501 | 33200 | 0.001 | - |
| 1.9530 | 33250 | 0.0007 | - |
| 1.9559 | 33300 | 0.0009 | - |
| 1.9589 | 33350 | 0.0001 | - |
| 1.9618 | 33400 | 0.0007 | - |
| 1.9648 | 33450 | 0.001 | - |
| 1.9677 | 33500 | 0.0001 | - |
| 1.9706 | 33550 | 0.0031 | - |
| 1.9736 | 33600 | 0.0005 | - |
| 1.9765 | 33650 | 0.0001 | - |
| 1.9794 | 33700 | 0.0001 | - |
| 1.9824 | 33750 | 0.0009 | - |
| 1.9853 | 33800 | 0.0008 | - |
| 1.9883 | 33850 | 0.0001 | - |
| 1.9912 | 33900 | 0.0027 | - |
| 1.9941 | 33950 | 0.0039 | - |
| 1.9971 | 34000 | 0.0063 | - |
| 2.0 | 34050 | 0.003 | 0.1022 |
| 2.0029 | 34100 | 0.0001 | - |
| 2.0059 | 34150 | 0.0021 | - |
| 2.0088 | 34200 | 0.0014 | - |
| 2.0117 | 34250 | 0.0001 | - |
| 2.0147 | 34300 | 0.0015 | - |
| 2.0176 | 34350 | 0.0001 | - |
| 2.0206 | 34400 | 0.003 | - |
| 2.0235 | 34450 | 0.0016 | - |
| 2.0264 | 34500 | 0.0022 | - |
| 2.0294 | 34550 | 0.0001 | - |
| 2.0323 | 34600 | 0.0001 | - |
| 2.0352 | 34650 | 0.001 | - |
| 2.0382 | 34700 | 0.0007 | - |
| 2.0411 | 34750 | 0.0009 | - |
| 2.0441 | 34800 | 0.0024 | - |
| 2.0470 | 34850 | 0.0009 | - |
| 2.0499 | 34900 | 0.0007 | - |
| 2.0529 | 34950 | 0.0011 | - |
| 2.0558 | 35000 | 0.0008 | - |
| 2.0587 | 35050 | 0.0007 | - |
| 2.0617 | 35100 | 0.0005 | - |
| 2.0646 | 35150 | 0.0001 | - |
| 2.0675 | 35200 | 0.0019 | - |
| 2.0705 | 35250 | 0.001 | - |
| 2.0734 | 35300 | 0.0048 | - |
| 2.0764 | 35350 | 0.0034 | - |
| 2.0793 | 35400 | 0.0012 | - |
| 2.0822 | 35450 | 0.0001 | - |
| 2.0852 | 35500 | 0.0001 | - |
| 2.0881 | 35550 | 0.0006 | - |
| 2.0910 | 35600 | 0.0013 | - |
| 2.0940 | 35650 | 0.0015 | - |
| 2.0969 | 35700 | 0.0004 | - |
| 2.0999 | 35750 | 0.0008 | - |
| 2.1028 | 35800 | 0.0004 | - |
| 2.1057 | 35850 | 0.0007 | - |
| 2.1087 | 35900 | 0.0001 | - |
| 2.1116 | 35950 | 0.0027 | - |
| 2.1145 | 36000 | 0.0064 | - |
| 2.1175 | 36050 | 0.0014 | - |
| 2.1204 | 36100 | 0.0012 | - |
| 2.1233 | 36150 | 0.0006 | - |
| 2.1263 | 36200 | 0.0008 | - |
| 2.1292 | 36250 | 0.0013 | - |
| 2.1322 | 36300 | 0.0028 | - |
| 2.1351 | 36350 | 0.0022 | - |
| 2.1380 | 36400 | 0.0015 | - |
| 2.1410 | 36450 | 0.0012 | - |
| 2.1439 | 36500 | 0.0012 | - |
| 2.1468 | 36550 | 0.0052 | - |
| 2.1498 | 36600 | 0.0022 | - |
| 2.1527 | 36650 | 0.0022 | - |
| 2.1557 | 36700 | 0.0004 | - |
| 2.1586 | 36750 | 0.0003 | - |
| 2.1615 | 36800 | 0.0003 | - |
| 2.1645 | 36850 | 0.0017 | - |
| 2.1674 | 36900 | 0.0007 | - |
| 2.1703 | 36950 | 0.0009 | - |
| 2.1733 | 37000 | 0.0011 | - |
| 2.1762 | 37050 | 0.0005 | - |
| 2.1791 | 37100 | 0.0022 | - |
| 2.1821 | 37150 | 0.002 | - |
| 2.1850 | 37200 | 0.0038 | - |
| 2.1880 | 37250 | 0.0042 | - |
| 2.1909 | 37300 | 0.0006 | - |
| 2.1938 | 37350 | 0.0005 | - |
| 2.1968 | 37400 | 0.0005 | - |
| 2.1997 | 37450 | 0.0005 | - |
| 2.2026 | 37500 | 0.0036 | - |
| 2.2056 | 37550 | 0.0001 | - |
| 2.2085 | 37600 | 0.0005 | - |
| 2.2115 | 37650 | 0.0011 | - |
| 2.2144 | 37700 | 0.0011 | - |
| 2.2173 | 37750 | 0.0001 | - |
| 2.2203 | 37800 | 0.0005 | - |
| 2.2232 | 37850 | 0.0009 | - |
| 2.2261 | 37900 | 0.0001 | - |
| 2.2291 | 37950 | 0.0001 | - |
| 2.2320 | 38000 | 0.0012 | - |
| 2.2349 | 38050 | 0.0004 | - |
| 2.2379 | 38100 | 0.0021 | - |
| 2.2408 | 38150 | 0.0001 | - |
| 2.2438 | 38200 | 0.0031 | - |
| 2.2467 | 38250 | 0.003 | - |
| 2.2496 | 38300 | 0.003 | - |
| 2.2526 | 38350 | 0.0005 | - |
| 2.2555 | 38400 | 0.0009 | - |
| 2.2584 | 38450 | 0.0001 | - |
| 2.2614 | 38500 | 0.0001 | - |
| 2.2643 | 38550 | 0.0006 | - |
| 2.2673 | 38600 | 0.0005 | - |
| 2.2702 | 38650 | 0.0013 | - |
| 2.2731 | 38700 | 0.0007 | - |
| 2.2761 | 38750 | 0.0008 | - |
| 2.2790 | 38800 | 0.0005 | - |
| 2.2819 | 38850 | 0.0015 | - |
| 2.2849 | 38900 | 0.0009 | - |
| 2.2878 | 38950 | 0.0006 | - |
| 2.2907 | 39000 | 0.0002 | - |
| 2.2937 | 39050 | 0.0001 | - |
| 2.2966 | 39100 | 0.0011 | - |
| 2.2996 | 39150 | 0.0005 | - |
| 2.3025 | 39200 | 0.001 | - |
| 2.3054 | 39250 | 0.0001 | - |
| 2.3084 | 39300 | 0.0013 | - |
| 2.3113 | 39350 | 0.001 | - |
| 2.3142 | 39400 | 0.0044 | - |
| 2.3172 | 39450 | 0.0011 | - |
| 2.3201 | 39500 | 0.0015 | - |
| 2.3231 | 39550 | 0.0005 | - |
| 2.3260 | 39600 | 0.0001 | - |
| 2.3289 | 39650 | 0.0001 | - |
| 2.3319 | 39700 | 0.0004 | - |
| 2.3348 | 39750 | 0.0001 | - |
| 2.3377 | 39800 | 0.001 | - |
| 2.3407 | 39850 | 0.0049 | - |
| 2.3436 | 39900 | 0.0006 | - |
| 2.3465 | 39950 | 0.0028 | - |
| 2.3495 | 40000 | 0.0001 | - |
| 2.3524 | 40050 | 0.0063 | - |
| 2.3554 | 40100 | 0.0001 | - |
| 2.3583 | 40150 | 0.0013 | - |
| 2.3612 | 40200 | 0.0024 | - |
| 2.3642 | 40250 | 0.0028 | - |
| 2.3671 | 40300 | 0.0039 | - |
| 2.3700 | 40350 | 0.0023 | - |
| 2.3730 | 40400 | 0.0009 | - |
| 2.3759 | 40450 | 0.0002 | - |
| 2.3789 | 40500 | 0.0007 | - |
| 2.3818 | 40550 | 0.0004 | - |
| 2.3847 | 40600 | 0.001 | - |
| 2.3877 | 40650 | 0.0001 | - |
| 2.3906 | 40700 | 0.0026 | - |
| 2.3935 | 40750 | 0.0001 | - |
| 2.3965 | 40800 | 0.0003 | - |
| 2.3994 | 40850 | 0.0034 | - |
| 2.4023 | 40900 | 0.0022 | - |
| 2.4053 | 40950 | 0.0006 | - |
| 2.4082 | 41000 | 0.0015 | - |
| 2.4112 | 41050 | 0.0012 | - |
| 2.4141 | 41100 | 0.0006 | - |
| 2.4170 | 41150 | 0.0003 | - |
| 2.4200 | 41200 | 0.0001 | - |
| 2.4229 | 41250 | 0.0013 | - |
| 2.4258 | 41300 | 0.0001 | - |
| 2.4288 | 41350 | 0.0021 | - |
| 2.4317 | 41400 | 0.0001 | - |
| 2.4347 | 41450 | 0.0003 | - |
| 2.4376 | 41500 | 0.001 | - |
| 2.4405 | 41550 | 0.0012 | - |
| 2.4435 | 41600 | 0.0032 | - |
| 2.4464 | 41650 | 0.0036 | - |
| 2.4493 | 41700 | 0.0008 | - |
| 2.4523 | 41750 | 0.0002 | - |
| 2.4552 | 41800 | 0.0032 | - |
| 2.4581 | 41850 | 0.0004 | - |
| 2.4611 | 41900 | 0.0014 | - |
| 2.4640 | 41950 | 0.001 | - |
| 2.4670 | 42000 | 0.0007 | - |
| 2.4699 | 42050 | 0.0011 | - |
| 2.4728 | 42100 | 0.0057 | - |
| 2.4758 | 42150 | 0.0014 | - |
| 2.4787 | 42200 | 0.0011 | - |
| 2.4816 | 42250 | 0.0007 | - |
| 2.4846 | 42300 | 0.0036 | - |
| 2.4875 | 42350 | 0.0001 | - |
| 2.4905 | 42400 | 0.0005 | - |
| 2.4934 | 42450 | 0.0037 | - |
| 2.4963 | 42500 | 0.0005 | - |
| 2.4993 | 42550 | 0.0006 | - |
| 2.5022 | 42600 | 0.0032 | - |
| 2.5051 | 42650 | 0.0001 | - |
| 2.5081 | 42700 | 0.0001 | - |
| 2.5110 | 42750 | 0.001 | - |
| 2.5140 | 42800 | 0.001 | - |
| 2.5169 | 42850 | 0.0007 | - |
| 2.5198 | 42900 | 0.0014 | - |
| 2.5228 | 42950 | 0.0002 | - |
| 2.5257 | 43000 | 0.0001 | - |
| 2.5286 | 43050 | 0.0006 | - |
| 2.5316 | 43100 | 0.0001 | - |
| 2.5345 | 43150 | 0.0001 | - |
| 2.5374 | 43200 | 0.0001 | - |
| 2.5404 | 43250 | 0.0033 | - |
| 2.5433 | 43300 | 0.0021 | - |
| 2.5463 | 43350 | 0.0008 | - |
| 2.5492 | 43400 | 0.0028 | - |
| 2.5521 | 43450 | 0.0001 | - |
| 2.5551 | 43500 | 0.0007 | - |
| 2.5580 | 43550 | 0.0005 | - |
| 2.5609 | 43600 | 0.0014 | - |
| 2.5639 | 43650 | 0.0001 | - |
| 2.5668 | 43700 | 0.0008 | - |
| 2.5698 | 43750 | 0.0006 | - |
| 2.5727 | 43800 | 0.0044 | - |
| 2.5756 | 43850 | 0.0033 | - |
| 2.5786 | 43900 | 0.001 | - |
| 2.5815 | 43950 | 0.0047 | - |
| 2.5844 | 44000 | 0.0005 | - |
| 2.5874 | 44050 | 0.0006 | - |
| 2.5903 | 44100 | 0.0004 | - |
| 2.5932 | 44150 | 0.0027 | - |
| 2.5962 | 44200 | 0.0021 | - |
| 2.5991 | 44250 | 0.0009 | - |
| 2.6021 | 44300 | 0.0013 | - |
| 2.6050 | 44350 | 0.0021 | - |
| 2.6079 | 44400 | 0.0007 | - |
| 2.6109 | 44450 | 0.0018 | - |
| 2.6138 | 44500 | 0.0006 | - |
| 2.6167 | 44550 | 0.0006 | - |
| 2.6197 | 44600 | 0.0015 | - |
| 2.6226 | 44650 | 0.0027 | - |
| 2.6256 | 44700 | 0.0001 | - |
| 2.6285 | 44750 | 0.0005 | - |
| 2.6314 | 44800 | 0.001 | - |
| 2.6344 | 44850 | 0.0029 | - |
| 2.6373 | 44900 | 0.0014 | - |
| 2.6402 | 44950 | 0.0007 | - |
| 2.6432 | 45000 | 0.0011 | - |
| 2.6461 | 45050 | 0.0016 | - |
| 2.6490 | 45100 | 0.0017 | - |
| 2.6520 | 45150 | 0.0001 | - |
| 2.6549 | 45200 | 0.0026 | - |
| 2.6579 | 45250 | 0.0011 | - |
| 2.6608 | 45300 | 0.0063 | - |
| 2.6637 | 45350 | 0.0035 | - |
| 2.6667 | 45400 | 0.0001 | - |
| 2.6696 | 45450 | 0.0001 | - |
| 2.6725 | 45500 | 0.0027 | - |
| 2.6755 | 45550 | 0.0004 | - |
| 2.6784 | 45600 | 0.0041 | - |
| 2.6814 | 45650 | 0.0024 | - |
| 2.6843 | 45700 | 0.0001 | - |
| 2.6872 | 45750 | 0.0037 | - |
| 2.6902 | 45800 | 0.0005 | - |
| 2.6931 | 45850 | 0.0001 | - |
| 2.6960 | 45900 | 0.0015 | - |
| 2.6990 | 45950 | 0.0001 | - |
| 2.7019 | 46000 | 0.0048 | - |
| 2.7048 | 46050 | 0.0007 | - |
| 2.7078 | 46100 | 0.0026 | - |
| 2.7107 | 46150 | 0.0005 | - |
| 2.7137 | 46200 | 0.0009 | - |
| 2.7166 | 46250 | 0.001 | - |
| 2.7195 | 46300 | 0.0001 | - |
| 2.7225 | 46350 | 0.0004 | - |
| 2.7254 | 46400 | 0.0011 | - |
| 2.7283 | 46450 | 0.0018 | - |
| 2.7313 | 46500 | 0.0005 | - |
| 2.7342 | 46550 | 0.0006 | - |
| 2.7372 | 46600 | 0.0001 | - |
| 2.7401 | 46650 | 0.0001 | - |
| 2.7430 | 46700 | 0.0031 | - |
| 2.7460 | 46750 | 0.0001 | - |
| 2.7489 | 46800 | 0.0005 | - |
| 2.7518 | 46850 | 0.0032 | - |
| 2.7548 | 46900 | 0.0006 | - |
| 2.7577 | 46950 | 0.0001 | - |
| 2.7606 | 47000 | 0.0006 | - |
| 2.7636 | 47050 | 0.0004 | - |
| 2.7665 | 47100 | 0.0001 | - |
| 2.7695 | 47150 | 0.0001 | - |
| 2.7724 | 47200 | 0.0001 | - |
| 2.7753 | 47250 | 0.0001 | - |
| 2.7783 | 47300 | 0.0005 | - |
| 2.7812 | 47350 | 0.0005 | - |
| 2.7841 | 47400 | 0.0011 | - |
| 2.7871 | 47450 | 0.0005 | - |
| 2.7900 | 47500 | 0.0001 | - |
| 2.7930 | 47550 | 0.0001 | - |
| 2.7959 | 47600 | 0.0031 | - |
| 2.7988 | 47650 | 0.0006 | - |
| 2.8018 | 47700 | 0.0001 | - |
| 2.8047 | 47750 | 0.0012 | - |
| 2.8076 | 47800 | 0.0005 | - |
| 2.8106 | 47850 | 0.0014 | - |
| 2.8135 | 47900 | 0.0004 | - |
| 2.8164 | 47950 | 0.0007 | - |
| 2.8194 | 48000 | 0.001 | - |
| 2.8223 | 48050 | 0.0006 | - |
| 2.8253 | 48100 | 0.0005 | - |
| 2.8282 | 48150 | 0.0009 | - |
| 2.8311 | 48200 | 0.0024 | - |
| 2.8341 | 48250 | 0.0033 | - |
| 2.8370 | 48300 | 0.0001 | - |
| 2.8399 | 48350 | 0.001 | - |
| 2.8429 | 48400 | 0.0006 | - |
| 2.8458 | 48450 | 0.0004 | - |
| 2.8488 | 48500 | 0.001 | - |
| 2.8517 | 48550 | 0.0011 | - |
| 2.8546 | 48600 | 0.0065 | - |
| 2.8576 | 48650 | 0.0008 | - |
| 2.8605 | 48700 | 0.0029 | - |
| 2.8634 | 48750 | 0.002 | - |
| 2.8664 | 48800 | 0.0001 | - |
| 2.8693 | 48850 | 0.0001 | - |
| 2.8722 | 48900 | 0.0084 | - |
| 2.8752 | 48950 | 0.0025 | - |
| 2.8781 | 49000 | 0.0027 | - |
| 2.8811 | 49050 | 0.0005 | - |
| 2.8840 | 49100 | 0.0011 | - |
| 2.8869 | 49150 | 0.0005 | - |
| 2.8899 | 49200 | 0.0041 | - |
| 2.8928 | 49250 | 0.0015 | - |
| 2.8957 | 49300 | 0.0032 | - |
| 2.8987 | 49350 | 0.0028 | - |
| 2.9016 | 49400 | 0.0004 | - |
| 2.9046 | 49450 | 0.001 | - |
| 2.9075 | 49500 | 0.0001 | - |
| 2.9104 | 49550 | 0.0001 | - |
| 2.9134 | 49600 | 0.0041 | - |
| 2.9163 | 49650 | 0.0005 | - |
| 2.9192 | 49700 | 0.0005 | - |
| 2.9222 | 49750 | 0.0001 | - |
| 2.9251 | 49800 | 0.0005 | - |
| 2.9280 | 49850 | 0.0001 | - |
| 2.9310 | 49900 | 0.0016 | - |
| 2.9339 | 49950 | 0.0013 | - |
| 2.9369 | 50000 | 0.0008 | - |
| 2.9398 | 50050 | 0.0005 | - |
| 2.9427 | 50100 | 0.0005 | - |
| 2.9457 | 50150 | 0.0004 | - |
| 2.9486 | 50200 | 0.0001 | - |
| 2.9515 | 50250 | 0.0006 | - |
| 2.9545 | 50300 | 0.0007 | - |
| 2.9574 | 50350 | 0.0039 | - |
| 2.9604 | 50400 | 0.0014 | - |
| 2.9633 | 50450 | 0.0004 | - |
| 2.9662 | 50500 | 0.0001 | - |
| 2.9692 | 50550 | 0.0007 | - |
| 2.9721 | 50600 | 0.0001 | - |
| 2.9750 | 50650 | 0.0001 | - |
| 2.9780 | 50700 | 0.0024 | - |
| 2.9809 | 50750 | 0.0052 | - |
| 2.9838 | 50800 | 0.0032 | - |
| 2.9868 | 50850 | 0.0011 | - |
| 2.9897 | 50900 | 0.0021 | - |
| 2.9927 | 50950 | 0.0008 | - |
| 2.9956 | 51000 | 0.0004 | - |
| 2.9985 | 51050 | 0.0001 | - |
| 3.0 | 51075 | - | 0.0483 |
| 3.0015 | 51100 | 0.0013 | - |
| 3.0044 | 51150 | 0.0001 | - |
| 3.0073 | 51200 | 0.0004 | - |
| 3.0103 | 51250 | 0.0013 | - |
| 3.0132 | 51300 | 0.0029 | - |
| 3.0162 | 51350 | 0.0001 | - |
| 3.0191 | 51400 | 0.0008 | - |
| 3.0220 | 51450 | 0.0001 | - |
| 3.0250 | 51500 | 0.0001 | - |
| 3.0279 | 51550 | 0.0008 | - |
| 3.0308 | 51600 | 0.0037 | - |
| 3.0338 | 51650 | 0.0033 | - |
| 3.0367 | 51700 | 0.0001 | - |
| 3.0396 | 51750 | 0.0006 | - |
| 3.0426 | 51800 | 0.0007 | - |
| 3.0455 | 51850 | 0.0005 | - |
| 3.0485 | 51900 | 0.0008 | - |
| 3.0514 | 51950 | 0.0006 | - |
| 3.0543 | 52000 | 0.0006 | - |
| 3.0573 | 52050 | 0.0005 | - |
| 3.0602 | 52100 | 0.001 | - |
| 3.0631 | 52150 | 0.0009 | - |
| 3.0661 | 52200 | 0.0009 | - |
| 3.0690 | 52250 | 0.0012 | - |
| 3.0720 | 52300 | 0.0025 | - |
| 3.0749 | 52350 | 0.0001 | - |
| 3.0778 | 52400 | 0.0005 | - |
| 3.0808 | 52450 | 0.0006 | - |
| 3.0837 | 52500 | 0.0011 | - |
| 3.0866 | 52550 | 0.001 | - |
| 3.0896 | 52600 | 0.0011 | - |
| 3.0925 | 52650 | 0.0011 | - |
| 3.0954 | 52700 | 0.0006 | - |
| 3.0984 | 52750 | 0.0009 | - |
| 3.1013 | 52800 | 0.001 | - |
| 3.1043 | 52850 | 0.0008 | - |
| 3.1072 | 52900 | 0.0006 | - |
| 3.1101 | 52950 | 0.0033 | - |
| 3.1131 | 53000 | 0.0005 | - |
| 3.1160 | 53050 | 0.001 | - |
| 3.1189 | 53100 | 0.006 | - |
| 3.1219 | 53150 | 0.0005 | - |
| 3.1248 | 53200 | 0.0023 | - |
| 3.1278 | 53250 | 0.0017 | - |
| 3.1307 | 53300 | 0.0001 | - |
| 3.1336 | 53350 | 0.0025 | - |
| 3.1366 | 53400 | 0.0001 | - |
| 3.1395 | 53450 | 0.0006 | - |
| 3.1424 | 53500 | 0.0014 | - |
| 3.1454 | 53550 | 0.0006 | - |
| 3.1483 | 53600 | 0.0004 | - |
| 3.1512 | 53650 | 0.0026 | - |
| 3.1542 | 53700 | 0.0015 | - |
| 3.1571 | 53750 | 0.001 | - |
| 3.1601 | 53800 | 0.0009 | - |
| 3.1630 | 53850 | 0.001 | - |
| 3.1659 | 53900 | 0.0009 | - |
| 3.1689 | 53950 | 0.0006 | - |
| 3.1718 | 54000 | 0.0005 | - |
| 3.1747 | 54050 | 0.0041 | - |
| 3.1777 | 54100 | 0.0006 | - |
| 3.1806 | 54150 | 0.0008 | - |
| 3.1836 | 54200 | 0.0011 | - |
| 3.1865 | 54250 | 0.0001 | - |
| 3.1894 | 54300 | 0.0005 | - |
| 3.1924 | 54350 | 0.0004 | - |
| 3.1953 | 54400 | 0.0005 | - |
| 3.1982 | 54450 | 0.0056 | - |
| 3.2012 | 54500 | 0.0024 | - |
| 3.2041 | 54550 | 0.0014 | - |
| 3.2070 | 54600 | 0.0016 | - |
| 3.2100 | 54650 | 0.0009 | - |
| 3.2129 | 54700 | 0.0005 | - |
| 3.2159 | 54750 | 0.0004 | - |
| 3.2188 | 54800 | 0.0006 | - |
| 3.2217 | 54850 | 0.0008 | - |
| 3.2247 | 54900 | 0.0006 | - |
| 3.2276 | 54950 | 0.001 | - |
| 3.2305 | 55000 | 0.0004 | - |
| 3.2335 | 55050 | 0.001 | - |
| 3.2364 | 55100 | 0.0008 | - |
| 3.2394 | 55150 | 0.0044 | - |
| 3.2423 | 55200 | 0.0029 | - |
| 3.2452 | 55250 | 0.0001 | - |
| 3.2482 | 55300 | 0.0012 | - |
| 3.2511 | 55350 | 0.0006 | - |
| 3.2540 | 55400 | 0.0001 | - |
| 3.2570 | 55450 | 0.0011 | - |
| 3.2599 | 55500 | 0.0005 | - |
| 3.2628 | 55550 | 0.0005 | - |
| 3.2658 | 55600 | 0.0001 | - |
| 3.2687 | 55650 | 0.0001 | - |
| 3.2717 | 55700 | 0.0008 | - |
| 3.2746 | 55750 | 0.001 | - |
| 3.2775 | 55800 | 0.0009 | - |
| 3.2805 | 55850 | 0.003 | - |
| 3.2834 | 55900 | 0.0019 | - |
| 3.2863 | 55950 | 0.0031 | - |
| 3.2893 | 56000 | 0.0012 | - |
| 3.2922 | 56050 | 0.0055 | - |
| 3.2952 | 56100 | 0.0007 | - |
| 3.2981 | 56150 | 0.0036 | - |
| 3.3010 | 56200 | 0.0001 | - |
| 3.3040 | 56250 | 0.0013 | - |
| 3.3069 | 56300 | 0.0004 | - |
| 3.3098 | 56350 | 0.0033 | - |
| 3.3128 | 56400 | 0.0001 | - |
| 3.3157 | 56450 | 0.0 | - |
| 3.3186 | 56500 | 0.0025 | - |
| 3.3216 | 56550 | 0.0021 | - |
| 3.3245 | 56600 | 0.0026 | - |
| 3.3275 | 56650 | 0.0011 | - |
| 3.3304 | 56700 | 0.0035 | - |
| 3.3333 | 56750 | 0.0006 | - |
| 3.3363 | 56800 | 0.0045 | - |
| 3.3392 | 56850 | 0.0006 | - |
| 3.3421 | 56900 | 0.0005 | - |
| 3.3451 | 56950 | 0.0007 | - |
| 3.3480 | 57000 | 0.0001 | - |
| 3.3510 | 57050 | 0.0039 | - |
| 3.3539 | 57100 | 0.0011 | - |
| 3.3568 | 57150 | 0.0006 | - |
| 3.3598 | 57200 | 0.0009 | - |
| 3.3627 | 57250 | 0.0009 | - |
| 3.3656 | 57300 | 0.0012 | - |
| 3.3686 | 57350 | 0.0013 | - |
| 3.3715 | 57400 | 0.0006 | - |
| 3.3744 | 57450 | 0.0001 | - |
| 3.3774 | 57500 | 0.0031 | - |
| 3.3803 | 57550 | 0.0004 | - |
| 3.3833 | 57600 | 0.0056 | - |
| 3.3862 | 57650 | 0.001 | - |
| 3.3891 | 57700 | 0.0 | - |
| 3.3921 | 57750 | 0.0006 | - |
| 3.3950 | 57800 | 0.0017 | - |
| 3.3979 | 57850 | 0.0014 | - |
| 3.4009 | 57900 | 0.0031 | - |
| 3.4038 | 57950 | 0.0001 | - |
| 3.4068 | 58000 | 0.0004 | - |
| 3.4097 | 58050 | 0.0001 | - |
| 3.4126 | 58100 | 0.0001 | - |
| 3.4156 | 58150 | 0.0007 | - |
| 3.4185 | 58200 | 0.0015 | - |
| 3.4214 | 58250 | 0.0015 | - |
| 3.4244 | 58300 | 0.0008 | - |
| 3.4273 | 58350 | 0.0007 | - |
| 3.4302 | 58400 | 0.0014 | - |
| 3.4332 | 58450 | 0.0006 | - |
| 3.4361 | 58500 | 0.0017 | - |
| 3.4391 | 58550 | 0.0005 | - |
| 3.4420 | 58600 | 0.0033 | - |
| 3.4449 | 58650 | 0.0001 | - |
| 3.4479 | 58700 | 0.0027 | - |
| 3.4508 | 58750 | 0.0009 | - |
| 3.4537 | 58800 | 0.0001 | - |
| 3.4567 | 58850 | 0.0001 | - |
| 3.4596 | 58900 | 0.0 | - |
| 3.4626 | 58950 | 0.0001 | - |
| 3.4655 | 59000 | 0.0 | - |
| 3.4684 | 59050 | 0.0003 | - |
| 3.4714 | 59100 | 0.0009 | - |
| 3.4743 | 59150 | 0.0001 | - |
| 3.4772 | 59200 | 0.0007 | - |
| 3.4802 | 59250 | 0.0027 | - |
| 3.4831 | 59300 | 0.001 | - |
| 3.4860 | 59350 | 0.0 | - |
| 3.4890 | 59400 | 0.0013 | - |
| 3.4919 | 59450 | 0.0001 | - |
| 3.4949 | 59500 | 0.0015 | - |
| 3.4978 | 59550 | 0.0008 | - |
| 3.5007 | 59600 | 0.0029 | - |
| 3.5037 | 59650 | 0.0 | - |
| 3.5066 | 59700 | 0.0 | - |
| 3.5095 | 59750 | 0.0008 | - |
| 3.5125 | 59800 | 0.0004 | - |
| 3.5154 | 59850 | 0.0 | - |
| 3.5184 | 59900 | 0.003 | - |
| 3.5213 | 59950 | 0.0014 | - |
| 3.5242 | 60000 | 0.001 | - |
| 3.5272 | 60050 | 0.0001 | - |
| 3.5301 | 60100 | 0.0015 | - |
| 3.5330 | 60150 | 0.0005 | - |
| 3.5360 | 60200 | 0.0007 | - |
| 3.5389 | 60250 | 0.0007 | - |
| 3.5419 | 60300 | 0.0026 | - |
| 3.5448 | 60350 | 0.0001 | - |
| 3.5477 | 60400 | 0.0008 | - |
| 3.5507 | 60450 | 0.0028 | - |
| 3.5536 | 60500 | 0.0004 | - |
| 3.5565 | 60550 | 0.0007 | - |
| 3.5595 | 60600 | 0.0013 | - |
| 3.5624 | 60650 | 0.0 | - |
| 3.5653 | 60700 | 0.0008 | - |
| 3.5683 | 60750 | 0.0007 | - |
| 3.5712 | 60800 | 0.0023 | - |
| 3.5742 | 60850 | 0.0004 | - |
| 3.5771 | 60900 | 0.0012 | - |
| 3.5800 | 60950 | 0.0027 | - |
| 3.5830 | 61000 | 0.0008 | - |
| 3.5859 | 61050 | 0.0009 | - |
| 3.5888 | 61100 | 0.0007 | - |
| 3.5918 | 61150 | 0.0017 | - |
| 3.5947 | 61200 | 0.0001 | - |
| 3.5977 | 61250 | 0.003 | - |
| 3.6006 | 61300 | 0.0044 | - |
| 3.6035 | 61350 | 0.0009 | - |
| 3.6065 | 61400 | 0.0043 | - |
| 3.6094 | 61450 | 0.0025 | - |
| 3.6123 | 61500 | 0.0032 | - |
| 3.6153 | 61550 | 0.0001 | - |
| 3.6182 | 61600 | 0.0004 | - |
| 3.6211 | 61650 | 0.0051 | - |
| 3.6241 | 61700 | 0.0031 | - |
| 3.6270 | 61750 | 0.0 | - |
| 3.6300 | 61800 | 0.0001 | - |
| 3.6329 | 61850 | 0.0001 | - |
| 3.6358 | 61900 | 0.0005 | - |
| 3.6388 | 61950 | 0.0 | - |
| 3.6417 | 62000 | 0.0011 | - |
| 3.6446 | 62050 | 0.0006 | - |
| 3.6476 | 62100 | 0.0008 | - |
| 3.6505 | 62150 | 0.0006 | - |
| 3.6535 | 62200 | 0.0006 | - |
| 3.6564 | 62250 | 0.0 | - |
| 3.6593 | 62300 | 0.001 | - |
| 3.6623 | 62350 | 0.0028 | - |
| 3.6652 | 62400 | 0.0001 | - |
| 3.6681 | 62450 | 0.0004 | - |
| 3.6711 | 62500 | 0.0044 | - |
| 3.6740 | 62550 | 0.0015 | - |
| 3.6769 | 62600 | 0.0001 | - |
| 3.6799 | 62650 | 0.0006 | - |
| 3.6828 | 62700 | 0.0046 | - |
| 3.6858 | 62750 | 0.0005 | - |
| 3.6887 | 62800 | 0.0 | - |
| 3.6916 | 62850 | 0.0005 | - |
| 3.6946 | 62900 | 0.001 | - |
| 3.6975 | 62950 | 0.0006 | - |
| 3.7004 | 63000 | 0.0026 | - |
| 3.7034 | 63050 | 0.0 | - |
| 3.7063 | 63100 | 0.0001 | - |
| 3.7093 | 63150 | 0.0001 | - |
| 3.7122 | 63200 | 0.0001 | - |
| 3.7151 | 63250 | 0.0038 | - |
| 3.7181 | 63300 | 0.0001 | - |
| 3.7210 | 63350 | 0.0029 | - |
| 3.7239 | 63400 | 0.0006 | - |
| 3.7269 | 63450 | 0.0035 | - |
| 3.7298 | 63500 | 0.0005 | - |
| 3.7327 | 63550 | 0.0 | - |
| 3.7357 | 63600 | 0.0009 | - |
| 3.7386 | 63650 | 0.0014 | - |
| 3.7416 | 63700 | 0.003 | - |
| 3.7445 | 63750 | 0.0001 | - |
| 3.7474 | 63800 | 0.0 | - |
| 3.7504 | 63850 | 0.0001 | - |
| 3.7533 | 63900 | 0.001 | - |
| 3.7562 | 63950 | 0.0045 | - |
| 3.7592 | 64000 | 0.004 | - |
| 3.7621 | 64050 | 0.0005 | - |
| 3.7651 | 64100 | 0.001 | - |
| 3.7680 | 64150 | 0.0005 | - |
| 3.7709 | 64200 | 0.0009 | - |
| 3.7739 | 64250 | 0.0001 | - |
| 3.7768 | 64300 | 0.0001 | - |
| 3.7797 | 64350 | 0.0015 | - |
| 3.7827 | 64400 | 0.0005 | - |
| 3.7856 | 64450 | 0.001 | - |
| 3.7885 | 64500 | 0.0001 | - |
| 3.7915 | 64550 | 0.001 | - |
| 3.7944 | 64600 | 0.0005 | - |
| 3.7974 | 64650 | 0.0001 | - |
| 3.8003 | 64700 | 0.001 | - |
| 3.8032 | 64750 | 0.0001 | - |
| 3.8062 | 64800 | 0.001 | - |
| 3.8091 | 64850 | 0.003 | - |
| 3.8120 | 64900 | 0.0027 | - |
| 3.8150 | 64950 | 0.0014 | - |
| 3.8179 | 65000 | 0.0005 | - |
| 3.8209 | 65050 | 0.0005 | - |
| 3.8238 | 65100 | 0.0009 | - |
| 3.8267 | 65150 | 0.0 | - |
| 3.8297 | 65200 | 0.0034 | - |
| 3.8326 | 65250 | 0.0005 | - |
| 3.8355 | 65300 | 0.0036 | - |
| 3.8385 | 65350 | 0.0014 | - |
| 3.8414 | 65400 | 0.0005 | - |
| 3.8443 | 65450 | 0.0004 | - |
| 3.8473 | 65500 | 0.0011 | - |
| 3.8502 | 65550 | 0.0001 | - |
| 3.8532 | 65600 | 0.0035 | - |
| 3.8561 | 65650 | 0.0006 | - |
| 3.8590 | 65700 | 0.0 | - |
| 3.8620 | 65750 | 0.0005 | - |
| 3.8649 | 65800 | 0.003 | - |
| 3.8678 | 65850 | 0.0005 | - |
| 3.8708 | 65900 | 0.0029 | - |
| 3.8737 | 65950 | 0.0011 | - |
| 3.8767 | 66000 | 0.0005 | - |
| 3.8796 | 66050 | 0.0005 | - |
| 3.8825 | 66100 | 0.001 | - |
| 3.8855 | 66150 | 0.0001 | - |
| 3.8884 | 66200 | 0.0001 | - |
| 3.8913 | 66250 | 0.0025 | - |
| 3.8943 | 66300 | 0.001 | - |
| 3.8972 | 66350 | 0.0036 | - |
| 3.9001 | 66400 | 0.003 | - |
| 3.9031 | 66450 | 0.0019 | - |
| 3.9060 | 66500 | 0.003 | - |
| 3.9090 | 66550 | 0.0006 | - |
| 3.9119 | 66600 | 0.0 | - |
| 3.9148 | 66650 | 0.0 | - |
| 3.9178 | 66700 | 0.0018 | - |
| 3.9207 | 66750 | 0.0005 | - |
| 3.9236 | 66800 | 0.001 | - |
| 3.9266 | 66850 | 0.0001 | - |
| 3.9295 | 66900 | 0.0005 | - |
| 3.9325 | 66950 | 0.0032 | - |
| 3.9354 | 67000 | 0.0033 | - |
| 3.9383 | 67050 | 0.001 | - |
| 3.9413 | 67100 | 0.0014 | - |
| 3.9442 | 67150 | 0.0 | - |
| 3.9471 | 67200 | 0.0005 | - |
| 3.9501 | 67250 | 0.0005 | - |
| 3.9530 | 67300 | 0.0035 | - |
| 3.9559 | 67350 | 0.0001 | - |
| 3.9589 | 67400 | 0.0015 | - |
| 3.9618 | 67450 | 0.001 | - |
| 3.9648 | 67500 | 0.0031 | - |
| 3.9677 | 67550 | 0.0001 | - |
| 3.9706 | 67600 | 0.0001 | - |
| 3.9736 | 67650 | 0.001 | - |
| 3.9765 | 67700 | 0.0059 | - |
| 3.9794 | 67750 | 0.0005 | - |
| 3.9824 | 67800 | 0.0005 | - |
| 3.9853 | 67850 | 0.0028 | - |
| 3.9883 | 67900 | 0.0015 | - |
| 3.9912 | 67950 | 0.0029 | - |
| 3.9941 | 68000 | 0.0033 | - |
| 3.9971 | 68050 | 0.001 | - |
| 4.0 | 68100 | 0.0005 | 0.0464 |
### Framework Versions
- Python: 3.11.11
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/all-MiniLM-L6-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "The chatbot understood my queries perfectly."}, {"text": "who are you"}, {"text": "I love the interface of the website."}, {"text": "au revoir"}, {"text": "I like how personalized the recommendations feel."}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/all-MiniLM-L6-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9047619047619048, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,424 |
sdadas/stella-pl-retrieval
|
sdadas
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"qwen2",
"text-generation",
"feature-extraction",
"sentence-similarity",
"transformers",
"custom_code",
"pl",
"arxiv:2402.13350",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-28T17:29:12Z |
2024-10-02T11:40:35+00:00
| 415 | 10 |
---
language: pl
license: gemma
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- source_sentence: 'zapytanie: Jak dożyć 100 lat?'
sentences:
- Trzeba zdrowo się odżywiać i uprawiać sport.
- Trzeba pić alkohol, imprezować i jeździć szybkimi autami.
- Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego
handlu.
---
<h1 align="center">Stella-PL-retrieval</h1>
This is a text encoder based on [stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) and further fine-tuned for Polish information retrieval tasks.
- In the first step, we adapted the model for Polish with the [multilingual knowledge distillation method](https://aclanthology.org/2020.emnlp-main.365/), using a diverse corpus of 20 million Polish-English text pairs.
- The second step involved fine-tuning the model with contrastive loss on a dataset consisting of 1.4 million queries. Positive and negative passages for each query were selected with the help of the [BAAI/bge-reranker-v2.5-gemma2-lightweight](https://huggingface.co/BAAI/bge-reranker-v2.5-gemma2-lightweight) reranker. The model was trained for three epochs with a batch size of 1024 queries.
The encoder transforms texts into 1024-dimensional vectors. The model is optimized specifically for Polish information retrieval tasks. If you need a more versatile encoder, suitable for a wider range of tasks such as semantic similarity or clustering, you should probably use the distilled version from the first step: [sdadas/stella-pl](https://huggingface.co/sdadas/stella-pl).
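For intuition, the first step can be sketched with the standard sentence-transformers distillation recipe. This is a hedged illustration with a single placeholder pair, not the actual training script; the model names are reused from above, while the data, batch size, and epoch count are stand-ins.
```python
# Hedged sketch of step 1 (multilingual knowledge distillation), not the exact script.
# A frozen English teacher produces target embeddings; the student learns to map the
# Polish side of each parallel pair onto the teacher's embedding of the English side.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

teacher = SentenceTransformer("dunzhang/stella_en_1.5B_v5", trust_remote_code=True)
student = SentenceTransformer("dunzhang/stella_en_1.5B_v5", trust_remote_code=True)

# Placeholder parallel data; the real training used ~20M Polish-English pairs.
pairs = [("One should eat healthy and do sports.",
          "Trzeba zdrowo się odżywiać i uprawiać sport.")]
examples = [InputExample(texts=[pl], label=teacher.encode(en)) for en, pl in pairs]

loader = DataLoader(examples, shuffle=True, batch_size=1)
student.fit(train_objectives=[(loader, losses.MSELoss(model=student))], epochs=1)
```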
## Usage (Sentence-Transformers)
The model utilizes the same prompts as the original [stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5).
For retrieval, queries should be prefixed with **"Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: "**.
For symmetric tasks such as semantic similarity, both texts should be prefixed with **"Instruct: Retrieve semantically similar text.\nQuery: "**.
Please note that the model uses a custom implementation, so you should add the `trust_remote_code=True` argument when loading it.
It is also recommended to use Flash Attention 2, which can be enabled with the `attn_implementation` argument.
You can use the model like this with [sentence-transformers](https://www.SBERT.net):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
"sdadas/stella-pl-retrieval",
trust_remote_code=True,
device="cuda",
model_kwargs={"attn_implementation": "flash_attention_2", "trust_remote_code": True}
)
model.bfloat16()
# Retrieval example
query_prefix = "Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: "
queries = [query_prefix + "Jak dożyć 100 lat?"]
answers = [
"Trzeba zdrowo się odżywiać i uprawiać sport.",
"Trzeba pić alkohol, imprezować i jeździć szybkimi autami.",
"Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."
]
queries_emb = model.encode(queries, convert_to_tensor=True, show_progress_bar=False)
answers_emb = model.encode(answers, convert_to_tensor=True, show_progress_bar=False)
best_answer = cos_sim(queries_emb, answers_emb).argmax().item()
print(answers[best_answer])
# Semantic similarity example
sim_prefix = "Instruct: Retrieve semantically similar text.\nQuery: "
sentences = [
sim_prefix + "Trzeba zdrowo się odżywiać i uprawiać sport.",
sim_prefix + "Warto jest prowadzić zdrowy tryb życia, uwzględniający aktywność fizyczną i dietę.",
sim_prefix + "One should eat healthy and engage in sports.",
sim_prefix + "Zakupy potwierdzasz PINem, który bezpiecznie ustalisz podczas aktywacji."
]
emb = model.encode(sentences, convert_to_tensor=True, show_progress_bar=False)
print(cos_sim(emb, emb))
```
## Evaluation Results
The model achieves an **NDCG@10** of **62.32** on the Polish Information Retrieval Benchmark. See the [PIRB Leaderboard](https://huggingface.co/spaces/sdadas/pirb) for detailed results.
## Citation
```bibtex
@article{dadas2024pirb,
title={{PIRB}: A Comprehensive Benchmark of Polish Dense and Hybrid Text Retrieval Methods},
author={Sławomir Dadas and Michał Perełkiewicz and Rafał Poświata},
year={2024},
eprint={2402.13350},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
<h1 align="center">Stella-PL-retrieval</h1>
This is a text encoder based on [stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) and further fine-tuned for Polish information retrieval tasks.
- In the first step, we adapted the model for Polish with the [multilingual knowledge distillation method](https://aclanthology.org/2020.emnlp-main.365/), using a diverse corpus of 20 million Polish-English text pairs.
- The second step involved fine-tuning the model with contrastive loss on a dataset consisting of 1.4 million queries. Positive and negative passages for each query were selected with the help of the [BAAI/bge-reranker-v2.5-gemma2-lightweight](https://huggingface.co/BAAI/bge-reranker-v2.5-gemma2-lightweight) reranker. The model was trained for three epochs with a batch size of 1024 queries.
The encoder transforms texts into 1024-dimensional vectors. The model is optimized specifically for Polish information retrieval tasks. If you need a more versatile encoder, suitable for a wider range of tasks such as semantic similarity or clustering, you should probably use the distilled version from the first step: [sdadas/stella-pl](https://huggingface.co/sdadas/stella-pl).
## Usage (Sentence-Transformers)
The model utilizes the same prompts as the original [stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5).
For retrieval, queries should be prefixed with **"Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: "**.
For symmetric tasks such as semantic similarity, both texts should be prefixed with **"Instruct: Retrieve semantically similar text.\nQuery: "**.
Please note that the model uses a custom implementation, so you should add the `trust_remote_code=True` argument when loading it.
It is also recommended to use Flash Attention 2, which can be enabled with the `attn_implementation` argument.
You can use the model like this with [sentence-transformers](https://www.SBERT.net):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
"sdadas/stella-pl-retrieval",
trust_remote_code=True,
device="cuda",
model_kwargs={"attn_implementation": "flash_attention_2", "trust_remote_code": True}
)
model.bfloat16()
# Retrieval example
query_prefix = "Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: "
queries = [query_prefix + "Jak dożyć 100 lat?"]
answers = [
"Trzeba zdrowo się odżywiać i uprawiać sport.",
"Trzeba pić alkohol, imprezować i jeździć szybkimi autami.",
"Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."
]
queries_emb = model.encode(queries, convert_to_tensor=True, show_progress_bar=False)
answers_emb = model.encode(answers, convert_to_tensor=True, show_progress_bar=False)
best_answer = cos_sim(queries_emb, answers_emb).argmax().item()
print(answers[best_answer])
# Semantic similarity example
sim_prefix = "Instruct: Retrieve semantically similar text.\nQuery: "
sentences = [
sim_prefix + "Trzeba zdrowo się odżywiać i uprawiać sport.",
sim_prefix + "Warto jest prowadzić zdrowy tryb życia, uwzględniający aktywność fizyczną i dietę.",
sim_prefix + "One should eat healthy and engage in sports.",
sim_prefix + "Zakupy potwierdzasz PINem, który bezpiecznie ustalisz podczas aktywacji."
]
emb = model.encode(sentences, convert_to_tensor=True, show_progress_bar=False)
print(cos_sim(emb, emb))
```
## Evaluation Results
The model achieves **NDCG@10** of **62.32** on the Polish Information Retrieval Benchmark. See [PIRB Leaderboard](https://huggingface.co/spaces/sdadas/pirb) for detailed results.
## Citation
```bibtex
@article{dadas2024pirb,
title={{PIRB}: A Comprehensive Benchmark of Polish Dense and Hybrid Text Retrieval Methods},
author={Sławomir Dadas and Michał Perełkiewicz and Rafał Poświata},
year={2024},
eprint={2402.13350},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "pl", "license": "gemma", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "widget": [{"source_sentence": "zapytanie: Jak dożyć 100 lat?", "sentences": ["Trzeba zdrowo się odżywiać i uprawiać sport.", "Trzeba pić alkohol, imprezować i jeździć szybkimi autami.", "Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."]}]}
|
task
|
[
"SEMANTIC_SIMILARITY"
] | 45,425 |
TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ
|
TheBloke
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"finetune",
"dpo",
"Instruct",
"augmentation",
"german",
"conversational",
"en",
"de",
"dataset:argilla/distilabel-math-preference-dpo",
"base_model:fblgit/LUNA-SOLARkrautLM-Instruct",
"base_model:quantized:fblgit/LUNA-SOLARkrautLM-Instruct",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | 2023-12-23T13:02:23Z |
2023-12-23T13:36:34+00:00
| 16 | 2 |
---
base_model: fblgit/LUNA-SOLARkrautLM-Instruct
datasets:
- argilla/distilabel-math-preference-dpo
language:
- en
- de
library_name: transformers
license: cc-by-nc-4.0
model_name: Luna SOLARkrautLM Instruct
pipeline_tag: text-generation
tags:
- finetune
- dpo
- Instruct
- augmentation
- german
inference: false
model_creator: FBL
model_type: solar
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Luna SOLARkrautLM Instruct - GPTQ
- Model creator: [FBL](https://huggingface.co/fblgit)
- Original model: [Luna SOLARkrautLM Instruct](https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct)
<!-- description start -->
# Description
This repo contains GPTQ model files for [FBL's Luna SOLARkrautLM Instruct](https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the available options, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF)
* [FBL's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
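For convenience, the same prompt can be built programmatically. A minimal sketch using transformers' `apply_chat_template` (this assumes the repo's tokenizer ships a ChatML chat template; if it does not, format the string manually as shown above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about AI"},
]
# add_generation_prompt=True appends the trailing <|im_start|>assistant turn
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```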
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 5.98 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 6.59 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 11.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 11.25 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 11.99 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 6.18 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
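If you want to confirm which parameters a downloaded branch actually uses, they are recorded in its `quantize_config.json`. A minimal sketch for inspecting it (the field names follow the usual AutoGPTQ layout, so treat them as an assumption and check the file itself; a `group_size` of `-1` corresponds to "None" in the table above):
```python
import json

# Assumes the repo was downloaded to this folder, as shown in the download section below
with open("LUNA-SOLARkrautLM-Instruct-GPTQ/quantize_config.json") as f:
    cfg = json.load(f)

print(f"bits={cfg.get('bits')}, group_size={cfg.get('group_size')}, "
      f"desc_act={cfg.get('desc_act')}, damp_percent={cfg.get('damp_percent')}")
```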
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `LUNA-SOLARkrautLM-Instruct-GPTQ`:
```shell
mkdir LUNA-SOLARkrautLM-Instruct-GPTQ
huggingface-cli download TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ --local-dir LUNA-SOLARkrautLM-Instruct-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir LUNA-SOLARkrautLM-Instruct-GPTQ
huggingface-cli download TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir LUNA-SOLARkrautLM-Instruct-GPTQ --local-dir-use-symlinks False
```
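The same download can also be scripted from Python. A small sketch using `huggingface_hub.snapshot_download` (the `revision` shown is just an example branch; omit it for `main`):
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # omit for the main branch
    local_dir="LUNA-SOLARkrautLM-Instruct-GPTQ",
    local_dir_use_symlinks=False,
)
```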
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir LUNA-SOLARkrautLM-Instruct-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ --local-dir LUNA-SOLARkrautLM-Instruct-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob).
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `LUNA-SOLARkrautLM-Instruct-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
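If you prefer raw HTTP over the Python client, TGI also exposes a `/generate` endpoint. A minimal sketch (the URL is a placeholder, and the prompt must already be formatted with the ChatML template):
```shell
curl https://your-endpoint-url-here/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "<|im_start|>user\nTell me about AI<|im_end|>\n<|im_start|>assistant\n", "parameters": {"max_new_tokens": 128, "temperature": 0.7, "top_p": 0.95}}'
```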
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
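To stream tokens to the console as they are generated, you can pass a transformers `TextStreamer` to `generate`. A minimal sketch reusing the `model`, `tokenizer` and `input_ids` from the example above:
```python
from transformers import TextStreamer

# Prints decoded tokens as they arrive instead of waiting for the full output
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(inputs=input_ids, streamer=streamer, do_sample=True,
                   temperature=0.7, top_p=0.95, top_k=40, max_new_tokens=512)
```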
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: FBL's Luna SOLARkrautLM Instruct

## VAGO solutions LUNA-SOLARkrautLM-Instruct
Introducing **LUNA-SOLARkrautLM-Instruct** – a UNA-Sauerkraut version of the powerful [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)!
Aligned with **DPO** and tamed with **UNA**.
# Table of Contents
1. [Overview of all LUNA-SOLARkrautLM-Instruct models](#all-sauerkrautlm-solar-instruct-models)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training Dataset](#training-dataset)
- [Data Contamination Test](#data-contamination-test-results)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## Model Details
**LUNA-SOLARkrautLM-Instruct**
- **Model Type:** LUNA-SOLARkrautLM-Instruct is a UNA Model based on [fblgit/UNA-SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0) and the powerful set of [SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct/)
- **Language(s):** English, German
- **License:** cc-by-nc-4.0
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected]) [Juanako.AI - UNA](mailto:[email protected])
### Training Dataset:
LUNA-SOLARkrautLM-Instruct was trained on a mix of German data augmentation and translated data.
It was aligned through **DPO** with our **new German SauerkrautLM-DPO dataset**, which uses parts of the SFT SauerkrautLM dataset
as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) outputs as rejected answers. We added **translated parts of [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** (our dataset does not contain any TruthfulQA prompts; see the Data Contamination Test Results) and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).**
We found that a simple translation of training data can lead to unnatural German phrasings.
Data augmentation techniques were therefore used to ensure grammatical and syntactical correctness and more natural German wording in our training data.
This improved the model's German language skills. Nevertheless, certain formulations may occur that are not entirely correct.
### Data Contamination Test Results
Some models on the HuggingFace leaderboard had problems with benchmark data leaking into their training sets.
We checked our SauerkrautLM-DPO dataset with a dedicated test [1], using this model as the target model and upstage/SOLAR-10.7B-Instruct-v1.0 as the reference model.
The HuggingFace team used the same methods [2, 3].
Our results, with `result < 0.1, %:` values well below 0.9, indicate that our dataset is free from contamination.
*The data contamination test results for HellaSwag and Winogrande will be added once [1] supports them.*
| Dataset | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **SauerkrautLM-DPO**| result < 0.1, %: 0.0 |result < 0.1, %: 0.09 | result < 0.1, %: 0.13 | result < 0.1, %: 0.16 |
[1] https://github.com/swj0419/detect-pretrain-code-contamination
[2] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06
[3] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230
### Prompt Template:
```
<|im_start|>system
Du bist LUNA-SOLARkrautLM, ein großes Sprachmodell, das höflich und kompetent antwortet.<|im_end|>
<|im_start|>user
Wie geht es dir?<|im_end|>
<|im_start|>assistant
```
```
### User:
Hello, how are you?
### Assistant:
Hi there! I am an AI language model, so I don't have personal feelings or emotions in the traditional sense. However, I can assure you that my systems and processes are functioning well at this moment, allowing me to provide helpful responses for your queries.
How may I assist you today?
```
## Evaluation
```
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric |Value | |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml |get-answer| 5|exact_match|0.6467|± |0.0132|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 0, batch_size: auto (64)
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|--------------|-------|------|-----:|------|-----:|---|-----:|
|truthfulqa_mc2|Yaml |none | 0|acc |0.7368|± |0.0149|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 25, batch_size: auto (32)
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|-------------|-------|------|-----:|--------|----:|---|-----:|
|arc_challenge|Yaml |none | 25|acc |0.692|± |0.0135|
| | |none | 25|acc_norm|0.715|± |0.0132|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 0, batch_size: auto (64)
| Tasks |Version|Filter|n-shot|Metric| Value | |Stderr|
|-----------|-------|------|-----:|------|------:|---|-----:|
|paws_de |Yaml |none | 0|acc | 0.3965|± |0.0109|
|wmt16-en-de|Yaml |none | 0|bleu | 3.5784|± |0.1325|
| | |none | 0|ter |64.5707|± |0.4514|
| | |none | 0|chrf |45.7068|± |0.3861|
|xnli_de |Yaml |none | 0|acc | 0.4129|± |0.0099|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 10, batch_size: auto (32)
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------|-------|------|-----:|--------|-----:|---|-----:|
|hellaswag|Yaml |none | 10|acc |0.7131|± |0.0045|
| | |none | 10|acc_norm|0.8815|± |0.0032|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 5, batch_size: auto (64)
| Tasks |Version|Filter|n-shot|Metric| Value | |Stderr|
|-----------|-------|------|-----:|------|------:|---|-----:|
|wmt16-de-en|Yaml |none | 5|bleu |14.9310|± |0.8014|
| | |none | 5|ter |46.3206|± |0.4087|
| | |none | 5|chrf |60.8637|± |0.4436|
|wmt16-en-de|Yaml |none | 5|bleu | 6.2016|± |0.2918|
| | |none | 5|ter |63.9997|± |0.4591|
| | |none | 5|chrf |51.1399|± |0.3978|
|xnli_de |Yaml |none | 5|acc | 0.4703|± |0.0100|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct,dtype=float16), gen_kwargs: (), limit: None, num_fewshot: 5, batch_size: auto (16)
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|---------------------------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |0.6461|± |0.1215|
| - humanities |N/A |none | 5|acc |0.5960|± |0.1200|
| - formal_logic |Yaml |none | 5|acc |0.4683|± |0.0446|
| - high_school_european_history |Yaml |none | 5|acc |0.8121|± |0.0305|
| - high_school_us_history |Yaml |none | 5|acc |0.8480|± |0.0252|
| - high_school_world_history |Yaml |none | 5|acc |0.8312|± |0.0244|
| - international_law |Yaml |none | 5|acc |0.7851|± |0.0375|
| - jurisprudence |Yaml |none | 5|acc |0.7685|± |0.0408|
| - logical_fallacies |Yaml |none | 5|acc |0.7423|± |0.0344|
| - moral_disputes |Yaml |none | 5|acc |0.7283|± |0.0239|
| - moral_scenarios |Yaml |none | 5|acc |0.3899|± |0.0163|
| - philosophy |Yaml |none | 5|acc |0.7074|± |0.0258|
| - prehistory |Yaml |none | 5|acc |0.7716|± |0.0234|
| - professional_law |Yaml |none | 5|acc |0.4824|± |0.0128|
| - world_religions |Yaml |none | 5|acc |0.7661|± |0.0325|
| - other |N/A |none | 5|acc |0.7097|± |0.0900|
| - business_ethics |Yaml |none | 5|acc |0.7700|± |0.0423|
| - clinical_knowledge |Yaml |none | 5|acc |0.6792|± |0.0287|
| - college_medicine |Yaml |none | 5|acc |0.6647|± |0.0360|
| - global_facts |Yaml |none | 5|acc |0.3600|± |0.0482|
| - human_aging |Yaml |none | 5|acc |0.6861|± |0.0311|
| - management |Yaml |none | 5|acc |0.8350|± |0.0368|
| - marketing |Yaml |none | 5|acc |0.8504|± |0.0234|
| - medical_genetics |Yaml |none | 5|acc |0.6700|± |0.0473|
| - miscellaneous |Yaml |none | 5|acc |0.7893|± |0.0146|
| - nutrition |Yaml |none | 5|acc |0.7549|± |0.0246|
| - professional_accounting |Yaml |none | 5|acc |0.5213|± |0.0298|
| - professional_medicine |Yaml |none | 5|acc |0.7353|± |0.0268|
| - virology |Yaml |none | 5|acc |0.5783|± |0.0384|
| - social_sciences |N/A |none | 5|acc |0.7501|± |0.0684|
| - econometrics |Yaml |none | 5|acc |0.5175|± |0.0470|
| - high_school_geography |Yaml |none | 5|acc |0.8485|± |0.0255|
| - high_school_government_and_politics|Yaml |none | 5|acc |0.8912|± |0.0225|
| - high_school_macroeconomics |Yaml |none | 5|acc |0.6615|± |0.0240|
| - high_school_microeconomics |Yaml |none | 5|acc |0.7311|± |0.0288|
| - high_school_psychology |Yaml |none | 5|acc |0.8385|± |0.0158|
| - human_sexuality |Yaml |none | 5|acc |0.7023|± |0.0401|
| - professional_psychology |Yaml |none | 5|acc |0.6683|± |0.0190|
| - public_relations |Yaml |none | 5|acc |0.6909|± |0.0443|
| - security_studies |Yaml |none | 5|acc |0.7633|± |0.0272|
| - sociology |Yaml |none | 5|acc |0.8358|± |0.0262|
| - us_foreign_policy |Yaml |none | 5|acc |0.8800|± |0.0327|
| - stem |N/A |none | 5|acc |0.5569|± |0.1360|
| - abstract_algebra |Yaml |none | 5|acc |0.3800|± |0.0488|
| - anatomy |Yaml |none | 5|acc |0.6148|± |0.0420|
| - astronomy |Yaml |none | 5|acc |0.7237|± |0.0364|
| - college_biology |Yaml |none | 5|acc |0.7708|± |0.0351|
| - college_chemistry |Yaml |none | 5|acc |0.4600|± |0.0501|
| - college_computer_science |Yaml |none | 5|acc |0.5400|± |0.0501|
| - college_mathematics |Yaml |none | 5|acc |0.2700|± |0.0446|
| - college_physics |Yaml |none | 5|acc |0.3333|± |0.0469|
| - computer_security |Yaml |none | 5|acc |0.7300|± |0.0446|
| - conceptual_physics |Yaml |none | 5|acc |0.6213|± |0.0317|
| - electrical_engineering |Yaml |none | 5|acc |0.6276|± |0.0403|
| - elementary_mathematics |Yaml |none | 5|acc |0.4788|± |0.0257|
| - high_school_biology |Yaml |none | 5|acc |0.8065|± |0.0225|
| - high_school_chemistry |Yaml |none | 5|acc |0.5123|± |0.0352|
| - high_school_computer_science |Yaml |none | 5|acc |0.7000|± |0.0461|
| - high_school_mathematics |Yaml |none | 5|acc |0.3889|± |0.0297|
| - high_school_physics |Yaml |none | 5|acc |0.3576|± |0.0391|
| - high_school_statistics |Yaml |none | 5|acc |0.5926|± |0.0335|
| - machine_learning |Yaml |none | 5|acc |0.4554|± |0.0473|
| Groups |Version|Filter|n-shot|Metric|Value | |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |0.6461|± |0.1215|
| - humanities |N/A |none | 5|acc |0.5960|± |0.1200|
| - other |N/A |none | 5|acc |0.7097|± |0.0900|
| - social_sciences|N/A |none | 5|acc |0.7501|± |0.0684|
| - stem |N/A |none | 5|acc |0.5569|± |0.1360|
```
### MT-Bench
```
########## Average ##########
score
model
gpt-4 8.990625
gpt-3.5-turbo 7.943750
claude-instant-v1 7.905660
claude-v1 7.900000
UNA-SOLAR-10.7B-Instruct-v1.0 7.521875
LUNA-SOLARkrautLM-Instruct 7.462500
vicuna-33b-v1.3 7.121875
wizardlm-30b 7.009375
Llama-2-70b-chat 6.856250
Llama-2-13b-chat 6.650000
guanaco-33b 6.528125
tulu-30b 6.434375
guanaco-65b 6.409375
oasst-sft-7-llama-30b 6.409375
palm-2-chat-bison-001 6.400000
mpt-30b-chat 6.393750
vicuna-13b-v1.3 6.387500
wizardlm-13b 6.353125
Llama-2-7b-chat 6.268750
vicuna-7b-v1.3 5.996875
baize-v2-13b 5.750000
nous-hermes-13b 5.553459
mpt-7b-chat 5.459119
gpt4all-13b-snoozy 5.452830
koala-13b 5.350000
mpt-30b-instruct 5.218750
falcon-40b-instruct 5.168750
h2ogpt-oasst-open-llama-13b 4.625000
alpaca-13b 4.531250
chatglm-6b 4.500000
oasst-sft-4-pythia-12b 4.318750
rwkv-4-raven-14b 3.984375
dolly-v2-12b 3.275000
fastchat-t5-3b 3.040625
stablelm-tuned-alpha-7b 2.753125
llama-13b 2.606250
```
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startup, [VAGO Solutions](https://huggingface.co/VAGOsolutions), where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
[Juanako.AI](https://huggingface.co/fblgit) is likewise seeking support and investment for its startup, and we are open to collaborating with other labs to make awesome models like this one.
## Acknowledgement
A big hug to [VAGO Solutions](https://huggingface.co/VAGOsolutions); we merely applied our UNA transformers library to their code and dataset, nothing else. This wouldn't have been possible without them. Thanks!
Many thanks to [argilla](https://huggingface.co/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the open-source community. And of course a big thanks to [upstage](https://huggingface.co/upstage) for providing the open-source community with their latest technology!
| null |
Non_BioNLP
|
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Luna SOLARkrautLM Instruct - GPTQ
- Model creator: [FBL](https://huggingface.co/fblgit)
- Original model: [Luna SOLARkrautLM Instruct](https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct)
<!-- description start -->
# Description
This repo contains GPTQ model files for [FBL's Luna SOLARkrautLM Instruct](https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the available options, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF)
* [FBL's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
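For convenience, the same prompt can be built programmatically. A minimal sketch using transformers' `apply_chat_template` (this assumes the repo's tokenizer ships a ChatML chat template; if it does not, format the string manually as shown above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about AI"},
]
# add_generation_prompt=True appends the trailing <|im_start|>assistant turn
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```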
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 5.98 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 6.59 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 11.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 11.25 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 11.99 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 2048 | 6.18 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
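If you want to confirm which parameters a downloaded branch actually uses, they are recorded in its `quantize_config.json`. A minimal sketch for inspecting it (the field names follow the usual AutoGPTQ layout, so treat them as an assumption and check the file itself; a `group_size` of `-1` corresponds to "None" in the table above):
```python
import json

# Assumes the repo was downloaded to this folder, as shown in the download section below
with open("LUNA-SOLARkrautLM-Instruct-GPTQ/quantize_config.json") as f:
    cfg = json.load(f)

print(f"bits={cfg.get('bits')}, group_size={cfg.get('group_size')}, "
      f"desc_act={cfg.get('desc_act')}, damp_percent={cfg.get('damp_percent')}")
```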
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `LUNA-SOLARkrautLM-Instruct-GPTQ`:
```shell
mkdir LUNA-SOLARkrautLM-Instruct-GPTQ
huggingface-cli download TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ --local-dir LUNA-SOLARkrautLM-Instruct-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir LUNA-SOLARkrautLM-Instruct-GPTQ
huggingface-cli download TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir LUNA-SOLARkrautLM-Instruct-GPTQ --local-dir-use-symlinks False
```
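The same download can also be scripted from Python. A small sketch using `huggingface_hub.snapshot_download` (the `revision` shown is just an example branch; omit it for `main`):
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # omit for the main branch
    local_dir="LUNA-SOLARkrautLM-Instruct-GPTQ",
    local_dir_use_symlinks=False,
)
```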
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir LUNA-SOLARkrautLM-Instruct-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ --local-dir LUNA-SOLARkrautLM-Instruct-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob).
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `LUNA-SOLARkrautLM-Instruct-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
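If you prefer raw HTTP over the Python client, TGI also exposes a `/generate` endpoint. A minimal sketch (the URL is a placeholder, and the prompt must already be formatted with the ChatML template):
```shell
curl https://your-endpoint-url-here/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "<|im_start|>user\nTell me about AI<|im_end|>\n<|im_start|>assistant\n", "parameters": {"max_new_tokens": 128, "temperature": 0.7, "top_p": 0.95}}'
```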
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
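To stream tokens to the console as they are generated, you can pass a transformers `TextStreamer` to `generate`. A minimal sketch reusing the `model`, `tokenizer` and `input_ids` from the example above:
```python
from transformers import TextStreamer

# Prints decoded tokens as they arrive instead of waiting for the full output
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(inputs=input_ids, streamer=streamer, do_sample=True,
                   temperature=0.7, top_p=0.95, top_k=40, max_new_tokens=512)
```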
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: FBL's Luna SOLARkrautLM Instruct

## VAGO solutions LUNA-SOLARkrautLM-Instruct
Introducing **LUNA-SOLARkrautLM-Instruct** – a UNA-Sauerkraut version of the powerful [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)!
Aligned with **DPO** and tamed with **UNA**.
# Table of Contents
1. [Overview of all LUNA-SOLARkrautLM-Instruct models](#all-sauerkrautlm-solar-instruct-models)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training Dataset](#training-dataset)
- [Data Contamination Test](#data-contamination-test-results)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## Model Details
**LUNA-SOLARkrautLM-Instruct**
- **Model Type:** LUNA-SOLARkrautLM-Instruct is a UNA Model based on [fblgit/UNA-SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0) and the powerful set of [SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct/)
- **Language(s):** English, German
- **License:** cc-by-nc-4.0
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected]) [Juanako.AI - UNA](mailto:[email protected])
### Training Dataset:
LUNA-SOLARkrautLM-Instruct was trained on a mix of German data augmentation and translated data.
It was aligned through **DPO** with our **new German SauerkrautLM-DPO dataset**, which uses parts of the SFT SauerkrautLM dataset
as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) outputs as rejected answers. We added **translated parts of [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** (our dataset does not contain any TruthfulQA prompts; see the Data Contamination Test Results) and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).**
We found that a simple translation of training data can lead to unnatural German phrasings.
Data augmentation techniques were therefore used to ensure grammatical and syntactical correctness and more natural German wording in our training data.
This improved the model's German language skills. Nevertheless, certain formulations may occur that are not entirely correct.
### Data Contamination Test Results
Some models on the HuggingFace leaderboard had problems with benchmark data leaking into their training sets.
We checked our SauerkrautLM-DPO dataset with a dedicated test [1], using this model as the target model and upstage/SOLAR-10.7B-Instruct-v1.0 as the reference model.
The HuggingFace team used the same methods [2, 3].
Our results, with `result < 0.1, %:` values well below 0.9, indicate that our dataset is free from contamination.
*The data contamination test results for HellaSwag and Winogrande will be added once [1] supports them.*
| Dataset | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **SauerkrautLM-DPO**| result < 0.1, %: 0.0 |result < 0.1, %: 0.09 | result < 0.1, %: 0.13 | result < 0.1, %: 0.16 |
[1] https://github.com/swj0419/detect-pretrain-code-contamination
[2] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06
[3] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230
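For intuition, the test in [1] builds on a Min-K% probability statistic: tokens a model saw during training tend to receive higher probabilities, even at their least likely positions. A minimal sketch of that statistic, assuming a causal LM and tokenizer loaded via `transformers` (an illustration only, not the reference implementation from [1]):
```python
import torch

def min_k_percent_logprob(model, tokenizer, text, k=0.2):
    # average log-probability of the k% least likely tokens under the model;
    # texts seen during training tend to score higher than unseen texts
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_logprobs = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    n = max(1, int(k * token_logprobs.numel()))
    return token_logprobs.flatten().topk(n, largest=False).values.mean().item()
```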
### Prompt Template:
```
<|im_start|>system
Du bist LUNA-SOLARkrautLM, ein großes Sprachmodell, das höflich und kompetent antwortet.<|im_end|>
<|im_start|>user
Wie geht es dir?<|im_end|>
<|im_start|>assistant
```
```
### User:
Hello, how are you?
### Assistant:
Hi there! I am an AI language model, so I don't have personal feelings or emotions in the traditional sense. However, I can assure you that my systems and processes are functioning well at this moment, allowing me to provide helpful responses for your queries.
How may I assist you today?
```
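A minimal inference sketch using the ChatML template above (this assumes the repo's tokenizer ships a matching chat template; if not, build the ChatML string manually as shown):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/LUNA-SOLARkrautLM-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "Du bist LUNA-SOLARkrautLM, ein großes Sprachmodell, das höflich und kompetent antwortet."},
    {"role": "user", "content": "Wie geht es dir?"},
]
# renders the <|im_start|>...<|im_end|> blocks shown above
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```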
## Evaluation
```
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric |Value | |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml |get-answer| 5|exact_match|0.6467|± |0.0132|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 0, batch_size: auto (64)
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|--------------|-------|------|-----:|------|-----:|---|-----:|
|truthfulqa_mc2|Yaml |none | 0|acc |0.7368|± |0.0149|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 25, batch_size: auto (32)
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|-------------|-------|------|-----:|--------|----:|---|-----:|
|arc_challenge|Yaml |none | 25|acc |0.692|± |0.0135|
| | |none | 25|acc_norm|0.715|± |0.0132|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 0, batch_size: auto (64)
| Tasks |Version|Filter|n-shot|Metric| Value | |Stderr|
|-----------|-------|------|-----:|------|------:|---|-----:|
|paws_de |Yaml |none | 0|acc | 0.3965|± |0.0109|
|wmt16-en-de|Yaml |none | 0|bleu | 3.5784|± |0.1325|
| | |none | 0|ter |64.5707|± |0.4514|
| | |none | 0|chrf |45.7068|± |0.3861|
|xnli_de |Yaml |none | 0|acc | 0.4129|± |0.0099|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 10, batch_size: auto (32)
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------|-------|------|-----:|--------|-----:|---|-----:|
|hellaswag|Yaml |none | 10|acc |0.7131|± |0.0045|
| | |none | 10|acc_norm|0.8815|± |0.0032|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 5, batch_size: auto (64)
| Tasks |Version|Filter|n-shot|Metric| Value | |Stderr|
|-----------|-------|------|-----:|------|------:|---|-----:|
|wmt16-de-en|Yaml |none | 5|bleu |14.9310|± |0.8014|
| | |none | 5|ter |46.3206|± |0.4087|
| | |none | 5|chrf |60.8637|± |0.4436|
|wmt16-en-de|Yaml |none | 5|bleu | 6.2016|± |0.2918|
| | |none | 5|ter |63.9997|± |0.4591|
| | |none | 5|chrf |51.1399|± |0.3978|
|xnli_de |Yaml |none | 5|acc | 0.4703|± |0.0100|
hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct,dtype=float16), gen_kwargs: (), limit: None, num_fewshot: 5, batch_size: auto (16)
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|---------------------------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |0.6461|± |0.1215|
| - humanities |N/A |none | 5|acc |0.5960|± |0.1200|
| - formal_logic |Yaml |none | 5|acc |0.4683|± |0.0446|
| - high_school_european_history |Yaml |none | 5|acc |0.8121|± |0.0305|
| - high_school_us_history |Yaml |none | 5|acc |0.8480|± |0.0252|
| - high_school_world_history |Yaml |none | 5|acc |0.8312|± |0.0244|
| - international_law |Yaml |none | 5|acc |0.7851|± |0.0375|
| - jurisprudence |Yaml |none | 5|acc |0.7685|± |0.0408|
| - logical_fallacies |Yaml |none | 5|acc |0.7423|± |0.0344|
| - moral_disputes |Yaml |none | 5|acc |0.7283|± |0.0239|
| - moral_scenarios |Yaml |none | 5|acc |0.3899|± |0.0163|
| - philosophy |Yaml |none | 5|acc |0.7074|± |0.0258|
| - prehistory |Yaml |none | 5|acc |0.7716|± |0.0234|
| - professional_law |Yaml |none | 5|acc |0.4824|± |0.0128|
| - world_religions |Yaml |none | 5|acc |0.7661|± |0.0325|
| - other |N/A |none | 5|acc |0.7097|± |0.0900|
| - business_ethics |Yaml |none | 5|acc |0.7700|± |0.0423|
| - clinical_knowledge |Yaml |none | 5|acc |0.6792|± |0.0287|
| - college_medicine |Yaml |none | 5|acc |0.6647|± |0.0360|
| - global_facts |Yaml |none | 5|acc |0.3600|± |0.0482|
| - human_aging |Yaml |none | 5|acc |0.6861|± |0.0311|
| - management |Yaml |none | 5|acc |0.8350|± |0.0368|
| - marketing |Yaml |none | 5|acc |0.8504|± |0.0234|
| - medical_genetics |Yaml |none | 5|acc |0.6700|± |0.0473|
| - miscellaneous |Yaml |none | 5|acc |0.7893|± |0.0146|
| - nutrition |Yaml |none | 5|acc |0.7549|± |0.0246|
| - professional_accounting |Yaml |none | 5|acc |0.5213|± |0.0298|
| - professional_medicine |Yaml |none | 5|acc |0.7353|± |0.0268|
| - virology |Yaml |none | 5|acc |0.5783|± |0.0384|
| - social_sciences |N/A |none | 5|acc |0.7501|± |0.0684|
| - econometrics |Yaml |none | 5|acc |0.5175|± |0.0470|
| - high_school_geography |Yaml |none | 5|acc |0.8485|± |0.0255|
| - high_school_government_and_politics|Yaml |none | 5|acc |0.8912|± |0.0225|
| - high_school_macroeconomics |Yaml |none | 5|acc |0.6615|± |0.0240|
| - high_school_microeconomics |Yaml |none | 5|acc |0.7311|± |0.0288|
| - high_school_psychology |Yaml |none | 5|acc |0.8385|± |0.0158|
| - human_sexuality |Yaml |none | 5|acc |0.7023|± |0.0401|
| - professional_psychology |Yaml |none | 5|acc |0.6683|± |0.0190|
| - public_relations |Yaml |none | 5|acc |0.6909|± |0.0443|
| - security_studies |Yaml |none | 5|acc |0.7633|± |0.0272|
| - sociology |Yaml |none | 5|acc |0.8358|± |0.0262|
| - us_foreign_policy |Yaml |none | 5|acc |0.8800|± |0.0327|
| - stem |N/A |none | 5|acc |0.5569|± |0.1360|
| - abstract_algebra |Yaml |none | 5|acc |0.3800|± |0.0488|
| - anatomy |Yaml |none | 5|acc |0.6148|± |0.0420|
| - astronomy |Yaml |none | 5|acc |0.7237|± |0.0364|
| - college_biology |Yaml |none | 5|acc |0.7708|± |0.0351|
| - college_chemistry |Yaml |none | 5|acc |0.4600|± |0.0501|
| - college_computer_science |Yaml |none | 5|acc |0.5400|± |0.0501|
| - college_mathematics |Yaml |none | 5|acc |0.2700|± |0.0446|
| - college_physics |Yaml |none | 5|acc |0.3333|± |0.0469|
| - computer_security |Yaml |none | 5|acc |0.7300|± |0.0446|
| - conceptual_physics |Yaml |none | 5|acc |0.6213|± |0.0317|
| - electrical_engineering |Yaml |none | 5|acc |0.6276|± |0.0403|
| - elementary_mathematics |Yaml |none | 5|acc |0.4788|± |0.0257|
| - high_school_biology |Yaml |none | 5|acc |0.8065|± |0.0225|
| - high_school_chemistry |Yaml |none | 5|acc |0.5123|± |0.0352|
| - high_school_computer_science |Yaml |none | 5|acc |0.7000|± |0.0461|
| - high_school_mathematics |Yaml |none | 5|acc |0.3889|± |0.0297|
| - high_school_physics |Yaml |none | 5|acc |0.3576|± |0.0391|
| - high_school_statistics |Yaml |none | 5|acc |0.5926|± |0.0335|
| - machine_learning |Yaml |none | 5|acc |0.4554|± |0.0473|
| Groups |Version|Filter|n-shot|Metric|Value | |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |0.6461|± |0.1215|
| - humanities |N/A |none | 5|acc |0.5960|± |0.1200|
| - other |N/A |none | 5|acc |0.7097|± |0.0900|
| - social_sciences|N/A |none | 5|acc |0.7501|± |0.0684|
| - stem |N/A |none | 5|acc |0.5569|± |0.1360|
```
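The output above is from EleutherAI's lm-evaluation-harness. A sketch of reproducing a single task through its Python API (an assumption: this targets a recent harness version, and the exact API surface may differ between releases):
```python
import lm_eval

# five-shot GSM8K, matching the first run above (flags/API vary by version)
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=fblgit/LUNA-SOLARkrautLM-Instruct",
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size="auto",
)
print(results["results"]["gsm8k"])
```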
### MT-Bench
```
########## Average ##########
score
model
gpt-4 8.990625
gpt-3.5-turbo 7.943750
claude-instant-v1 7.905660
claude-v1 7.900000
UNA-SOLAR-10.7B-Instruct-v1.0 7.521875
LUNA-SOLARkrautLM-Instruct 7.462500
vicuna-33b-v1.3 7.121875
wizardlm-30b 7.009375
Llama-2-70b-chat 6.856250
Llama-2-13b-chat 6.650000
guanaco-33b 6.528125
tulu-30b 6.434375
guanaco-65b 6.409375
oasst-sft-7-llama-30b 6.409375
palm-2-chat-bison-001 6.400000
mpt-30b-chat 6.393750
vicuna-13b-v1.3 6.387500
wizardlm-13b 6.353125
Llama-2-7b-chat 6.268750
vicuna-7b-v1.3 5.996875
baize-v2-13b 5.750000
nous-hermes-13b 5.553459
mpt-7b-chat 5.459119
gpt4all-13b-snoozy 5.452830
koala-13b 5.350000
mpt-30b-instruct 5.218750
falcon-40b-instruct 5.168750
h2ogpt-oasst-open-llama-13b 4.625000
alpaca-13b 4.531250
chatglm-6b 4.500000
oasst-sft-4-pythia-12b 4.318750
rwkv-4-raven-14b 3.984375
dolly-v2-12b 3.275000
fastchat-t5-3b 3.040625
stablelm-tuned-alpha-7b 2.753125
llama-13b 2.606250
```
## Disclaimer
We must inform users that, despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
Beyond that, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startup, [VAGO Solutions](https://huggingface.co/VAGOsolutions), where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
[Juanako.AI](https://huggingface.co/fblgit) is also seeking support and investment for its startup; we are likewise open to collaborating with other labs to make awesome models like this one.
## Acknowledgement
Big hug to [VAGO Solutions](https://huggingface.co/VAGOsolutions); we merely applied our UNA transformers library to their code and dataset, nothing else. This wouldn't have been possible without them, thanks!
Many thanks to [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the open-source community. And of course a big thanks to [upstage](https://huggingface.co/upstage) for providing the open-source community with their latest technology!
|
{"base_model": "fblgit/LUNA-SOLARkrautLM-Instruct", "datasets": ["argilla/distilabel-math-preference-dpo"], "language": ["en", "de"], "library_name": "transformers", "license": "cc-by-nc-4.0", "model_name": "Luna SOLARkrautLM Instruct", "pipeline_tag": "text-generation", "tags": ["finetune", "dpo", "Instruct", "augmentation", "german"], "inference": false, "model_creator": "FBL", "model_type": "solar", "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n", "quantized_by": "TheBloke"}
|
task
|
[
"TRANSLATION"
] | 45,426 |
klcsp/llama3-8b-lora-summarization-11-v1
|
klcsp
| null |
[
"peft",
"tensorboard",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
] | 2024-11-17T23:51:41Z |
2024-11-18T11:22:36+00:00
| 1 | 0 |
---
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- generator
library_name: peft
license: llama3
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama3-8b-lora-summarization-11-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-lora-summarization-11-v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3661
## Model description
More information needed
## Intended uses & limitations
More information needed
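The card does not include a usage example. As a minimal sketch (assumptions: the adapter attaches to the base model in the standard PEFT way, and the prompt format below is illustrative, since the card does not document the one used in training):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "klcsp/llama3-8b-lora-summarization-11-v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

# illustrative prompt; the actual training prompt format is not documented
prompt = "Summarize the following text:\n<your text here>\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```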
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- total_eval_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6751 | 0.9955 | 111 | 2.3661 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.2
- Pytorch 2.3.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-lora-summarization-11-v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- total_eval_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6751 | 0.9955 | 111 | 2.3661 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.2
- Pytorch 2.3.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
{"base_model": "meta-llama/Meta-Llama-3-8B", "datasets": ["generator"], "library_name": "peft", "license": "llama3", "tags": ["trl", "sft", "generated_from_trainer"], "model-index": [{"name": "llama3-8b-lora-summarization-11-v1", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,427 |
trungnguyentran/t5-base-en2vi
|
trungnguyentran
|
translation
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"en",
"vi",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-11-19T03:14:52Z |
2023-12-03T14:29:33+00:00
| 109 | 0 |
---
language:
- en
- vi
metrics:
- bleu
pipeline_tag: translation
widget:
- text: However, natural intelligence is not limited to just a single modality. Humans
can read and write text. We can see images and watch videos. We listen to music
to relax and watch out for strange noises to detect danger. Being able to work
with multimodal data is essential for us or any AI to operate in the real world.
output:
text: Tuy nhiên, trí tuệ tự nhiên không chỉ giới hạn ở một phương thức duy nhất.
Con người có thể đọc và viết văn bản. Chúng ta có thể xem hình ảnh và xem video.
Chúng ta nghe nhạc để thư giãn và cảnh giác với những tiếng ồn lạ để phát hiện
nguy hiểm. Có thể làm việc với dữ liệu đa phương thức là điều cần thiết cho
chúng ta hoặc bất kỳ AI nào để hoạt động trong thế giới thực.
---
Pretrained on a corpus from vietnews and fine-tuned on more than 200 ebook pairs, TED talks, and OpenSubtitles
| Epoch | Training Loss | Validation Loss | Bleu | Gen Len |
|-------|---------------|------------------|-----------|------------|
| 1 | No log | 0.222610 | 25.757700 | 15.431200 |
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
checkpoint = "trungnguyentran/t5-base-en2vi"
# device_map applies to the model, not the tokenizer
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto")
inputs = ['However, natural intelligence is not limited to just a single modality. Humans can read and write text. We can see images and watch videos. We listen to music to relax and watch out for strange noises to detect danger. Being able to work with multimodal data is essential for us or any AI to operate in the real world.',
'Not all multimodal systems are LMMs. For example, text-to-image models like Midjourney, Stable Diffusion, and Dall-E are multimodal but don’t have a language model component. Multimodal can mean one or more of the following:',
"A question that I’ve been asked a lot recently is how large language models (LLMs) will change machine learning workflows."]
# tokenize and move the batch to the model's device before generating
batch = tokenizer(inputs, return_tensors="pt", padding=True).to(model.device)
outputs = model.generate(batch.input_ids, max_length=512)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
# 'Tuy nhiên, trí thông minh tự nhiên không chỉ giới hạn ở một phương thức duy nhất. Con người có thể đọc và viết văn bản. Chúng ta có thể nhìn hình ảnh và xem video. Chúng ta nghe nhạc để thư giãn và theo dõi những tiếng ồn lạ lùng để phát hiện nguy hiểm. Có thể làm việc với dữ liệu đa phương thức là điều cần thiết cho chúng ta hoặc bất kỳ AI nào để hoạt động trong thế giới thực.',
# 'Không phải tất cả các hệ thống đa phương thức đều là các LMM. Ví dụ, các mô hình văn bản đến hình ảnh như Midjourney, Stable Diffusion và Dall-E là đa phương thức nhưng không có thành phần mô hình ngôn ngữ.',
# 'Câu hỏi mà tôi được hỏi rất nhiều gần đây là các mô hình ngôn ngữ lớn (LLM) sẽ thay đổi quy trình học máy như thế nào.'
```
| null |
Non_BioNLP
|
Pretrained on a corpus from vietnews and fine-tuned on more than 200 ebook pairs, TED talks, and OpenSubtitles
| Epoch | Training Loss | Validation Loss | Bleu | Gen Len |
|-------|---------------|------------------|-----------|------------|
| 1 | No log | 0.222610 | 25.757700 | 15.431200 |
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
checkpoint = "trungnguyentran/t5-base-en2vi"
# device_map applies to the model, not the tokenizer
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto")
inputs = ['However, natural intelligence is not limited to just a single modality. Humans can read and write text. We can see images and watch videos. We listen to music to relax and watch out for strange noises to detect danger. Being able to work with multimodal data is essential for us or any AI to operate in the real world.',
'Not all multimodal systems are LMMs. For example, text-to-image models like Midjourney, Stable Diffusion, and Dall-E are multimodal but don’t have a language model component. Multimodal can mean one or more of the following:',
"A question that I’ve been asked a lot recently is how large language models (LLMs) will change machine learning workflows."]
# tokenize and move the batch to the model's device before generating
batch = tokenizer(inputs, return_tensors="pt", padding=True).to(model.device)
outputs = model.generate(batch.input_ids, max_length=512)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
# 'Tuy nhiên, trí thông minh tự nhiên không chỉ giới hạn ở một phương thức duy nhất. Con người có thể đọc và viết văn bản. Chúng ta có thể nhìn hình ảnh và xem video. Chúng ta nghe nhạc để thư giãn và theo dõi những tiếng ồn lạ lùng để phát hiện nguy hiểm. Có thể làm việc với dữ liệu đa phương thức là điều cần thiết cho chúng ta hoặc bất kỳ AI nào để hoạt động trong thế giới thực.',
# 'Không phải tất cả các hệ thống đa phương thức đều là các LMM. Ví dụ, các mô hình văn bản đến hình ảnh như Midjourney, Stable Diffusion và Dall-E là đa phương thức nhưng không có thành phần mô hình ngôn ngữ.',
# 'Câu hỏi mà tôi được hỏi rất nhiều gần đây là các mô hình ngôn ngữ lớn (LLM) sẽ thay đổi quy trình học máy như thế nào.'
```
|
{"language": ["en", "vi"], "metrics": ["bleu"], "pipeline_tag": "translation", "widget": [{"text": "However, natural intelligence is not limited to just a single modality. Humans can read and write text. We can see images and watch videos. We listen to music to relax and watch out for strange noises to detect danger. Being able to work with multimodal data is essential for us or any AI to operate in the real world.", "output": {"text": "Tuy nhiên, trí tuệ tự nhiên không chỉ giới hạn ở một phương thức duy nhất. Con người có thể đọc và viết văn bản. Chúng ta có thể xem hình ảnh và xem video. Chúng ta nghe nhạc để thư giãn và cảnh giác với những tiếng ồn lạ để phát hiện nguy hiểm. Có thể làm việc với dữ liệu đa phương thức là điều cần thiết cho chúng ta hoặc bất kỳ AI nào để hoạt động trong thế giới thực."}}]}
|
task
|
[
"TRANSLATION"
] | 45,428 |
caspro/mbart-large-50_Nepali_News_Summarization_QLoRA_8bit
|
caspro
|
summarization
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"summarization",
"ne",
"base_model:facebook/mbart-large-50",
"base_model:adapter:facebook/mbart-large-50",
"license:mit",
"region:us"
] | 2024-07-20T15:29:11Z |
2025-01-19T05:26:20+00:00
| 13 | 0 |
---
base_model: facebook/mbart-large-50
language:
- ne
library_name: peft
license: mit
metrics:
- rouge
pipeline_tag: summarization
tags:
- generated_from_trainer
model-index:
- name: mbart-large-50_Nepali_News_Summarization_QLoRA_8bit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50_Nepali_News_Summarization_QLoRA_8bit
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3724
- Rouge-1 R: 0.3809
- Rouge-1 P: 0.3877
- Rouge-1 F: 0.3745
- Rouge-2 R: 0.2144
- Rouge-2 P: 0.2176
- Rouge-2 F: 0.2093
- Rouge-l R: 0.3702
- Rouge-l P: 0.3766
- Rouge-l F: 0.364
- Gen Len: 14.0747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## How to use?
```python
from transformers import MBartForConditionalGeneration, AutoTokenizer
from peft import PeftModel
import torch

model_name = 'caspro/mbart-large-50_Nepali_News_Summarization_QLoRA_8bit'
# load the 8-bit quantized base model; the LoRA adapter from model_name is attached below
base_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50", load_in_8bit=True)
# load the tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
prefix = "सारांशमा: "
# preprocessing used during fine-tuning (not needed for inference below)
def preprocess_function(examples):
    inputs = [prefix + doc for doc in examples["text"]]
    model_inputs = tokenizer(inputs, max_length=1024, truncation=True)
    #tokenizer.set_tgt_lang_special_tokens('ne_NP')
    labels = tokenizer(text_target=examples["title"], max_length=20, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
text = 'सारांशमा: नेपालमा उपलब्ध कुल विद्युतको एक चौथाइभन्दा बढी विद्युत प्राविधिक र अप्राविधिक रुपमा चुहावट हुने तथ्यका माझ त्यसलाई नियन्त्रण गर्न प्राधिकरणले एउटा समिति बनाएर अनुसन्धान पनि थालेको थियो। \n\nमहानगरीय प्रहरी अपराध महाशाखाका अनुसार नेपाल विद्युत प्राधिकरणका कतिपय कर्मचारीको टोलीले गाडीमा आवश्यक सबै उपकरणहरु बोकेर ग्राहकको घरघरमा पुगेर विद्युत चोर्नमा सघाउने गरेको भेटिएको हो। \n\nकडाइ\n\nआइतबार प्राधिकरणका वर्तमान र पूर्व कर्मचारी गरी १२ जना तथा चारजना व्यापारीलाई पक्राउ गरेको प्रहरी अपराध महाशाखाले प्रारम्भिक अनुसन्धानमा चोरीको गिरोहको आकार अझ ठूलो हुन सक्ने जनाएको छ। \n\nमहाशाखाका प्रमुख सर्वेन्द्र खनालले भने, "अहिलेसम्मको प्रारम्भिक अनुसन्धानमा केही कलकारखाना, केही उद्योगहरु पनि चोरीमा संलग्न भएको देखिन्छ। यसभन्दा बाहेक बाँकी अरुपनि छन्। तिनीहरुलाई हामी जतिसक्दो चाँडो कानुनको दायरामा ल्याउँछौं।" \n\nनेपाल विद्युत प्राधिकरणका अनुसार नेपालमा हाल उपलब्ध कुल विद्युतको करिब २६ प्रतिशत विद्युत चुहावट हुने गर्दछ। \n\nत्यसमा १२ प्रतिशत प्राविधिक तथा १४ प्रतिशत भन्दा बढी अप्राविधिक हुने गरेको छ। \n\nकमसल खालको विद्युतीय सामाग्री गर्दा हुने चुहावट प्राविधिक हो।\n\nमिटरमा कम खपत देखाउने गरी विद्युत चोरी भए त्यो चाहिँ अप्राविधिक चुहावटमा पर्छ। \n\nप्रयास\n\nचोरी नियन्त्रण गर्न उर्जा मन्त्रालयले छुट्टै समिति पनि गठन गरिएको छ। \n\nचोरी नियन्त्रणको अहिले थालिएको अभियानमा नेपाल विद्युत प्राधिकरण र उर्जा मन्त्रालयले सघाएको पनि प्रहरीले जनाएको छ। \n\nपक्राउ गरिएकाहरुलाई ठगी मुद्दा लगाइएको छ।\n\nतर उनीहरुलाई विद्युत चोरी ऐन जस्ता आवश्यक ऐन अन्तर्गत कारबाही अगाडि बढाउन सक्ने महाशाखा प्रमुख तथा एसएसपी खनाल बताउँछन्। \n\nउनले भने, "यसमा धेरै पक्षको संलग्नता भएकोले एकैथरी कानुनबाट सम्बोधन नहुन सक्छ। तर सबैजना ठगीसँग सम्बन्धित हुने भएकोले यो कानुनले समेट्छ। त्यही अनुरुप नै हामीले अनुसन्धान शुरु गरेका छौं"। \n\nविद्युत चोरी गर्नेमा सर्वसाधारण उद्योगीहरु र व्यापारीहरु रहेको बताइएको छ।\n\nविद्युतको चोरी र चुहावट रोक्ने भनिदैं आएपनि हालसम्म त्यो प्रभावकारी देखिएको छैन। \n\n'
lora_model = PeftModel.from_pretrained(base_model, model_name)
# the 8-bit base model is already placed on the GPU by bitsandbytes;
# `.to(device)` is not supported for 8-bit models, so we only record the
# model's device for moving inputs
device = lora_model.device

def generate_summary(text):
    inputs = tokenizer(text, return_tensors="pt", max_length=1024, truncation=True)
    # move the input tensors to the same device as the model
    inputs = inputs.to(device)
    summary_ids = lora_model.generate(inputs['input_ids'], num_beams=4, max_length=128, early_stopping=True)
    summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
    return summary

summary = generate_summary(text)  # `text` above already starts with the prefix
summary
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 R | Rouge-1 P | Rouge-1 F | Rouge-2 R | Rouge-2 P | Rouge-2 F | Rouge-l R | Rouge-l P | Rouge-l F | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-------:|
| 1.5604 | 1.0 | 10191 | 1.5916 | 0.3605 | 0.3694 | 0.3536 | 0.1948 | 0.2008 | 0.19 | 0.3501 | 0.3586 | 0.3433 | 14.7262 |
| 1.5482 | 2.0 | 20382 | 1.3992 | 0.3673 | 0.3879 | 0.3672 | 0.2034 | 0.2149 | 0.202 | 0.3577 | 0.3775 | 0.3575 | 13.7928 |
| 1.2397 | 3.0 | 30573 | 1.3724 | 0.3809 | 0.3877 | 0.3745 | 0.2144 | 0.2176 | 0.2093 | 0.3702 | 0.3766 | 0.364 | 14.0747 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50_Nepali_News_Summarization_QLoRA_8bit
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3724
- Rouge-1 R: 0.3809
- Rouge-1 P: 0.3877
- Rouge-1 F: 0.3745
- Rouge-2 R: 0.2144
- Rouge-2 P: 0.2176
- Rouge-2 F: 0.2093
- Rouge-l R: 0.3702
- Rouge-l P: 0.3766
- Rouge-l F: 0.364
- Gen Len: 14.0747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## How to use?
```python
from transformers import MBartForConditionalGeneration, AutoTokenizer
from peft import PeftModel
import torch

model_name = 'caspro/mbart-large-50_Nepali_News_Summarization_QLoRA_8bit'
# load the 8-bit quantized base model; the LoRA adapter from model_name is attached below
base_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50", load_in_8bit=True)
# load the tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
prefix = "सारांशमा: "
# preprocessing used during fine-tuning (not needed for inference below)
def preprocess_function(examples):
    inputs = [prefix + doc for doc in examples["text"]]
    model_inputs = tokenizer(inputs, max_length=1024, truncation=True)
    #tokenizer.set_tgt_lang_special_tokens('ne_NP')
    labels = tokenizer(text_target=examples["title"], max_length=20, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
text = 'सारांशमा: नेपालमा उपलब्ध कुल विद्युतको एक चौथाइभन्दा बढी विद्युत प्राविधिक र अप्राविधिक रुपमा चुहावट हुने तथ्यका माझ त्यसलाई नियन्त्रण गर्न प्राधिकरणले एउटा समिति बनाएर अनुसन्धान पनि थालेको थियो। \n\nमहानगरीय प्रहरी अपराध महाशाखाका अनुसार नेपाल विद्युत प्राधिकरणका कतिपय कर्मचारीको टोलीले गाडीमा आवश्यक सबै उपकरणहरु बोकेर ग्राहकको घरघरमा पुगेर विद्युत चोर्नमा सघाउने गरेको भेटिएको हो। \n\nकडाइ\n\nआइतबार प्राधिकरणका वर्तमान र पूर्व कर्मचारी गरी १२ जना तथा चारजना व्यापारीलाई पक्राउ गरेको प्रहरी अपराध महाशाखाले प्रारम्भिक अनुसन्धानमा चोरीको गिरोहको आकार अझ ठूलो हुन सक्ने जनाएको छ। \n\nमहाशाखाका प्रमुख सर्वेन्द्र खनालले भने, "अहिलेसम्मको प्रारम्भिक अनुसन्धानमा केही कलकारखाना, केही उद्योगहरु पनि चोरीमा संलग्न भएको देखिन्छ। यसभन्दा बाहेक बाँकी अरुपनि छन्। तिनीहरुलाई हामी जतिसक्दो चाँडो कानुनको दायरामा ल्याउँछौं।" \n\nनेपाल विद्युत प्राधिकरणका अनुसार नेपालमा हाल उपलब्ध कुल विद्युतको करिब २६ प्रतिशत विद्युत चुहावट हुने गर्दछ। \n\nत्यसमा १२ प्रतिशत प्राविधिक तथा १४ प्रतिशत भन्दा बढी अप्राविधिक हुने गरेको छ। \n\nकमसल खालको विद्युतीय सामाग्री गर्दा हुने चुहावट प्राविधिक हो।\n\nमिटरमा कम खपत देखाउने गरी विद्युत चोरी भए त्यो चाहिँ अप्राविधिक चुहावटमा पर्छ। \n\nप्रयास\n\nचोरी नियन्त्रण गर्न उर्जा मन्त्रालयले छुट्टै समिति पनि गठन गरिएको छ। \n\nचोरी नियन्त्रणको अहिले थालिएको अभियानमा नेपाल विद्युत प्राधिकरण र उर्जा मन्त्रालयले सघाएको पनि प्रहरीले जनाएको छ। \n\nपक्राउ गरिएकाहरुलाई ठगी मुद्दा लगाइएको छ।\n\nतर उनीहरुलाई विद्युत चोरी ऐन जस्ता आवश्यक ऐन अन्तर्गत कारबाही अगाडि बढाउन सक्ने महाशाखा प्रमुख तथा एसएसपी खनाल बताउँछन्। \n\nउनले भने, "यसमा धेरै पक्षको संलग्नता भएकोले एकैथरी कानुनबाट सम्बोधन नहुन सक्छ। तर सबैजना ठगीसँग सम्बन्धित हुने भएकोले यो कानुनले समेट्छ। त्यही अनुरुप नै हामीले अनुसन्धान शुरु गरेका छौं"। \n\nविद्युत चोरी गर्नेमा सर्वसाधारण उद्योगीहरु र व्यापारीहरु रहेको बताइएको छ।\n\nविद्युतको चोरी र चुहावट रोक्ने भनिदैं आएपनि हालसम्म त्यो प्रभावकारी देखिएको छैन। \n\n'
lora_model = PeftModel.from_pretrained(base_model, model_name)
# the 8-bit base model is already placed on the GPU by bitsandbytes;
# `.to(device)` is not supported for 8-bit models, so we only record the
# model's device for moving inputs
device = lora_model.device

def generate_summary(text):
    inputs = tokenizer(text, return_tensors="pt", max_length=1024, truncation=True)
    # move the input tensors to the same device as the model
    inputs = inputs.to(device)
    summary_ids = lora_model.generate(inputs['input_ids'], num_beams=4, max_length=128, early_stopping=True)
    summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
    return summary

summary = generate_summary(text)  # `text` above already starts with the prefix
summary
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 R | Rouge-1 P | Rouge-1 F | Rouge-2 R | Rouge-2 P | Rouge-2 F | Rouge-l R | Rouge-l P | Rouge-l F | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-------:|
| 1.5604 | 1.0 | 10191 | 1.5916 | 0.3605 | 0.3694 | 0.3536 | 0.1948 | 0.2008 | 0.19 | 0.3501 | 0.3586 | 0.3433 | 14.7262 |
| 1.5482 | 2.0 | 20382 | 1.3992 | 0.3673 | 0.3879 | 0.3672 | 0.2034 | 0.2149 | 0.202 | 0.3577 | 0.3775 | 0.3575 | 13.7928 |
| 1.2397 | 3.0 | 30573 | 1.3724 | 0.3809 | 0.3877 | 0.3745 | 0.2144 | 0.2176 | 0.2093 | 0.3702 | 0.3766 | 0.364 | 14.0747 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
{"base_model": "facebook/mbart-large-50", "language": ["ne"], "library_name": "peft", "license": "mit", "metrics": ["rouge"], "pipeline_tag": "summarization", "tags": ["generated_from_trainer"], "model-index": [{"name": "mbart-large-50_Nepali_News_Summarization_QLoRA_8bit", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,429 |
kawinduwijewardhane/text-summarization-AI
|
kawinduwijewardhane
|
summarization
|
[
"transformers",
"summarization",
"endpoints_compatible",
"region:us"
] | 2023-08-20T12:12:05Z |
2023-08-20T12:12:45+00:00
| 0 | 0 |
---
library_name: transformers
pipeline_tag: summarization
---
| null |
Non_BioNLP
|
{"library_name": "transformers", "pipeline_tag": "summarization"}
|
task
|
[
"SUMMARIZATION"
] | 45,430 |
|
quantisan/bge-small-en-v1.5-93dataset
|
quantisan
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"model-index",
"region:us"
] | 2024-10-10T04:58:09Z |
2024-10-10T04:58:13+00:00
| 4 | 0 |
---
base_model: BAAI/bge-small-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'metrics.statistics.average letter per word: 4.83, 4.41, 4.65, 4.77, 4.56,
4.36, 4.94, 4.78, 4.68, 4.61, 4.92, 4.6, 4.59, 4.95, 4.88, 4.63, 4.9, 4.31, 4.81,
4.8'
- text: 'company.relationship: founder, None, founder/chairman, Relation, relation,
CEO, chairman, investor, founder and CEO, founder/CEO, owner, chairman of management
committee, founder and chairman, Chairman and Chief Executive Officer, general
director, executive chairman, Chairman/founder, founder, chairman, ceo, former
chairman and CEO, relation and chairman'
- text: 'variety: Western, Eastern'
- text: 'Data.Cholesterol: 14, 8, 12, 5, 2, 3, 4, 11, 15, 29, 34, 0, 6, 10, 13, 16,
17, 7, 59, 20'
- text: 'Date.Full: 8/26/1990, 3/24/1991, 3/31/1991, 4/7/1991, 4/14/1991, 4/21/1991,
4/28/1991, 5/5/1991, 5/12/1991, 5/19/1991, 5/26/1991, 6/2/1991, 6/9/1991, 6/16/1991,
6/23/1991, 6/30/1991, 7/7/1991, 7/14/1991, 7/21/1991, 7/28/1991'
inference: true
model-index:
- name: SetFit with BAAI/bge-small-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.6705461056401074
name: Accuracy
---
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
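A minimal inference sketch with the `setfit` library (the labels are the column-type taxonomy listed below):

```python
from setfit import SetFitModel

# load the trained classifier from the Hub and predict column types
model = SetFitModel.from_pretrained("quantisan/bge-small-en-v1.5-93dataset")
preds = model.predict([
    "variety: Western, Eastern",
    "Date.Full: 8/26/1990, 3/24/1991, 3/31/1991",
])
print(preds)
```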
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 43 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Latitude | <ul><li>'Latitude: 48,87217700, 48,85543800, 48,87416100, 48,87322500, 48,87422500, 48,84189000, 48,86617200, 48,87112100, 48,86552200, 48,87623100, 48,85609000, 48,85642700, 48,86853300, 48,87465400, 48,86995000, 48,85654000, 48,87022000, 48,86962600, 48,85663200, 48,83476200'</li><li>'lat: 40.7940823884086, 40.7948509408039, 40.7667178072558, 40.7697032606755, 40.797533370163, 40.7902561000937, 40.7693045133578, 40.7942883045566, 40.7729752391435, 40.7903128889029, 40.7762126854894, 40.7725908847499, 40.7931811701082, 40.7917367820255, 40.7829723919744, 40.7742879599026, 40.7823507678183, 40.7919669739962, 40.7702795904962, 40.7698124821507'</li><li>'lat: 83.92115933668057, 89.53277415300325, 85.37696959908148, 85.44622332365381, 84.28538158324413, 87.96664079539569, 86.11414393337242, 85.43864590316868, 87.65474214915454, 81.67725407101064, 90.47817498708324, 89.87993043195812, 81.56791356025577, 88.48808747114165, 89.3843538611984, 87.5218603199103, 83.99238693700401, 82.50195719071465, 85.84865551792468, 87.92121711225418'</li></ul> |
| Categorical | <ul><li>'SUSPECT_RACE_DESCRIPTION: (null), WHITE, BLACK HISPANIC, BLACK, WHITE HISPANIC, ASIAN/PAC.ISL, AMER IND, MALE'</li><li>'OFFICER_IN_UNIFORM_FLAG: Y, N, ('</li><li>'SUSPECT_HAIR_COLOR: BLK, BRO, BLD, XXX, (null), GRY, WHI, BLN, RED, ZZZ, PLE, GRN, SDY, ORG, BK, BA, BR, XX'</li></ul> |
| Day of Month | <ul><li>'Date.Day: 26, 24, 31, 7, 14, 21, 28, 5, 12, 19, 2, 9, 16, 23, 30, 4, 11, 18, 25, 1'</li><li>'Incident.Date.Day: 2, 3, 4, 5, 6, 7, 8, 9, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23'</li><li>'bibliography.publication.day: 1, 17, 16, 20, 29, 10, 14, 11, 9, 18, 19, 22, 25, 15, 6, 28, 27, 2, 12, 21'</li></ul> |
| Year | <ul><li>'Year: 2020, 2019, 2018, 2017, 2016, 2015, 2014'</li><li>'YEAR2: 2017'</li><li>'artist.birth.year: 1930, 1852, 1898, 1760, 1935, 1964, 1967, 1940, 1947, 1938, 1728, 1868, 1927, 1917, 1878, 1895, 1904, 1912, 1899, 1767'</li></ul> |
| Integer | <ul><li>'unfavorable: 51.0, 55.0, 47.0, 60.0, 61.0, 56.0, 58.0, 57.0, 59.0, 61.9, 62.0, 54.0, 52.0, 53.0, 66.0, 67.0, 63.0, 49.0, 52.42, 56.9'</li><li>'Data.Totals.Violent.Rape: 281, 252, 218, 192, 397, 367, 341, 371, 396, 494, 637, 661, 660, 751, 811, 738, 794, 929, 954, 1037'</li><li>'AVG: 93429, 78009, 76358, 76258, 75606, 73515, 72885, 71625, 71624, 70534, 69914, 69879, 69671, 68950, 68857, 68846, 68833, 68387, 68099, 67431'</li></ul> |
| Floating Point Number | <ul><li>'dimensions.width: 0.0, 305.0, 250.0, 756.0, 2095.0, 480.0, 858.0, 558.0, 628.0, 302.0, 1226.0, 1270.0, 940.0, 2276.0, 864.0, 1420.0, 330.0, 267.0, 1225.0, 660.0'</li><li>'Data.Fiber: 0.0, 0.2, 0.3, 0.4, 0.7, 0.1, 1.0, 0.6, 0.5, 1.9, 1.1, 2.3, 0.8, 1.6, 0.9, 1.2, 37.0, 4.5, 9.1, 1.5'</li><li>' "Weight(Pounds)": 112.9925, 136.4873, 153.0269, 142.3354, 144.2971, 123.3024, 141.4947, 136.4623, 112.3723, 120.6672, 127.4516, 114.143, 125.6107, 122.4618, 116.0866, 139.9975, 129.5023, 142.9733, 137.9025, 124.0449'</li></ul> |
| Percentage | <ul><li>'pct: 51.0, 48.0, 44.2, 49.0, 48.4, 49.2, 1.4, 47.0, 48.2, 1.6, 50.0, 42.0, 1.0, 40.0, 53.0, 43.0, 46.0, 52.0, 45.0, 47.3'</li><li>'pct: 51.6, 41.4, 45.7, 46.8, 5.2, 46.0, 2.0, 48.0, 47.0, 44.0, 4.0, 5.0, 53.0, 55.5, 32.2, 54.7, 40.5, 54.3, 43.7, 45.0'</li><li>'PCT.2: 95.5, 96.5, 94.0, 99.4, 97.6, 100.9, 101.0, 101.1, 96.9, 98.0, 97.9, 98.1, 94.8, 100.7, 99.3, 97.1, 98.9, 98.7, 96.1, 99.7'</li></ul> |
| Secondary Address | <ul><li>'STOP_LOCATION_APARTMENT: (null), 2, 7, 4TH, 2FL, ROOF, ROOF T, BASEME, LOBBY, 17TH, 2 FLOO, 12, 1701, HALLWA, 1E, 5D, SIDEWA, FRONT, 12C, None'</li></ul> |
| U.S. State Abbreviation | <ul><li>'abbrev: AL, AK, AZ, AR, CA, CO, CT, DE, DC, FL, GA, HI, ID, IL, IN, IA, KS, KY, LA, ME'</li><li>'recipient_st: AK, AL, AR, AZ, CA, CO, CT, DC, FL, GA, HI, IA, ID, IL, IN, KA, KS, KY, LA, MA'</li><li>'Incident.Location.State: WA, OR, KS, CA, CO, OK, AZ, IA, PA, TX, OH, LA, MT, UT, AR, IL, NV, NM, MN, MO'</li></ul> |
| Numeric identifier | <ul><li>'pollster_id: 568, 1189, 1508, 1302, 1597, 396, 458, 1699, 1361, 169, 1075, 1406, 241, 1523, 399, 1351, 1528, 1365, 1347, 57'</li><li>'SUPERVISING_OFFICER_COMMAND_CODE: 574, 863, 1, 861, 5, 6, 234, 849, 136, 7, 804, 750, 868, 9, 108, 13, 10, 181, 598, 230'</li><li>'pollster_rating_id: 245, 609, 48, 437, 88, 599, 600, 263, 280, 314, 124, 357, 667, 317, 494, 325, 522, 556, 593, 216'</li></ul> |
| Month Number | <ul><li>'mp_month: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12'</li><li>'bibliography.publication.month: 6, 11, 3, 8, 1, 10, 7, 2, 4, 5, 9, 12'</li><li>'Incident.Date.Month: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12'</li></ul> |
| Date | <ul><li>'Date.Full: 8/26/1990, 3/24/1991, 3/31/1991, 4/7/1991, 4/14/1991, 4/21/1991, 4/28/1991, 5/5/1991, 5/12/1991, 5/19/1991, 5/26/1991, 6/2/1991, 6/9/1991, 6/16/1991, 6/23/1991, 6/30/1991, 7/7/1991, 7/14/1991, 7/21/1991, 7/28/1991'</li><li>'STOP_FRISK_DATE: 1/16/2017, 2/8/2017, 2/20/2017, 2/21/2017, 2/17/2017, 2/25/2017, 3/3/2017, 3/16/2017, 3/31/2017, 4/2/2017, 4/4/2017, 3/24/2017, 4/6/2017, 4/18/2017, 5/6/2017, 5/10/2017, 5/17/2017, 5/7/2017, 5/24/2017, 6/8/2017'</li><li>'disb_dt: 15-Sep-15, 16-Nov-15, 30-Sep-15, 18-Dec-15, 22-Oct-15, 3-Dec-15, 23-Nov-15, 29-Feb-16, 18-Mar-16, 27-Feb-16, 17-Feb-16, 25-Feb-16, 25-Jan-16, 14-Jan-16, 12-Jan-16, 22-Jan-16, 1-Jan-16, 3-Jan-16, 6-Jan-16, 11-Jan-16'</li></ul> |
| Full Name | <ul><li>'candidate_name: Abigail A. Spanberger, Nicholas J. Freitas, Kara Eastman, Don Bacon, Tyler Schaeffer, Jill Schupp, Ann Wagner, Martin Schulte, Dana Balter, John Katko, Steve Williams, Christina Hale, Victoria Spartz, Kenneth Tucker, Joyce Ann Elliott, French Hill, Jared Forrest Golden, Dale John Crafts, Marie Newman, Mike Fricilone'</li><li>'bibliography.author.name: Austen, Jane, Gilman, Charlotte Perkins, Carroll, Lewis, Shelley, Mary Wollstonecraft, Kafka, Franz, Twain, Mark, Wilde, Oscar, Douglass, Frederick, Ibsen, Henrik, Melville, Herman, Doyle, Arthur Conan, Dickens, Charles, Joyce, James, Swift, Jonathan, Stoker, Bram, Machiavelli, Niccolo, Tolstoy, Leo, graf, Grimm, Wilhelm, Vatsyayana, Unknown'</li><li>'Name: Cristiano Ronaldo, L. Messi, Neymar, L. Suárez, M. Neuer, R. Lewandowski, De Gea, E. Hazard, T. Kroos, G. Higuaín, Sergio Ramos, K. De Bruyne, T. Courtois, A. Sánchez, L. Modrić, G. Bale, S. Agüero, G. Chiellini, G. Buffon, P. Dybala'</li></ul> |
| Day of Week | <ul><li>'DAY2: Monday, Wednesday, Tuesday, Friday, Saturday, Thursday, Sunday'</li><li>'DAY2: Monday, Wednesday, Tuesday, Friday, Saturday, Thursday, Sunday'</li><li>'day: Sun, Sat, Thur, Fri'</li></ul> |
| Timestamp | <ul><li>'created_at: 12/30/20 12:29, 11/2/20 21:26, 11/2/20 22:16, 11/2/20 21:32, 11/2/20 22:01, 11/2/20 22:18, 11/2/20 22:26, 11/2/20 23:31, 11/2/20 21:49, 10/31/20 17:22, 11/1/20 14:39, 11/2/20 08:22, 10/29/20 14:16, 10/31/20 08:36, 10/29/20 11:08, 10/29/20 09:00, 10/29/20 16:13, 10/29/20 16:14, 10/30/20 15:45, 10/28/20 09:24'</li><li>'created_at: 12/21/22 09:28, 12/21/22 12:52, 12/16/22 18:27, 12/16/22 21:10, 12/14/22 10:39, 12/14/22 08:22, 12/15/22 18:31, 12/14/22 14:13, 12/13/22 09:36, 12/14/22 08:23, 12/14/22 15:40, 12/15/22 09:40, 12/7/22 10:47, 12/7/22 08:17, 12/7/22 17:56, 12/15/22 09:50, 11/30/22 09:25, 11/23/22 08:46, 12/1/22 09:39, 12/5/22 08:29'</li><li>'created_at: 12/21/22 09:28, 12/21/22 12:52, 12/16/22 18:27, 12/16/22 21:10, 12/14/22 10:39, 12/14/22 08:22, 12/15/22 18:31, 12/14/22 14:13, 12/13/22 09:36, 12/14/22 08:23, 12/14/22 15:40, 12/15/22 09:40, 12/7/22 10:47, 12/7/22 08:17, 12/7/22 17:56, 12/15/22 09:50, 11/30/22 09:25, 11/23/22 08:46, 12/1/22 09:39, 12/5/22 08:29'</li></ul> |
| URL | <ul><li>'url: https://docs.cdn.yougov.com/c5o6xiw8t9/econtoplines.pdf, https://docs.cdn.yougov.com/by8wjw1hur/econTabReport.pdf, https://subscriber.politicopro.com/newsletter/2022/12/who-supports-the-payment-pause-00072673, https://news.yahoo.com/new-poll-shows-stark-partisan-divide-when-it-comes-to-americans-view-of-schools-132510314.html, https://harvardharrispoll.com/wp-content/uploads/2022/12/HHP_Dec2022_KeyResults.pdf, https://echeloninsights.com/in-the-news/december-2022-omnibus-2/, https://docs.cdn.yougov.com/b94ttrxy3v/econtoplines.pdf, https://docs.cdn.yougov.com/urts2xadfd/econTabReport.pdf, https://www.foxnews.com/politics/fox-news-poll-americans-show-little-enthusiasm-biden-trump-rematch-2024, https://poll.qu.edu/poll-release?releaseid=3863, https://www.usatoday.com/story/news/politics/2022/12/13/trump-support-gop-2024-presidential-race-poll/10882346002/, https://www.wsj.com/articles/ron-desantis-holds-early-lead-over-donald-trump-among-gop-primary-voters-wsj-poll-shows-11670989311?mod=hp_lead_pos5, https://www.cnn.com/2022/12/14/politics/biden-trump-2024-poll/index.html, https://docs.cdn.yougov.com/dshfq4wqyr/econtoplines.pdf, https://docs.cdn.yougov.com/qsanp37uhh/econTabReport.pdf, https://www.yahoo.com/news/poll-trump-loses-ground-with-republicans-after-kanye-west-nick-fuentes-dinner-100204385.html, https://docs.cdn.yougov.com/regijt79ge/November_Georgia_Ads_joined_earlier.pdf, https://docs.cdn.yougov.com/xcvx6iu6em/November_Georgia_Ads_joined.pdf, https://news.gallup.com/poll/406892/party-images-stable-midterm-elections.aspx, https://docs.cdn.yougov.com/2zebcvjoec/econtoplines.pdf'</li><li>'Flag: https://cdn.sofifa.org/flags/38.png, https://cdn.sofifa.org/flags/52.png, https://cdn.sofifa.org/flags/54.png, https://cdn.sofifa.org/flags/60.png, https://cdn.sofifa.org/flags/21.png, https://cdn.sofifa.org/flags/37.png, https://cdn.sofifa.org/flags/45.png, https://cdn.sofifa.org/flags/7.png, https://cdn.sofifa.org/flags/55.png, https://cdn.sofifa.org/flags/10.png, https://cdn.sofifa.org/flags/50.png, https://cdn.sofifa.org/flags/27.png, https://cdn.sofifa.org/flags/44.png, https://cdn.sofifa.org/flags/18.png, https://cdn.sofifa.org/flags/115.png, https://cdn.sofifa.org/flags/46.png, https://cdn.sofifa.org/flags/34.png, https://cdn.sofifa.org/flags/13.png, https://cdn.sofifa.org/flags/43.png, https://cdn.sofifa.org/flags/14.png'</li><li>'Photo: https://cdn.sofifa.org/48/18/players/20801.png, https://cdn.sofifa.org/48/18/players/158023.png, https://cdn.sofifa.org/48/18/players/190871.png, https://cdn.sofifa.org/48/18/players/176580.png, https://cdn.sofifa.org/48/18/players/167495.png, https://cdn.sofifa.org/48/18/players/188545.png, https://cdn.sofifa.org/48/18/players/193080.png, https://cdn.sofifa.org/48/18/players/183277.png, https://cdn.sofifa.org/48/18/players/182521.png, https://cdn.sofifa.org/48/18/players/167664.png, https://cdn.sofifa.org/48/18/players/155862.png, https://cdn.sofifa.org/48/18/players/192985.png, https://cdn.sofifa.org/48/18/players/192119.png, https://cdn.sofifa.org/48/18/players/184941.png, https://cdn.sofifa.org/48/18/players/177003.png, https://cdn.sofifa.org/48/18/players/173731.png, https://cdn.sofifa.org/48/18/players/153079.png, https://cdn.sofifa.org/48/18/players/138956.png, https://cdn.sofifa.org/48/18/players/1179.png, https://cdn.sofifa.org/48/18/players/211110.png'</li></ul> |
| Street Address | <ul><li>'STOP_LOCATION_FULL_ADDRESS: 180 GREENWICH STREET, WALL STREET && BROADWAY, 75 GREENE STREET, 429 WEST BROADWAY, WEST STREET && CHAMBERS STREET, CHAMBERS STREET && WEST BROADWAY, CORTLANDT STREET && CHURCH STREET, 111 FULTON STREET, 25 CLIFF STREET, SPRING STREET && AVENUE OF THE AMERICAS, 130 CEDAR STREET, 225 LIBERTY STREET, BARCLAY STREET && WEST STREET, 153 GREENWICH STREET, BATTERY PLACE && STATE STREET, MERCER STREET && BROOME STREET, WEST STREET && CANAL STREET, BROADWAY && PRINCE STREET, WEST BROADWAY && AVENUE OF THE AMERICAS, 3 SOUTH STREET'</li><li>'STOP_LOCATION_FULL_ADDRESS: 180 GREENWICH STREET, WALL STREET && BROADWAY, 75 GREENE STREET, 429 WEST BROADWAY, WEST STREET && CHAMBERS STREET, CHAMBERS STREET && WEST BROADWAY, CORTLANDT STREET && CHURCH STREET, 111 FULTON STREET, 25 CLIFF STREET, SPRING STREET && AVENUE OF THE AMERICAS, 130 CEDAR STREET, 225 LIBERTY STREET, BARCLAY STREET && WEST STREET, 153 GREENWICH STREET, BATTERY PLACE && STATE STREET, MERCER STREET && BROOME STREET, WEST STREET && CANAL STREET, BROADWAY && PRINCE STREET, WEST BROADWAY && AVENUE OF THE AMERICAS, 3 SOUTH STREET'</li><li>'STOP_LOCATION_FULL_ADDRESS: 180 GREENWICH STREET, WALL STREET && BROADWAY, 75 GREENE STREET, 429 WEST BROADWAY, WEST STREET && CHAMBERS STREET, CHAMBERS STREET && WEST BROADWAY, CORTLANDT STREET && CHURCH STREET, 111 FULTON STREET, 25 CLIFF STREET, SPRING STREET && AVENUE OF THE AMERICAS, 130 CEDAR STREET, 225 LIBERTY STREET, BARCLAY STREET && WEST STREET, 153 GREENWICH STREET, BATTERY PLACE && STATE STREET, MERCER STREET && BROOME STREET, WEST STREET && CANAL STREET, BROADWAY && PRINCE STREET, WEST BROADWAY && AVENUE OF THE AMERICAS, 3 SOUTH STREET'</li></ul> |
| Country ISO Code | <ul><li>'Champion Nationality: AUS, FRA, GBR, NZL, USA, SRB, SUI, SWE, CZE, ESP, GER, NED, CRO, BRA, RUS'</li><li>'Runner-up Nationality: AUS, GBR, NZL, FRA, USA, RSA, CZE, ARG, GER, SUI, ESP, CRO, ROM, DEN, TCH, URS, CZ, SRB, CND, SWE'</li></ul> |
| Partial timestamp | <ul><li>'created_at: 12/17/20 21:39, 6/14/21 15:36, 11/2/20 09:02, 11/2/20 12:49, 11/2/20 19:02, 11/2/20 14:04, 11/2/20 17:37, 11/2/20 18:39, 11/2/20 18:40, 11/4/20 09:17, 11/4/20 10:29, 11/4/20 10:32, 11/4/20 10:38, 11/4/20 10:39, 11/28/20 21:14, 11/2/20 21:25, 11/2/20 21:32, 11/2/20 22:12, 11/2/20 23:30, 11/2/20 23:33'</li><li>'bibliography.publication.full: June, 1998, November, 1999, March, 1994, June 17, 2008, August 16, 2005, August 20, 2006, August 29, 2006, January 10, 2006, March, 2001, June, 2001, October 14, 1892, July, 1998, July, 2003, January, 1994, October 1997, August 16, 2013, February 11, 2006, June 9, 2008, January 1, 1870, April, 2001'</li><li>'Rating.Experience: Below, Same, None, Above'</li></ul> |
| Longitude | <ul><li>'Longitude: 2,77228900, 2,77461100, 2,77370600, 2,77423900, 2,77654400, 2,79937600, 2,78064700, 2,77697400, 2,78928200, 2,78032200, 2,77731200, 2,77121300, 2,77167600, 2,78236500, 2,76694300, 2,77139500, 2,76872200, 2,76741500, 2,77156700, 2,82065100'</li><li>'Longitude: 6.85, 2.97, 2.53, -4.02, 10.87, 11.93, 12.7, 14.139, 14.426, 13.897, 14.83, 15.213, 15.064, 14.933, 14.962, 14.999, 12.02, 14.399, 23.336, 24.439'</li><li>'long: 40.65531753386127, 35.52146509142811, 41.04610174058556, 37.25718863973695, 37.73038191275334, 38.78755702518432, 36.31538469187874, 38.3542649521305, 40.33741738725765, 36.831052736369664, 37.39711396680899, 38.28297641253209, 40.25037415629944, 39.12501528359793, 40.179108531876246, 38.165405118101205, 40.28234452941448, 37.1590112746327, 40.08056518798263, 38.45329795732872'</li></ul> |
| Country Name | <ul><li>'Geography: United States'</li><li>'location.citizenship: United States, Mexico, Switzerland, Spain, Hong Kong, Taiwan, Germany, Saudi Arabia, Japan, Sweden, France, Canada, Philippines, Indonesia, South Korea, Malaysia, Italy, Singapore, Nigeria, Brazil'</li><li>'Nation: Afghanistan, Albania, Algeria, Andorra, Angola, Antigua and Barbuda, Argentina, Armenia, Australia, Austria, Azerbaijan, Bahamas, The, Bahrain, Bangladesh, Barbados, Belarus, Belgium, Belize, Benin, Bhutan'</li></ul> |
| Boolean | <ul><li>'nationwide_batch: False'</li><li>'chasing: False, True'</li><li>'ID Workforce Status: True'</li></ul> |
| Short text | <ul><li>'data.title: Backs, Illustration to Judith Shakespeare, Tri-Boro Barber Shop, Portrait of the Engraver Francesco Bartolozzi, Lady in Niche, Assassinations, Zebe, TRANSFERENCE ZONE, Five Sunsets in One Hour, Cartmel Fell, Composition: River in a Gorge, Winters Sleep, Christs Cross and Adams Tree, Space Construction with a Spiral, Toy Sailing Boats, the Round Pond, The Mutilated, The Cruise, Cut Bottle Relief, The Struggle, Figures in a Garden'</li><li>'specific_location: None, on tree stump, on tree roots, under a tree, in b/w trees, tree, Branch, bush, hiding in bushes, On a rock, trash can, Under bench, near 65th St arch, in the European Beech, on tree knob, Behind fence, tree, tree near large rock on Bridle Path, Climbing tree, "FIELD", bottom of tree'</li><li>"Facility.Name: Southeast Alabama Medical Center, Marshall Medical Center South, Eliza Coffee Memorial Hospital, Mizell Memorial Hospital, Crenshaw Community Hospital, St Vincent's East, Dekalb Regional Medical Center, Shelby Baptist Medical Center, Callahan Eye Hospital, Helen Keller Memorial Hospital, Dale Medical Center, Floyd Cherokee Medical Center, Baptist Medical Center South, Jackson Hospital & Clinic Inc, East Alabama Medical Center, Tanner Medical Center-east Alabama, University Of Alabama Hospital, Community Hospital Inc, Cullman Regional Medical Center, Andalusia Health"</li></ul> |
| Slug | <ul><li>'Slug County: baldwin-county-al, calhoun-county-al, coffee-county-al, colbert-county-al, covington-county-al, cullman-county-al, dale-county-al, dallas-county-al, etowah-county-al, jackson-county-al, jefferson-county-al, lee-county-al, limestone-county-al, madison-county-al, marshall-county-al, mobile-county-al, montgomery-county-al, perry-county-al, pike-county-al, randolph-county-al'</li><li>'Slug Geography: united-states'</li><li>'Slug Detailed Occupation: physicians, physicians-surgeons, lawyers-judges-magistrates-other-judicial-workers, medical-health-services-managers, chief-executives-legislators, veterinarians, social-community-service-managers, securities-commodities-financial-services-sales-agents, petroleum-mining-geological-engineers-including-mining-safety-engineers, economists, miscellaneous-social-scientists-including-survey-researchers-sociologists, natural-sciences-managers, geoscientists-and-hydrologists-except-geographers, detectives-criminal-investigators, judicial-law-clerks, other-psychologists, architectural-engineering-managers, education-administrators, astronomers-physicists, public-relations-and-fundraising-managers'</li></ul> |
| Postal Code | <ul><li>'Code postal: 77700.0, nan'</li></ul> |
| Structured field | <ul><li>'SUSPECT_HEIGHT: 5.8, 6.2, 5.1, 5, 5.11, 5.5, 5.4, 5.7, 6.1, 6, 6.3, 5.6, 5.9, 6.4, 5.2, 6.5, 5.3, 4.11, , 2.2'</li><li>'SUSPECT_HEIGHT: 5.8, 6.2, 5.1, 5, 5.11, 5.5, 5.4, 5.7, 6.1, 6, 6.3, 5.6, 5.9, 6.4, 5.2, 6.5, 5.3, 4.11, , 2.2'</li><li>'Score: 6-3, 7-5, 6-2, 6-3, 6-4, 6-2, 6-4, 6-2, 6-2, 6-4, 6-4, 7-5, 6-1, 3-6, 6-1, 3-6, 6-4, 6-3, 6-3, 4-6, 8-6, 8-6, 6-1, 6-3, 8-6, 6-2, 6-4, 6-3, 6-2, 4-6, 3-6, 6-4, 6-8, 1-6, 6-2, 6-2, 6-2, 4-6, 7-5, 6-4, 6-4, 1-6, 6-3, 3-6, 6-4, 6-4, 6-4, 6-3, 3-6, 6-0, 6-1, 7-5, 8-6, 6-4, 4-6, 6-2, 6-3, 7-5, 6-3, 6-0, 6-2, 6-8, 5-7, 8-6, 6-3, 10-8, 6-2, 8-6, 3-6, 6-1, 6-3, 6-2, 6-3, 7-5, 6-0'</li></ul> |
| Alphanumeric identifier | <ul><li>'ID Geography: 04000US04, 04000US06, 04000US32, 04000US41'</li><li>'ID County: 05000US01003, 05000US01015, 05000US01031, 05000US01033, 05000US01039, 05000US01043, 05000US01045, 05000US01047, 05000US01055, 05000US01071, 05000US01073, 05000US01081, 05000US01083, 05000US01089, 05000US01095, 05000US01097, 05000US01101, 05000US01105, 05000US01109, 05000US01111'</li><li>'ID Geography: 01000US, 04000US04, 04000US06, 04000US32, 04000US41, 31000US31080, 31000US40140, 31000US41740, 31000US41860'</li></ul> |
| Color | <ul><li>'color: Yellow, Black, White'</li><li>'primary_fur_color: None, Gray, Cinnamon, Black'</li><li>'highlight_fur_color: None, Cinnamon, White, Gray, Cinnamon, White, Gray, White, Black, Cinnamon, White, Black, Black, White, Black, Cinnamon, Gray, Black'</li></ul> |
| Month Name | <ul><li>'MONTH2: January, February, March, April, May, June, July, August, September, October, November, December'</li><li>'bibliography.publication.month name: June, November, March, August, January, October, July, February, April, May, September, December'</li><li>'MONTH2: January, February, March, April, May, June, July, August, September, October, November, December'</li></ul> |
| Currency Code | <ul><li>'cur_name: AFN, DZD, AOA, ARS, AMD, AZN, BDT, INR, BYR, XOF, BTN, BOB, BIF, KHR, XAF, CVE, CNY, COP, USD, CDF'</li></ul> |
| Time | <ul><li>'STOP_FRISK_TIME: 14:26:00, 11:10:00, 11:35:00, 13:20:00, 21:25:00, 20:00:00, 19:58:00, 13:15:00, 8:16:00, 18:44:00, 22:30:00, 4:45:00, 18:30:00, 0:00:00, 9:58:00, 11:15:00, 13:00:00, 8:00:00, 14:57:00, 4:15:00'</li><li>'STOP_FRISK_TIME: 14:26:00, 11:10:00, 11:35:00, 13:20:00, 21:25:00, 20:00:00, 19:58:00, 13:15:00, 8:16:00, 18:44:00, 22:30:00, 4:45:00, 18:30:00, 0:00:00, 9:58:00, 11:15:00, 13:00:00, 8:00:00, 14:57:00, 4:15:00'</li><li>'STOP_FRISK_TIME: 14:26:00, 11:10:00, 11:35:00, 13:20:00, 21:25:00, 20:00:00, 19:58:00, 13:15:00, 8:16:00, 18:44:00, 22:30:00, 4:45:00, 18:30:00, 0:00:00, 9:58:00, 11:15:00, 13:00:00, 8:00:00, 14:57:00, 4:15:00'</li></ul> |
| Last Name | <ul><li>'candidat: Bush, Perot, Clinton'</li><li>'answer: Spanberger, Freitas, Eastman, Bacon, Schaeffer, Schupp, Wagner, Schulte, Balter, Katko, Williams, Hale, Spartz, Tucker, Elliott, Hill, Golden, Crafts, Newman, Fricilone'</li></ul> |
| U.S. State | <ul><li>'Slug Geography: california'</li><li>'state_name: Alabama, Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, District of Columbia, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine'</li><li>'state: Alabama, Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland'</li></ul> |
| Street Name | <ul><li>'STOP_LOCATION_STREET_NAME: GREENWICH STREET, WALL STREET, GREENE STREET, WEST BROADWAY, WEST STREET, CHAMBERS STREET, CORTLANDT STREET, FULTON STREET, CLIFF STREET, SPRING STREET, CEDAR STREET, LIBERTY STREET, BARCLAY STREET, BATTERY PLACE, MERCER STREET, BROADWAY, SOUTH STREET, THOMPSON STREET, JAY STREET, CHURCH STREET'</li><li>'STOP_LOCATION_STREET_NAME: GREENWICH STREET, WALL STREET, GREENE STREET, WEST BROADWAY, WEST STREET, CHAMBERS STREET, CORTLANDT STREET, FULTON STREET, CLIFF STREET, SPRING STREET, CEDAR STREET, LIBERTY STREET, BARCLAY STREET, BATTERY PLACE, MERCER STREET, BROADWAY, SOUTH STREET, THOMPSON STREET, JAY STREET, CHURCH STREET'</li><li>'STOP_LOCATION_STREET_NAME: GREENWICH STREET, WALL STREET, GREENE STREET, WEST BROADWAY, WEST STREET, CHAMBERS STREET, CORTLANDT STREET, FULTON STREET, CLIFF STREET, SPRING STREET, CEDAR STREET, LIBERTY STREET, BARCLAY STREET, BATTERY PLACE, MERCER STREET, BROADWAY, SOUTH STREET, THOMPSON STREET, JAY STREET, CHURCH STREET'</li></ul> |
| AM/PM | <ul><li>'shift: PM, AM'</li></ul> |
| Occupation | <ul><li>'Detailed Occupation: Physicians, Physicians & surgeons, Lawyers, & judges, magistrates, & other judicial workers, Medical & health services managers, Chief executives & legislators, Veterinarians, Social & community service managers, Securities, commodities, & financial services sales agents, Petroleum, mining & geological engineers, including mining safety engineers, Economists, Miscellaneous social scientists, including survey researchers & sociologists, Natural sciences managers, Geoscientists and hydrologists, except geographers, Detectives & criminal investigators, Judicial law clerks, Other psychologists, Architectural & engineering managers, Education administrators, Astronomers & physicists, Public relations and fundraising managers'</li><li>'occupation: Operatives, Craftsmen, Sales, Other, Managers/admin, Professional/technical, Clerical/unskilled, Laborers, Transport, Service, nan, Household workers, Farm laborers, Farmers'</li><li>'Detailed Occupation: Other managers, Cashiers, Retail salespersons, Driver/sales workers & truck drivers, Registered nurses'</li></ul> |
| Zip Code | <ul><li>'recipient_zip: 995084442, 99503, 995163436, 352124572, 35216, 35976, 358021277, 352174710, 35203, 35233, 35805, 72716, 72201, 72035, 72015, 72223, 72019, 72113, 72758, 72227'</li><li>'STOP_LOCATION_ZIP_CODE: (null), 20292, AVENUE, 5 AVEN, 10019, 22768, 10035, 10026, 10128, 24231, 10030, 10039, 23874, 11213, 11233, 100652, 10451, 23543, 100745, PROSPE'</li><li>'zip_codes: nan, 12081.0, 10090.0, 12423.0, 12420.0'</li></ul> |
| Company Name | <ul><li>"company.name: Microsoft, Berkshire Hathaway, Telmex, F. Hoffmann-La Roche, Zara, Henderson Land Development, Oracle, Lin Yuan Group, Aldi, Sun Hung Kai Properties, Kingdom Holding Company, Koch industries, Cheung king, Walmart, Seibu Corporation, Las Vegas Sands, Aldi Nord, Tetra Pak, BMW, L'Oreal"</li></ul> |
| First Name | <ul><li>'Top Name: Mary, Linda, Debra, Lisa, Michelle, Jennifer, Jessica, Samantha, Ashley, Hannah, Emily, Madison, Emma, Isabella, Sophia, Olivia, John, Robert, James, David'</li></ul> |
| Very short text | <ul><li>'above_ground_sighter_measurement: None, FALSE, 4, 3, 30, 10, 6, 24, 8, 25, 5, 50, 70, 12, 2, 20, 7, 13, 15, 28'</li><li>'review_reason_code: 2, 1, 4, None, 5, 3, 7, 3?, 8, D, ?, 3, 1, 1 or 2, D or 1, 7B, 1, 2, 1 OR 2, D OR 2, B, 4?'</li><li>'status: N, Y, REMOVE, None, 1, ?, H, R, M, T'</li></ul> |
| License Plate | <ul><li>'plate: AZIZ714, BATBOX1, BBOMBS, BEACHY1, BLK PWR5, BOT TAK, CHERIPI, CIO FTW, DAVES88, DMOBGFY, DOITFKR, EGGPUTT, F DIABDZ, FJ 666, FKK OFF, FKN BLAK, FLT ATCK, F LUPUS, HVNNHEL, H8DES'</li></ul> |
| URI | <ul><li>'url: https://docs.cdn.yougov.com/c5o6xiw8t9/econtoplines.pdf, https://docs.cdn.yougov.com/by8wjw1hur/econTabReport.pdf, https://subscriber.politicopro.com/newsletter/2022/12/who-supports-the-payment-pause-00072673, https://news.yahoo.com/new-poll-shows-stark-partisan-divide-when-it-comes-to-americans-view-of-schools-132510314.html, https://harvardharrispoll.com/wp-content/uploads/2022/12/HHP_Dec2022_KeyResults.pdf, https://echeloninsights.com/in-the-news/december-2022-omnibus-2/, https://docs.cdn.yougov.com/b94ttrxy3v/econtoplines.pdf, https://docs.cdn.yougov.com/urts2xadfd/econTabReport.pdf, https://www.foxnews.com/politics/fox-news-poll-americans-show-little-enthusiasm-biden-trump-rematch-2024, https://poll.qu.edu/poll-release?releaseid=3863, https://www.usatoday.com/story/news/politics/2022/12/13/trump-support-gop-2024-presidential-race-poll/10882346002/, https://www.wsj.com/articles/ron-desantis-holds-early-lead-over-donald-trump-among-gop-primary-voters-wsj-poll-shows-11670989311?mod=hp_lead_pos5, https://www.cnn.com/2022/12/14/politics/biden-trump-2024-poll/index.html, https://docs.cdn.yougov.com/dshfq4wqyr/econtoplines.pdf, https://docs.cdn.yougov.com/qsanp37uhh/econTabReport.pdf, https://www.yahoo.com/news/poll-trump-loses-ground-with-republicans-after-kanye-west-nick-fuentes-dinner-100204385.html, https://docs.cdn.yougov.com/regijt79ge/November_Georgia_Ads_joined_earlier.pdf, https://docs.cdn.yougov.com/xcvx6iu6em/November_Georgia_Ads_joined.pdf, https://news.gallup.com/poll/406892/party-images-stable-midterm-elections.aspx, https://docs.cdn.yougov.com/2zebcvjoec/econtoplines.pdf'</li></ul> |
| City Name | <ul><li>'Incident.Location.City: Shelton, Aloha, Wichita, San Francisco, Evans, Guthrie, Chandler, Assaria, Burlington, Knoxville, Stockton, Freeport, Columbus, Des Moines, New Orleans, Huntley, Salt Lake City, Strong, Syracuse, England'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.6705 |
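As a rough, self-contained sketch of how this figure could be reproduced (the `eval_texts`/`eval_labels` variables are assumptions standing in for a labeled held-out split, not part of this repository):

```python
from setfit import SetFitModel
from sklearn.metrics import accuracy_score

# Minimal sketch: `eval_texts` / `eval_labels` are assumed to hold a
# held-out split in the same "column_name: value, value, ..." format
# used throughout this card.
model = SetFitModel.from_pretrained("quantisan/bge-small-en-v1.5-93dataset")
preds = model.predict(eval_texts)          # one predicted label per input
print(accuracy_score(eval_labels, preds))  # the card reports 0.6705 overall
```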
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("quantisan/bge-small-en-v1.5-93dataset")
# Run inference
preds = model("variety: Western, Eastern")
```
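`model.predict` also accepts a batch of inputs, which is more convenient when scoring a whole schema at once. A small sketch (the inputs below are illustrative summaries in the card's `column_name: values` format):

```python
# Batch inference over several column summaries (illustrative inputs).
preds = model.predict([
    "state: Alabama, Alaska, Arizona, Arkansas",
    "STOP_FRISK_TIME: 14:26:00, 11:10:00, 11:35:00",
])
print(preds)  # one predicted label per input, e.g. ['U.S. State', 'Time']
```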
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 24.0542 | 111 |

| Label | Training Sample Count |
|:------------------------|:----------------------|
| Categorical | 8 |
| Timestamp | 5 |
| Date | 8 |
| Integer | 8 |
| Partial timestamp | 4 |
| Short text | 8 |
| Very short text | 3 |
| AM/PM | 1 |
| Boolean | 8 |
| City Name | 1 |
| Color | 3 |
| Company Name | 1 |
| Country ISO Code | 2 |
| Country Name | 8 |
| Currency Code | 1 |
| Day of Month | 4 |
| Day of Week | 4 |
| First Name | 1 |
| Floating Point Number | 8 |
| Full Name | 8 |
| Last Name | 2 |
| Latitude | 4 |
| License Plate | 1 |
| Longitude | 4 |
| Month Name | 6 |
| Month Number | 4 |
| Occupation | 3 |
| Postal Code | 1 |
| Secondary Address | 1 |
| Slug | 8 |
| Street Address | 3 |
| Street Name | 3 |
| Time | 3 |
| U.S. State | 8 |
| U.S. State Abbreviation | 6 |
| URI | 1 |
| URL | 8 |
| Year | 8 |
| Zip Code | 4 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
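
These values map directly onto SetFit's `TrainingArguments`. As a minimal sketch of an equivalent training call (the `train_dataset`/`eval_dataset` variables are assumptions, not this card's actual data pipeline):

```python
from setfit import SetFitModel, Trainer, TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

# Sketch only: `train_dataset` / `eval_dataset` are assumed to be 🤗 Datasets
# with "text" and "label" columns covering this card's label set.
args = TrainingArguments(
    batch_size=(8, 8),                  # (embedding phase, classifier phase)
    num_epochs=(4, 4),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    load_best_model_at_end=True,
)
trainer = Trainer(
    model=SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5"),
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```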
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0002 | 1 | 0.1131 | - |
| 0.0100 | 50 | 0.2113 | - |
| 0.0200 | 100 | 0.1906 | - |
| 0.0301 | 150 | 0.1843 | - |
| 0.0401 | 200 | 0.161 | - |
| 0.0501 | 250 | 0.1418 | - |
| 0.0601 | 300 | 0.131 | - |
| 0.0701 | 350 | 0.1224 | - |
| 0.0802 | 400 | 0.114 | - |
| 0.0902 | 450 | 0.1039 | - |
| 0.1002 | 500 | 0.0825 | - |
| 0.1102 | 550 | 0.0838 | - |
| 0.1202 | 600 | 0.0745 | - |
| 0.1303 | 650 | 0.0705 | - |
| 0.1403 | 700 | 0.0586 | - |
| 0.1503 | 750 | 0.0552 | - |
| 0.1603 | 800 | 0.0567 | - |
| 0.1703 | 850 | 0.0553 | - |
| 0.1804 | 900 | 0.0456 | - |
| 0.1904 | 950 | 0.0443 | - |
| 0.2004 | 1000 | 0.0422 | - |
| 0.2104 | 1050 | 0.0363 | - |
| 0.2204 | 1100 | 0.0397 | - |
| 0.2305 | 1150 | 0.0361 | - |
| 0.2405 | 1200 | 0.0269 | - |
| 0.2505 | 1250 | 0.0331 | - |
| 0.2605 | 1300 | 0.0278 | - |
| 0.2705 | 1350 | 0.0297 | - |
| 0.2806 | 1400 | 0.022 | - |
| 0.2906 | 1450 | 0.0265 | - |
| 0.3006 | 1500 | 0.0344 | - |
| 0.3106 | 1550 | 0.0218 | - |
| 0.3206 | 1600 | 0.0275 | - |
| 0.3307 | 1650 | 0.0275 | - |
| 0.3407 | 1700 | 0.0207 | - |
| 0.3507 | 1750 | 0.0156 | - |
| 0.3607 | 1800 | 0.0246 | - |
| 0.3707 | 1850 | 0.0154 | - |
| 0.3808 | 1900 | 0.0117 | - |
| 0.3908 | 1950 | 0.0201 | - |
| 0.4008 | 2000 | 0.0153 | - |
| 0.4108 | 2050 | 0.018 | - |
| 0.4208 | 2100 | 0.017 | - |
| 0.4309 | 2150 | 0.011 | - |
| 0.4409 | 2200 | 0.0158 | - |
| 0.4509 | 2250 | 0.015 | - |
| 0.4609 | 2300 | 0.0109 | - |
| 0.4709 | 2350 | 0.0151 | - |
| 0.4810 | 2400 | 0.0085 | - |
| 0.4910 | 2450 | 0.0121 | - |
| 0.5010 | 2500 | 0.0118 | - |
| 0.5110 | 2550 | 0.0083 | - |
| 0.5210 | 2600 | 0.0094 | - |
| 0.5311 | 2650 | 0.0078 | - |
| 0.5411 | 2700 | 0.0123 | - |
| 0.5511 | 2750 | 0.0085 | - |
| 0.5611 | 2800 | 0.0046 | - |
| 0.5711 | 2850 | 0.0081 | - |
| 0.5812 | 2900 | 0.0085 | - |
| 0.5912 | 2950 | 0.0064 | - |
| 0.6012 | 3000 | 0.0113 | - |
| 0.6112 | 3050 | 0.0087 | - |
| 0.6212 | 3100 | 0.0071 | - |
| 0.6313 | 3150 | 0.01 | - |
| 0.6413 | 3200 | 0.0093 | - |
| 0.6513 | 3250 | 0.0056 | - |
| 0.6613 | 3300 | 0.007 | - |
| 0.6713 | 3350 | 0.0076 | - |
| 0.6814 | 3400 | 0.0077 | - |
| 0.6914 | 3450 | 0.0038 | - |
| 0.7014 | 3500 | 0.0051 | - |
| 0.7114 | 3550 | 0.0063 | - |
| 0.7214 | 3600 | 0.004 | - |
| 0.7315 | 3650 | 0.0036 | - |
| 0.7415 | 3700 | 0.0043 | - |
| 0.7515 | 3750 | 0.0086 | - |
| 0.7615 | 3800 | 0.0051 | - |
| 0.7715 | 3850 | 0.0056 | - |
| 0.7816 | 3900 | 0.0042 | - |
| 0.7916 | 3950 | 0.0062 | - |
| 0.8016 | 4000 | 0.0058 | - |
| 0.8116 | 4050 | 0.0034 | - |
| 0.8216 | 4100 | 0.0062 | - |
| 0.8317 | 4150 | 0.0091 | - |
| 0.8417 | 4200 | 0.0056 | - |
| 0.8517 | 4250 | 0.0039 | - |
| 0.8617 | 4300 | 0.0072 | - |
| 0.8717 | 4350 | 0.0051 | - |
| 0.8818 | 4400 | 0.0025 | - |
| 0.8918 | 4450 | 0.0051 | - |
| 0.9018 | 4500 | 0.0049 | - |
| 0.9118 | 4550 | 0.0024 | - |
| 0.9218 | 4600 | 0.0026 | - |
| 0.9319 | 4650 | 0.0046 | - |
| 0.9419 | 4700 | 0.0024 | - |
| 0.9519 | 4750 | 0.0026 | - |
| 0.9619 | 4800 | 0.0045 | - |
| 0.9719 | 4850 | 0.0022 | - |
| 0.9820 | 4900 | 0.0042 | - |
| 0.9920 | 4950 | 0.0067 | - |
| 1.0 | 4990 | - | 0.0996 |
| 1.0020 | 5000 | 0.0044 | - |
| 1.0120 | 5050 | 0.0023 | - |
| 1.0220 | 5100 | 0.0025 | - |
| 1.0321 | 5150 | 0.004 | - |
| 1.0421 | 5200 | 0.002 | - |
| 1.0521 | 5250 | 0.0042 | - |
| 1.0621 | 5300 | 0.0028 | - |
| 1.0721 | 5350 | 0.006 | - |
| 1.0822 | 5400 | 0.0043 | - |
| 1.0922 | 5450 | 0.0065 | - |
| 1.1022 | 5500 | 0.0042 | - |
| 1.1122 | 5550 | 0.004 | - |
| 1.1222 | 5600 | 0.0045 | - |
| 1.1323 | 5650 | 0.0049 | - |
| 1.1423 | 5700 | 0.0042 | - |
| 1.1523 | 5750 | 0.0044 | - |
| 1.1623 | 5800 | 0.002 | - |
| 1.1723 | 5850 | 0.0037 | - |
| 1.1824 | 5900 | 0.0038 | - |
| 1.1924 | 5950 | 0.0071 | - |
| 1.2024 | 6000 | 0.0044 | - |
| 1.2124 | 6050 | 0.0031 | - |
| 1.2224 | 6100 | 0.0021 | - |
| 1.2325 | 6150 | 0.0019 | - |
| 1.2425 | 6200 | 0.002 | - |
| 1.2525 | 6250 | 0.0059 | - |
| 1.2625 | 6300 | 0.002 | - |
| 1.2725 | 6350 | 0.0036 | - |
| 1.2826 | 6400 | 0.0019 | - |
| 1.2926 | 6450 | 0.0041 | - |
| 1.3026 | 6500 | 0.0042 | - |
| 1.3126 | 6550 | 0.0062 | - |
| 1.3226 | 6600 | 0.002 | - |
| 1.3327 | 6650 | 0.0016 | - |
| 1.3427 | 6700 | 0.0019 | - |
| 1.3527 | 6750 | 0.0055 | - |
| 1.3627 | 6800 | 0.0042 | - |
| 1.3727 | 6850 | 0.0023 | - |
| 1.3828 | 6900 | 0.0018 | - |
| 1.3928 | 6950 | 0.0041 | - |
| 1.4028 | 7000 | 0.008 | - |
| 1.4128 | 7050 | 0.0021 | - |
| 1.4228 | 7100 | 0.0017 | - |
| 1.4329 | 7150 | 0.0021 | - |
| 1.4429 | 7200 | 0.0017 | - |
| 1.4529 | 7250 | 0.0035 | - |
| 1.4629 | 7300 | 0.002 | - |
| 1.4729 | 7350 | 0.0016 | - |
| 1.4830 | 7400 | 0.0014 | - |
| 1.4930 | 7450 | 0.0041 | - |
| 1.5030 | 7500 | 0.0053 | - |
| 1.5130 | 7550 | 0.0026 | - |
| 1.5230 | 7600 | 0.002 | - |
| 1.5331 | 7650 | 0.0017 | - |
| 1.5431 | 7700 | 0.0017 | - |
| 1.5531 | 7750 | 0.0016 | - |
| 1.5631 | 7800 | 0.0021 | - |
| 1.5731 | 7850 | 0.0039 | - |
| 1.5832 | 7900 | 0.0034 | - |
| 1.5932 | 7950 | 0.0061 | - |
| 1.6032 | 8000 | 0.0025 | - |
| 1.6132 | 8050 | 0.002 | - |
| 1.6232 | 8100 | 0.0017 | - |
| 1.6333 | 8150 | 0.0016 | - |
| 1.6433 | 8200 | 0.0015 | - |
| 1.6533 | 8250 | 0.0037 | - |
| 1.6633 | 8300 | 0.0015 | - |
| 1.6733 | 8350 | 0.0035 | - |
| 1.6834 | 8400 | 0.0023 | - |
| 1.6934 | 8450 | 0.0051 | - |
| 1.7034 | 8500 | 0.0041 | - |
| 1.7134 | 8550 | 0.0018 | - |
| 1.7234 | 8600 | 0.0016 | - |
| 1.7335 | 8650 | 0.0016 | - |
| 1.7435 | 8700 | 0.0013 | - |
| 1.7535 | 8750 | 0.0022 | - |
| 1.7635 | 8800 | 0.0015 | - |
| 1.7735 | 8850 | 0.0017 | - |
| 1.7836 | 8900 | 0.0035 | - |
| 1.7936 | 8950 | 0.0013 | - |
| 1.8036 | 9000 | 0.0015 | - |
| 1.8136 | 9050 | 0.0034 | - |
| 1.8236 | 9100 | 0.0013 | - |
| 1.8337 | 9150 | 0.0037 | - |
| 1.8437 | 9200 | 0.0037 | - |
| 1.8537 | 9250 | 0.0014 | - |
| 1.8637 | 9300 | 0.0014 | - |
| 1.8737 | 9350 | 0.0013 | - |
| 1.8838 | 9400 | 0.0016 | - |
| 1.8938 | 9450 | 0.0013 | - |
| 1.9038 | 9500 | 0.0038 | - |
| 1.9138 | 9550 | 0.0013 | - |
| 1.9238 | 9600 | 0.0038 | - |
| 1.9339 | 9650 | 0.0013 | - |
| 1.9439 | 9700 | 0.0012 | - |
| 1.9539 | 9750 | 0.0016 | - |
| 1.9639 | 9800 | 0.0053 | - |
| 1.9739 | 9850 | 0.0018 | - |
| 1.9840 | 9900 | 0.0036 | - |
| 1.9940 | 9950 | 0.0014 | - |
| 2.0 | 9980 | - | 0.1052 |
| 2.0040 | 10000 | 0.0012 | - |
| 2.0140 | 10050 | 0.0014 | - |
| 2.0240 | 10100 | 0.0013 | - |
| 2.0341 | 10150 | 0.0014 | - |
| 2.0441 | 10200 | 0.0012 | - |
| 2.0541 | 10250 | 0.0014 | - |
| 2.0641 | 10300 | 0.0013 | - |
| 2.0741 | 10350 | 0.0012 | - |
| 2.0842 | 10400 | 0.0013 | - |
| 2.0942 | 10450 | 0.0033 | - |
| 2.1042 | 10500 | 0.0013 | - |
| 2.1142 | 10550 | 0.003 | - |
| 2.1242 | 10600 | 0.0036 | - |
| 2.1343 | 10650 | 0.0013 | - |
| 2.1443 | 10700 | 0.0036 | - |
| 2.1543 | 10750 | 0.0037 | - |
| 2.1643 | 10800 | 0.0024 | - |
| 2.1743 | 10850 | 0.0038 | - |
| 2.1844 | 10900 | 0.0014 | - |
| 2.1944 | 10950 | 0.0012 | - |
| 2.2044 | 11000 | 0.0035 | - |
| 2.2144 | 11050 | 0.0015 | - |
| 2.2244 | 11100 | 0.0012 | - |
| 2.2345 | 11150 | 0.0012 | - |
| 2.2445 | 11200 | 0.0011 | - |
| 2.2545 | 11250 | 0.0035 | - |
| 2.2645 | 11300 | 0.0012 | - |
| 2.2745 | 11350 | 0.0011 | - |
| 2.2846 | 11400 | 0.0011 | - |
| 2.2946 | 11450 | 0.0011 | - |
| 2.3046 | 11500 | 0.0035 | - |
| 2.3146 | 11550 | 0.0012 | - |
| 2.3246 | 11600 | 0.0011 | - |
| 2.3347 | 11650 | 0.0011 | - |
| 2.3447 | 11700 | 0.0014 | - |
| 2.3547 | 11750 | 0.0011 | - |
| 2.3647 | 11800 | 0.0011 | - |
| 2.3747 | 11850 | 0.0012 | - |
| 2.3848 | 11900 | 0.0011 | - |
| 2.3948 | 11950 | 0.001 | - |
| 2.4048 | 12000 | 0.001 | - |
| 2.4148 | 12050 | 0.0011 | - |
| 2.4248 | 12100 | 0.0011 | - |
| 2.4349 | 12150 | 0.0011 | - |
| 2.4449 | 12200 | 0.001 | - |
| 2.4549 | 12250 | 0.0034 | - |
| 2.4649 | 12300 | 0.0011 | - |
| 2.4749 | 12350 | 0.0013 | - |
| 2.4850 | 12400 | 0.0012 | - |
| 2.4950 | 12450 | 0.0015 | - |
| 2.5050 | 12500 | 0.0011 | - |
| 2.5150 | 12550 | 0.0034 | - |
| 2.5251 | 12600 | 0.001 | - |
| 2.5351 | 12650 | 0.0011 | - |
| 2.5451 | 12700 | 0.0011 | - |
| 2.5551 | 12750 | 0.001 | - |
| 2.5651 | 12800 | 0.001 | - |
| 2.5752 | 12850 | 0.0034 | - |
| 2.5852 | 12900 | 0.0033 | - |
| 2.5952 | 12950 | 0.0011 | - |
| 2.6052 | 13000 | 0.001 | - |
| 2.6152 | 13050 | 0.001 | - |
| 2.6253 | 13100 | 0.0012 | - |
| 2.6353 | 13150 | 0.0011 | - |
| 2.6453 | 13200 | 0.0033 | - |
| 2.6553 | 13250 | 0.0034 | - |
| 2.6653 | 13300 | 0.001 | - |
| 2.6754 | 13350 | 0.001 | - |
| 2.6854 | 13400 | 0.0034 | - |
| 2.6954 | 13450 | 0.001 | - |
| 2.7054 | 13500 | 0.001 | - |
| 2.7154 | 13550 | 0.001 | - |
| 2.7255 | 13600 | 0.0009 | - |
| 2.7355 | 13650 | 0.001 | - |
| 2.7455 | 13700 | 0.001 | - |
| 2.7555 | 13750 | 0.0009 | - |
| 2.7655 | 13800 | 0.001 | - |
| 2.7756 | 13850 | 0.0009 | - |
| 2.7856 | 13900 | 0.0031 | - |
| 2.7956 | 13950 | 0.001 | - |
| 2.8056 | 14000 | 0.0031 | - |
| 2.8156 | 14050 | 0.0033 | - |
| 2.8257 | 14100 | 0.001 | - |
| 2.8357 | 14150 | 0.0009 | - |
| 2.8457 | 14200 | 0.0009 | - |
| 2.8557 | 14250 | 0.0009 | - |
| 2.8657 | 14300 | 0.001 | - |
| 2.8758 | 14350 | 0.001 | - |
| 2.8858 | 14400 | 0.0033 | - |
| 2.8958 | 14450 | 0.001 | - |
| 2.9058 | 14500 | 0.001 | - |
| 2.9158 | 14550 | 0.001 | - |
| 2.9259 | 14600 | 0.0033 | - |
| 2.9359 | 14650 | 0.001 | - |
| 2.9459 | 14700 | 0.0009 | - |
| 2.9559 | 14750 | 0.001 | - |
| 2.9659 | 14800 | 0.001 | - |
| 2.9760 | 14850 | 0.0009 | - |
| 2.9860 | 14900 | 0.0009 | - |
| 2.9960 | 14950 | 0.0009 | - |
| 3.0 | 14970 | - | 0.1077 |
| 3.0060 | 15000 | 0.0033 | - |
| 3.0160 | 15050 | 0.0009 | - |
| 3.0261 | 15100 | 0.0009 | - |
| 3.0361 | 15150 | 0.0009 | - |
| 3.0461 | 15200 | 0.0009 | - |
| 3.0561 | 15250 | 0.0008 | - |
| 3.0661 | 15300 | 0.001 | - |
| 3.0762 | 15350 | 0.0009 | - |
| 3.0862 | 15400 | 0.0009 | - |
| 3.0962 | 15450 | 0.0032 | - |
| 3.1062 | 15500 | 0.0009 | - |
| 3.1162 | 15550 | 0.0009 | - |
| 3.1263 | 15600 | 0.0009 | - |
| 3.1363 | 15650 | 0.0009 | - |
| 3.1463 | 15700 | 0.0008 | - |
| 3.1563 | 15750 | 0.0009 | - |
| 3.1663 | 15800 | 0.0009 | - |
| 3.1764 | 15850 | 0.0008 | - |
| 3.1864 | 15900 | 0.0008 | - |
| 3.1964 | 15950 | 0.0009 | - |
| 3.2064 | 16000 | 0.0009 | - |
| 3.2164 | 16050 | 0.0033 | - |
| 3.2265 | 16100 | 0.0031 | - |
| 3.2365 | 16150 | 0.0008 | - |
| 3.2465 | 16200 | 0.0008 | - |
| 3.2565 | 16250 | 0.0008 | - |
| 3.2665 | 16300 | 0.0008 | - |
| 3.2766 | 16350 | 0.0008 | - |
| 3.2866 | 16400 | 0.0008 | - |
| 3.2966 | 16450 | 0.0008 | - |
| 3.3066 | 16500 | 0.0009 | - |
| 3.3166 | 16550 | 0.0008 | - |
| 3.3267 | 16600 | 0.0032 | - |
| 3.3367 | 16650 | 0.0008 | - |
| 3.3467 | 16700 | 0.0008 | - |
| 3.3567 | 16750 | 0.0009 | - |
| 3.3667 | 16800 | 0.0031 | - |
| 3.3768 | 16850 | 0.0009 | - |
| 3.3868 | 16900 | 0.0008 | - |
| 3.3968 | 16950 | 0.0009 | - |
| 3.4068 | 17000 | 0.0009 | - |
| 3.4168 | 17050 | 0.0008 | - |
| 3.4269 | 17100 | 0.0009 | - |
| 3.4369 | 17150 | 0.0031 | - |
| 3.4469 | 17200 | 0.0032 | - |
| 3.4569 | 17250 | 0.0008 | - |
| 3.4669 | 17300 | 0.0008 | - |
| 3.4770 | 17350 | 0.0008 | - |
| 3.4870 | 17400 | 0.0008 | - |
| 3.4970 | 17450 | 0.0057 | - |
| 3.5070 | 17500 | 0.0032 | - |
| 3.5170 | 17550 | 0.0009 | - |
| 3.5271 | 17600 | 0.0052 | - |
| 3.5371 | 17650 | 0.0008 | - |
| 3.5471 | 17700 | 0.0009 | - |
| 3.5571 | 17750 | 0.0008 | - |
| 3.5671 | 17800 | 0.0008 | - |
| 3.5772 | 17850 | 0.0008 | - |
| 3.5872 | 17900 | 0.0008 | - |
| 3.5972 | 17950 | 0.0009 | - |
| 3.6072 | 18000 | 0.0032 | - |
| 3.6172 | 18050 | 0.0008 | - |
| 3.6273 | 18100 | 0.0008 | - |
| 3.6373 | 18150 | 0.0008 | - |
| 3.6473 | 18200 | 0.0008 | - |
| 3.6573 | 18250 | 0.0008 | - |
| 3.6673 | 18300 | 0.0008 | - |
| 3.6774 | 18350 | 0.0008 | - |
| 3.6874 | 18400 | 0.0008 | - |
| 3.6974 | 18450 | 0.0008 | - |
| 3.7074 | 18500 | 0.0008 | - |
| 3.7174 | 18550 | 0.0007 | - |
| 3.7275 | 18600 | 0.0008 | - |
| 3.7375 | 18650 | 0.0008 | - |
| 3.7475 | 18700 | 0.003 | - |
| 3.7575 | 18750 | 0.0008 | - |
| 3.7675 | 18800 | 0.0008 | - |
| 3.7776 | 18850 | 0.0008 | - |
| 3.7876 | 18900 | 0.0007 | - |
| 3.7976 | 18950 | 0.0008 | - |
| 3.8076 | 19000 | 0.0007 | - |
| 3.8176 | 19050 | 0.0007 | - |
| 3.8277 | 19100 | 0.0029 | - |
| 3.8377 | 19150 | 0.0007 | - |
| 3.8477 | 19200 | 0.0008 | - |
| 3.8577 | 19250 | 0.0031 | - |
| 3.8677 | 19300 | 0.0007 | - |
| 3.8778 | 19350 | 0.0007 | - |
| 3.8878 | 19400 | 0.0008 | - |
| 3.8978 | 19450 | 0.0008 | - |
| 3.9078 | 19500 | 0.0031 | - |
| 3.9178 | 19550 | 0.0008 | - |
| 3.9279 | 19600 | 0.0008 | - |
| 3.9379 | 19650 | 0.0007 | - |
| 3.9479 | 19700 | 0.0008 | - |
| 3.9579 | 19750 | 0.0008 | - |
| 3.9679 | 19800 | 0.0008 | - |
| 3.9780 | 19850 | 0.0008 | - |
| 3.9880 | 19900 | 0.0008 | - |
| 3.9980 | 19950 | 0.0007 | - |
| 4.0 | 19960 | - | 0.1050 |
### Framework Versions
- Python: 3.11.10
- SetFit: 1.1.0
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.4.1+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 43 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Latitude | <ul><li>'Latitude: 48,87217700, 48,85543800, 48,87416100, 48,87322500, 48,87422500, 48,84189000, 48,86617200, 48,87112100, 48,86552200, 48,87623100, 48,85609000, 48,85642700, 48,86853300, 48,87465400, 48,86995000, 48,85654000, 48,87022000, 48,86962600, 48,85663200, 48,83476200'</li><li>'lat: 40.7940823884086, 40.7948509408039, 40.7667178072558, 40.7697032606755, 40.797533370163, 40.7902561000937, 40.7693045133578, 40.7942883045566, 40.7729752391435, 40.7903128889029, 40.7762126854894, 40.7725908847499, 40.7931811701082, 40.7917367820255, 40.7829723919744, 40.7742879599026, 40.7823507678183, 40.7919669739962, 40.7702795904962, 40.7698124821507'</li><li>'lat: 83.92115933668057, 89.53277415300325, 85.37696959908148, 85.44622332365381, 84.28538158324413, 87.96664079539569, 86.11414393337242, 85.43864590316868, 87.65474214915454, 81.67725407101064, 90.47817498708324, 89.87993043195812, 81.56791356025577, 88.48808747114165, 89.3843538611984, 87.5218603199103, 83.99238693700401, 82.50195719071465, 85.84865551792468, 87.92121711225418'</li></ul> |
| Categorical | <ul><li>'SUSPECT_RACE_DESCRIPTION: (null), WHITE, BLACK HISPANIC, BLACK, WHITE HISPANIC, ASIAN/PAC.ISL, AMER IND, MALE'</li><li>'OFFICER_IN_UNIFORM_FLAG: Y, N, ('</li><li>'SUSPECT_HAIR_COLOR: BLK, BRO, BLD, XXX, (null), GRY, WHI, BLN, RED, ZZZ, PLE, GRN, SDY, ORG, BK, BA, BR, XX'</li></ul> |
| Day of Month | <ul><li>'Date.Day: 26, 24, 31, 7, 14, 21, 28, 5, 12, 19, 2, 9, 16, 23, 30, 4, 11, 18, 25, 1'</li><li>'Incident.Date.Day: 2, 3, 4, 5, 6, 7, 8, 9, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23'</li><li>'bibliography.publication.day: 1, 17, 16, 20, 29, 10, 14, 11, 9, 18, 19, 22, 25, 15, 6, 28, 27, 2, 12, 21'</li></ul> |
| Year | <ul><li>'Year: 2020, 2019, 2018, 2017, 2016, 2015, 2014'</li><li>'YEAR2: 2017'</li><li>'artist.birth.year: 1930, 1852, 1898, 1760, 1935, 1964, 1967, 1940, 1947, 1938, 1728, 1868, 1927, 1917, 1878, 1895, 1904, 1912, 1899, 1767'</li></ul> |
| Integer | <ul><li>'unfavorable: 51.0, 55.0, 47.0, 60.0, 61.0, 56.0, 58.0, 57.0, 59.0, 61.9, 62.0, 54.0, 52.0, 53.0, 66.0, 67.0, 63.0, 49.0, 52.42, 56.9'</li><li>'Data.Totals.Violent.Rape: 281, 252, 218, 192, 397, 367, 341, 371, 396, 494, 637, 661, 660, 751, 811, 738, 794, 929, 954, 1037'</li><li>'AVG: 93429, 78009, 76358, 76258, 75606, 73515, 72885, 71625, 71624, 70534, 69914, 69879, 69671, 68950, 68857, 68846, 68833, 68387, 68099, 67431'</li></ul> |
| Floating Point Number | <ul><li>'dimensions.width: 0.0, 305.0, 250.0, 756.0, 2095.0, 480.0, 858.0, 558.0, 628.0, 302.0, 1226.0, 1270.0, 940.0, 2276.0, 864.0, 1420.0, 330.0, 267.0, 1225.0, 660.0'</li><li>'Data.Fiber: 0.0, 0.2, 0.3, 0.4, 0.7, 0.1, 1.0, 0.6, 0.5, 1.9, 1.1, 2.3, 0.8, 1.6, 0.9, 1.2, 37.0, 4.5, 9.1, 1.5'</li><li>' "Weight(Pounds)": 112.9925, 136.4873, 153.0269, 142.3354, 144.2971, 123.3024, 141.4947, 136.4623, 112.3723, 120.6672, 127.4516, 114.143, 125.6107, 122.4618, 116.0866, 139.9975, 129.5023, 142.9733, 137.9025, 124.0449'</li></ul> |
| Percentage | <ul><li>'pct: 51.0, 48.0, 44.2, 49.0, 48.4, 49.2, 1.4, 47.0, 48.2, 1.6, 50.0, 42.0, 1.0, 40.0, 53.0, 43.0, 46.0, 52.0, 45.0, 47.3'</li><li>'pct: 51.6, 41.4, 45.7, 46.8, 5.2, 46.0, 2.0, 48.0, 47.0, 44.0, 4.0, 5.0, 53.0, 55.5, 32.2, 54.7, 40.5, 54.3, 43.7, 45.0'</li><li>'PCT.2: 95.5, 96.5, 94.0, 99.4, 97.6, 100.9, 101.0, 101.1, 96.9, 98.0, 97.9, 98.1, 94.8, 100.7, 99.3, 97.1, 98.9, 98.7, 96.1, 99.7'</li></ul> |
| Secondary Address | <ul><li>'STOP_LOCATION_APARTMENT: (null), 2, 7, 4TH, 2FL, ROOF, ROOF T, BASEME, LOBBY, 17TH, 2 FLOO, 12, 1701, HALLWA, 1E, 5D, SIDEWA, FRONT, 12C, None'</li></ul> |
| U.S. State Abbreviation | <ul><li>'abbrev: AL, AK, AZ, AR, CA, CO, CT, DE, DC, FL, GA, HI, ID, IL, IN, IA, KS, KY, LA, ME'</li><li>'recipient_st: AK, AL, AR, AZ, CA, CO, CT, DC, FL, GA, HI, IA, ID, IL, IN, KA, KS, KY, LA, MA'</li><li>'Incident.Location.State: WA, OR, KS, CA, CO, OK, AZ, IA, PA, TX, OH, LA, MT, UT, AR, IL, NV, NM, MN, MO'</li></ul> |
| Numeric identifier | <ul><li>'pollster_id: 568, 1189, 1508, 1302, 1597, 396, 458, 1699, 1361, 169, 1075, 1406, 241, 1523, 399, 1351, 1528, 1365, 1347, 57'</li><li>'SUPERVISING_OFFICER_COMMAND_CODE: 574, 863, 1, 861, 5, 6, 234, 849, 136, 7, 804, 750, 868, 9, 108, 13, 10, 181, 598, 230'</li><li>'pollster_rating_id: 245, 609, 48, 437, 88, 599, 600, 263, 280, 314, 124, 357, 667, 317, 494, 325, 522, 556, 593, 216'</li></ul> |
| Month Number | <ul><li>'mp_month: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12'</li><li>'bibliography.publication.month: 6, 11, 3, 8, 1, 10, 7, 2, 4, 5, 9, 12'</li><li>'Incident.Date.Month: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12'</li></ul> |
| Date | <ul><li>'Date.Full: 8/26/1990, 3/24/1991, 3/31/1991, 4/7/1991, 4/14/1991, 4/21/1991, 4/28/1991, 5/5/1991, 5/12/1991, 5/19/1991, 5/26/1991, 6/2/1991, 6/9/1991, 6/16/1991, 6/23/1991, 6/30/1991, 7/7/1991, 7/14/1991, 7/21/1991, 7/28/1991'</li><li>'STOP_FRISK_DATE: 1/16/2017, 2/8/2017, 2/20/2017, 2/21/2017, 2/17/2017, 2/25/2017, 3/3/2017, 3/16/2017, 3/31/2017, 4/2/2017, 4/4/2017, 3/24/2017, 4/6/2017, 4/18/2017, 5/6/2017, 5/10/2017, 5/17/2017, 5/7/2017, 5/24/2017, 6/8/2017'</li><li>'disb_dt: 15-Sep-15, 16-Nov-15, 30-Sep-15, 18-Dec-15, 22-Oct-15, 3-Dec-15, 23-Nov-15, 29-Feb-16, 18-Mar-16, 27-Feb-16, 17-Feb-16, 25-Feb-16, 25-Jan-16, 14-Jan-16, 12-Jan-16, 22-Jan-16, 1-Jan-16, 3-Jan-16, 6-Jan-16, 11-Jan-16'</li></ul> |
| Full Name | <ul><li>'candidate_name: Abigail A. Spanberger, Nicholas J. Freitas, Kara Eastman, Don Bacon, Tyler Schaeffer, Jill Schupp, Ann Wagner, Martin Schulte, Dana Balter, John Katko, Steve Williams, Christina Hale, Victoria Spartz, Kenneth Tucker, Joyce Ann Elliott, French Hill, Jared Forrest Golden, Dale John Crafts, Marie Newman, Mike Fricilone'</li><li>'bibliography.author.name: Austen, Jane, Gilman, Charlotte Perkins, Carroll, Lewis, Shelley, Mary Wollstonecraft, Kafka, Franz, Twain, Mark, Wilde, Oscar, Douglass, Frederick, Ibsen, Henrik, Melville, Herman, Doyle, Arthur Conan, Dickens, Charles, Joyce, James, Swift, Jonathan, Stoker, Bram, Machiavelli, Niccolo, Tolstoy, Leo, graf, Grimm, Wilhelm, Vatsyayana, Unknown'</li><li>'Name: Cristiano Ronaldo, L. Messi, Neymar, L. Suárez, M. Neuer, R. Lewandowski, De Gea, E. Hazard, T. Kroos, G. Higuaín, Sergio Ramos, K. De Bruyne, T. Courtois, A. Sánchez, L. Modrić, G. Bale, S. Agüero, G. Chiellini, G. Buffon, P. Dybala'</li></ul> |
| Day of Week | <ul><li>'DAY2: Monday, Wednesday, Tuesday, Friday, Saturday, Thursday, Sunday'</li><li>'DAY2: Monday, Wednesday, Tuesday, Friday, Saturday, Thursday, Sunday'</li><li>'day: Sun, Sat, Thur, Fri'</li></ul> |
| Timestamp | <ul><li>'created_at: 12/30/20 12:29, 11/2/20 21:26, 11/2/20 22:16, 11/2/20 21:32, 11/2/20 22:01, 11/2/20 22:18, 11/2/20 22:26, 11/2/20 23:31, 11/2/20 21:49, 10/31/20 17:22, 11/1/20 14:39, 11/2/20 08:22, 10/29/20 14:16, 10/31/20 08:36, 10/29/20 11:08, 10/29/20 09:00, 10/29/20 16:13, 10/29/20 16:14, 10/30/20 15:45, 10/28/20 09:24'</li><li>'created_at: 12/21/22 09:28, 12/21/22 12:52, 12/16/22 18:27, 12/16/22 21:10, 12/14/22 10:39, 12/14/22 08:22, 12/15/22 18:31, 12/14/22 14:13, 12/13/22 09:36, 12/14/22 08:23, 12/14/22 15:40, 12/15/22 09:40, 12/7/22 10:47, 12/7/22 08:17, 12/7/22 17:56, 12/15/22 09:50, 11/30/22 09:25, 11/23/22 08:46, 12/1/22 09:39, 12/5/22 08:29'</li><li>'created_at: 12/21/22 09:28, 12/21/22 12:52, 12/16/22 18:27, 12/16/22 21:10, 12/14/22 10:39, 12/14/22 08:22, 12/15/22 18:31, 12/14/22 14:13, 12/13/22 09:36, 12/14/22 08:23, 12/14/22 15:40, 12/15/22 09:40, 12/7/22 10:47, 12/7/22 08:17, 12/7/22 17:56, 12/15/22 09:50, 11/30/22 09:25, 11/23/22 08:46, 12/1/22 09:39, 12/5/22 08:29'</li></ul> |
| URL | <ul><li>'url: https://docs.cdn.yougov.com/c5o6xiw8t9/econtoplines.pdf, https://docs.cdn.yougov.com/by8wjw1hur/econTabReport.pdf, https://subscriber.politicopro.com/newsletter/2022/12/who-supports-the-payment-pause-00072673, https://news.yahoo.com/new-poll-shows-stark-partisan-divide-when-it-comes-to-americans-view-of-schools-132510314.html, https://harvardharrispoll.com/wp-content/uploads/2022/12/HHP_Dec2022_KeyResults.pdf, https://echeloninsights.com/in-the-news/december-2022-omnibus-2/, https://docs.cdn.yougov.com/b94ttrxy3v/econtoplines.pdf, https://docs.cdn.yougov.com/urts2xadfd/econTabReport.pdf, https://www.foxnews.com/politics/fox-news-poll-americans-show-little-enthusiasm-biden-trump-rematch-2024, https://poll.qu.edu/poll-release?releaseid=3863, https://www.usatoday.com/story/news/politics/2022/12/13/trump-support-gop-2024-presidential-race-poll/10882346002/, https://www.wsj.com/articles/ron-desantis-holds-early-lead-over-donald-trump-among-gop-primary-voters-wsj-poll-shows-11670989311?mod=hp_lead_pos5, https://www.cnn.com/2022/12/14/politics/biden-trump-2024-poll/index.html, https://docs.cdn.yougov.com/dshfq4wqyr/econtoplines.pdf, https://docs.cdn.yougov.com/qsanp37uhh/econTabReport.pdf, https://www.yahoo.com/news/poll-trump-loses-ground-with-republicans-after-kanye-west-nick-fuentes-dinner-100204385.html, https://docs.cdn.yougov.com/regijt79ge/November_Georgia_Ads_joined_earlier.pdf, https://docs.cdn.yougov.com/xcvx6iu6em/November_Georgia_Ads_joined.pdf, https://news.gallup.com/poll/406892/party-images-stable-midterm-elections.aspx, https://docs.cdn.yougov.com/2zebcvjoec/econtoplines.pdf'</li><li>'Flag: https://cdn.sofifa.org/flags/38.png, https://cdn.sofifa.org/flags/52.png, https://cdn.sofifa.org/flags/54.png, https://cdn.sofifa.org/flags/60.png, https://cdn.sofifa.org/flags/21.png, https://cdn.sofifa.org/flags/37.png, https://cdn.sofifa.org/flags/45.png, https://cdn.sofifa.org/flags/7.png, https://cdn.sofifa.org/flags/55.png, https://cdn.sofifa.org/flags/10.png, https://cdn.sofifa.org/flags/50.png, https://cdn.sofifa.org/flags/27.png, https://cdn.sofifa.org/flags/44.png, https://cdn.sofifa.org/flags/18.png, https://cdn.sofifa.org/flags/115.png, https://cdn.sofifa.org/flags/46.png, https://cdn.sofifa.org/flags/34.png, https://cdn.sofifa.org/flags/13.png, https://cdn.sofifa.org/flags/43.png, https://cdn.sofifa.org/flags/14.png'</li><li>'Photo: https://cdn.sofifa.org/48/18/players/20801.png, https://cdn.sofifa.org/48/18/players/158023.png, https://cdn.sofifa.org/48/18/players/190871.png, https://cdn.sofifa.org/48/18/players/176580.png, https://cdn.sofifa.org/48/18/players/167495.png, https://cdn.sofifa.org/48/18/players/188545.png, https://cdn.sofifa.org/48/18/players/193080.png, https://cdn.sofifa.org/48/18/players/183277.png, https://cdn.sofifa.org/48/18/players/182521.png, https://cdn.sofifa.org/48/18/players/167664.png, https://cdn.sofifa.org/48/18/players/155862.png, https://cdn.sofifa.org/48/18/players/192985.png, https://cdn.sofifa.org/48/18/players/192119.png, https://cdn.sofifa.org/48/18/players/184941.png, https://cdn.sofifa.org/48/18/players/177003.png, https://cdn.sofifa.org/48/18/players/173731.png, https://cdn.sofifa.org/48/18/players/153079.png, https://cdn.sofifa.org/48/18/players/138956.png, https://cdn.sofifa.org/48/18/players/1179.png, https://cdn.sofifa.org/48/18/players/211110.png'</li></ul> |
| Street Address | <ul><li>'STOP_LOCATION_FULL_ADDRESS: 180 GREENWICH STREET, WALL STREET && BROADWAY, 75 GREENE STREET, 429 WEST BROADWAY, WEST STREET && CHAMBERS STREET, CHAMBERS STREET && WEST BROADWAY, CORTLANDT STREET && CHURCH STREET, 111 FULTON STREET, 25 CLIFF STREET, SPRING STREET && AVENUE OF THE AMERICAS, 130 CEDAR STREET, 225 LIBERTY STREET, BARCLAY STREET && WEST STREET, 153 GREENWICH STREET, BATTERY PLACE && STATE STREET, MERCER STREET && BROOME STREET, WEST STREET && CANAL STREET, BROADWAY && PRINCE STREET, WEST BROADWAY && AVENUE OF THE AMERICAS, 3 SOUTH STREET'</li><li>'STOP_LOCATION_FULL_ADDRESS: 180 GREENWICH STREET, WALL STREET && BROADWAY, 75 GREENE STREET, 429 WEST BROADWAY, WEST STREET && CHAMBERS STREET, CHAMBERS STREET && WEST BROADWAY, CORTLANDT STREET && CHURCH STREET, 111 FULTON STREET, 25 CLIFF STREET, SPRING STREET && AVENUE OF THE AMERICAS, 130 CEDAR STREET, 225 LIBERTY STREET, BARCLAY STREET && WEST STREET, 153 GREENWICH STREET, BATTERY PLACE && STATE STREET, MERCER STREET && BROOME STREET, WEST STREET && CANAL STREET, BROADWAY && PRINCE STREET, WEST BROADWAY && AVENUE OF THE AMERICAS, 3 SOUTH STREET'</li><li>'STOP_LOCATION_FULL_ADDRESS: 180 GREENWICH STREET, WALL STREET && BROADWAY, 75 GREENE STREET, 429 WEST BROADWAY, WEST STREET && CHAMBERS STREET, CHAMBERS STREET && WEST BROADWAY, CORTLANDT STREET && CHURCH STREET, 111 FULTON STREET, 25 CLIFF STREET, SPRING STREET && AVENUE OF THE AMERICAS, 130 CEDAR STREET, 225 LIBERTY STREET, BARCLAY STREET && WEST STREET, 153 GREENWICH STREET, BATTERY PLACE && STATE STREET, MERCER STREET && BROOME STREET, WEST STREET && CANAL STREET, BROADWAY && PRINCE STREET, WEST BROADWAY && AVENUE OF THE AMERICAS, 3 SOUTH STREET'</li></ul> |
| Country ISO Code | <ul><li>'Champion Nationality: AUS, FRA, GBR, NZL, USA, SRB, SUI, SWE, CZE, ESP, GER, NED, CRO, BRA, RUS'</li><li>'Runner-up Nationality: AUS, GBR, NZL, FRA, USA, RSA, CZE, ARG, GER, SUI, ESP, CRO, ROM, DEN, TCH, URS, CZ, SRB, CND, SWE'</li></ul> |
| Partial timestamp | <ul><li>'created_at: 12/17/20 21:39, 6/14/21 15:36, 11/2/20 09:02, 11/2/20 12:49, 11/2/20 19:02, 11/2/20 14:04, 11/2/20 17:37, 11/2/20 18:39, 11/2/20 18:40, 11/4/20 09:17, 11/4/20 10:29, 11/4/20 10:32, 11/4/20 10:38, 11/4/20 10:39, 11/28/20 21:14, 11/2/20 21:25, 11/2/20 21:32, 11/2/20 22:12, 11/2/20 23:30, 11/2/20 23:33'</li><li>'bibliography.publication.full: June, 1998, November, 1999, March, 1994, June 17, 2008, August 16, 2005, August 20, 2006, August 29, 2006, January 10, 2006, March, 2001, June, 2001, October 14, 1892, July, 1998, July, 2003, January, 1994, October 1997, August 16, 2013, February 11, 2006, June 9, 2008, January 1, 1870, April, 2001'</li><li>'Rating.Experience: Below, Same, None, Above'</li></ul> |
| Longitude | <ul><li>'Longitude: 2,77228900, 2,77461100, 2,77370600, 2,77423900, 2,77654400, 2,79937600, 2,78064700, 2,77697400, 2,78928200, 2,78032200, 2,77731200, 2,77121300, 2,77167600, 2,78236500, 2,76694300, 2,77139500, 2,76872200, 2,76741500, 2,77156700, 2,82065100'</li><li>'Longitude: 6.85, 2.97, 2.53, -4.02, 10.87, 11.93, 12.7, 14.139, 14.426, 13.897, 14.83, 15.213, 15.064, 14.933, 14.962, 14.999, 12.02, 14.399, 23.336, 24.439'</li><li>'long: 40.65531753386127, 35.52146509142811, 41.04610174058556, 37.25718863973695, 37.73038191275334, 38.78755702518432, 36.31538469187874, 38.3542649521305, 40.33741738725765, 36.831052736369664, 37.39711396680899, 38.28297641253209, 40.25037415629944, 39.12501528359793, 40.179108531876246, 38.165405118101205, 40.28234452941448, 37.1590112746327, 40.08056518798263, 38.45329795732872'</li></ul> |
| Country Name | <ul><li>'Geography: United States'</li><li>'location.citizenship: United States, Mexico, Switzerland, Spain, Hong Kong, Taiwan, Germany, Saudi Arabia, Japan, Sweden, France, Canada, Philippines, Indonesia, South Korea, Malaysia, Italy, Singapore, Nigeria, Brazil'</li><li>'Nation: Afghanistan, Albania, Algeria, Andorra, Angola, Antigua and Barbuda, Argentina, Armenia, Australia, Austria, Azerbaijan, Bahamas, The, Bahrain, Bangladesh, Barbados, Belarus, Belgium, Belize, Benin, Bhutan'</li></ul> |
| Boolean | <ul><li>'nationwide_batch: False'</li><li>'chasing: False, True'</li><li>'ID Workforce Status: True'</li></ul> |
| Short text | <ul><li>'data.title: Backs, Illustration to Judith Shakespeare, Tri-Boro Barber Shop, Portrait of the Engraver Francesco Bartolozzi, Lady in Niche, Assassinations, Zebe, TRANSFERENCE ZONE, Five Sunsets in One Hour, Cartmel Fell, Composition: River in a Gorge, Winters Sleep, Christs Cross and Adams Tree, Space Construction with a Spiral, Toy Sailing Boats, the Round Pond, The Mutilated, The Cruise, Cut Bottle Relief, The Struggle, Figures in a Garden'</li><li>'specific_location: None, on tree stump, on tree roots, under a tree, in b/w trees, tree, Branch, bush, hiding in bushes, On a rock, trash can, Under bench, near 65th St arch, in the European Beech, on tree knob, Behind fence, tree, tree near large rock on Bridle Path, Climbing tree, "FIELD", bottom of tree'</li><li>"Facility.Name: Southeast Alabama Medical Center, Marshall Medical Center South, Eliza Coffee Memorial Hospital, Mizell Memorial Hospital, Crenshaw Community Hospital, St Vincent's East, Dekalb Regional Medical Center, Shelby Baptist Medical Center, Callahan Eye Hospital, Helen Keller Memorial Hospital, Dale Medical Center, Floyd Cherokee Medical Center, Baptist Medical Center South, Jackson Hospital & Clinic Inc, East Alabama Medical Center, Tanner Medical Center-east Alabama, University Of Alabama Hospital, Community Hospital Inc, Cullman Regional Medical Center, Andalusia Health"</li></ul> |
| Slug | <ul><li>'Slug County: baldwin-county-al, calhoun-county-al, coffee-county-al, colbert-county-al, covington-county-al, cullman-county-al, dale-county-al, dallas-county-al, etowah-county-al, jackson-county-al, jefferson-county-al, lee-county-al, limestone-county-al, madison-county-al, marshall-county-al, mobile-county-al, montgomery-county-al, perry-county-al, pike-county-al, randolph-county-al'</li><li>'Slug Geography: united-states'</li><li>'Slug Detailed Occupation: physicians, physicians-surgeons, lawyers-judges-magistrates-other-judicial-workers, medical-health-services-managers, chief-executives-legislators, veterinarians, social-community-service-managers, securities-commodities-financial-services-sales-agents, petroleum-mining-geological-engineers-including-mining-safety-engineers, economists, miscellaneous-social-scientists-including-survey-researchers-sociologists, natural-sciences-managers, geoscientists-and-hydrologists-except-geographers, detectives-criminal-investigators, judicial-law-clerks, other-psychologists, architectural-engineering-managers, education-administrators, astronomers-physicists, public-relations-and-fundraising-managers'</li></ul> |
| Postal Code | <ul><li>'Code postal: 77700.0, nan'</li></ul> |
| Structured field | <ul><li>'SUSPECT_HEIGHT: 5.8, 6.2, 5.1, 5, 5.11, 5.5, 5.4, 5.7, 6.1, 6, 6.3, 5.6, 5.9, 6.4, 5.2, 6.5, 5.3, 4.11, , 2.2'</li><li>'SUSPECT_HEIGHT: 5.8, 6.2, 5.1, 5, 5.11, 5.5, 5.4, 5.7, 6.1, 6, 6.3, 5.6, 5.9, 6.4, 5.2, 6.5, 5.3, 4.11, , 2.2'</li><li>'Score: 6-3, 7-5, 6-2, 6-3, 6-4, 6-2, 6-4, 6-2, 6-2, 6-4, 6-4, 7-5, 6-1, 3-6, 6-1, 3-6, 6-4, 6-3, 6-3, 4-6, 8-6, 8-6, 6-1, 6-3, 8-6, 6-2, 6-4, 6-3, 6-2, 4-6, 3-6, 6-4, 6-8, 1-6, 6-2, 6-2, 6-2, 4-6, 7-5, 6-4, 6-4, 1-6, 6-3, 3-6, 6-4, 6-4, 6-4, 6-3, 3-6, 6-0, 6-1, 7-5, 8-6, 6-4, 4-6, 6-2, 6-3, 7-5, 6-3, 6-0, 6-2, 6-8, 5-7, 8-6, 6-3, 10-8, 6-2, 8-6, 3-6, 6-1, 6-3, 6-2, 6-3, 7-5, 6-0'</li></ul> |
| Alphanumeric identifier | <ul><li>'ID Geography: 04000US04, 04000US06, 04000US32, 04000US41'</li><li>'ID County: 05000US01003, 05000US01015, 05000US01031, 05000US01033, 05000US01039, 05000US01043, 05000US01045, 05000US01047, 05000US01055, 05000US01071, 05000US01073, 05000US01081, 05000US01083, 05000US01089, 05000US01095, 05000US01097, 05000US01101, 05000US01105, 05000US01109, 05000US01111'</li><li>'ID Geography: 01000US, 04000US04, 04000US06, 04000US32, 04000US41, 31000US31080, 31000US40140, 31000US41740, 31000US41860'</li></ul> |
| Color | <ul><li>'color: Yellow, Black, White'</li><li>'primary_fur_color: None, Gray, Cinnamon, Black'</li><li>'highlight_fur_color: None, Cinnamon, White, Gray, Cinnamon, White, Gray, White, Black, Cinnamon, White, Black, Black, White, Black, Cinnamon, Gray, Black'</li></ul> |
| Month Name | <ul><li>'MONTH2: January, February, March, April, May, June, July, August, September, October, November, December'</li><li>'bibliography.publication.month name: June, November, March, August, January, October, July, February, April, May, September, December'</li><li>'MONTH2: January, February, March, April, May, June, July, August, September, October, November, December'</li></ul> |
| Currency Code | <ul><li>'cur_name: AFN, DZD, AOA, ARS, AMD, AZN, BDT, INR, BYR, XOF, BTN, BOB, BIF, KHR, XAF, CVE, CNY, COP, USD, CDF'</li></ul> |
| Time | <ul><li>'STOP_FRISK_TIME: 14:26:00, 11:10:00, 11:35:00, 13:20:00, 21:25:00, 20:00:00, 19:58:00, 13:15:00, 8:16:00, 18:44:00, 22:30:00, 4:45:00, 18:30:00, 0:00:00, 9:58:00, 11:15:00, 13:00:00, 8:00:00, 14:57:00, 4:15:00'</li><li>'STOP_FRISK_TIME: 14:26:00, 11:10:00, 11:35:00, 13:20:00, 21:25:00, 20:00:00, 19:58:00, 13:15:00, 8:16:00, 18:44:00, 22:30:00, 4:45:00, 18:30:00, 0:00:00, 9:58:00, 11:15:00, 13:00:00, 8:00:00, 14:57:00, 4:15:00'</li><li>'STOP_FRISK_TIME: 14:26:00, 11:10:00, 11:35:00, 13:20:00, 21:25:00, 20:00:00, 19:58:00, 13:15:00, 8:16:00, 18:44:00, 22:30:00, 4:45:00, 18:30:00, 0:00:00, 9:58:00, 11:15:00, 13:00:00, 8:00:00, 14:57:00, 4:15:00'</li></ul> |
| Last Name | <ul><li>'candidat: Bush, Perot, Clinton'</li><li>'answer: Spanberger, Freitas, Eastman, Bacon, Schaeffer, Schupp, Wagner, Schulte, Balter, Katko, Williams, Hale, Spartz, Tucker, Elliott, Hill, Golden, Crafts, Newman, Fricilone'</li></ul> |
| U.S. State | <ul><li>'Slug Geography: california'</li><li>'state_name: Alabama, Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, District of Columbia, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine'</li><li>'state: Alabama, Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland'</li></ul> |
| Street Name | <ul><li>'STOP_LOCATION_STREET_NAME: GREENWICH STREET, WALL STREET, GREENE STREET, WEST BROADWAY, WEST STREET, CHAMBERS STREET, CORTLANDT STREET, FULTON STREET, CLIFF STREET, SPRING STREET, CEDAR STREET, LIBERTY STREET, BARCLAY STREET, BATTERY PLACE, MERCER STREET, BROADWAY, SOUTH STREET, THOMPSON STREET, JAY STREET, CHURCH STREET'</li><li>'STOP_LOCATION_STREET_NAME: GREENWICH STREET, WALL STREET, GREENE STREET, WEST BROADWAY, WEST STREET, CHAMBERS STREET, CORTLANDT STREET, FULTON STREET, CLIFF STREET, SPRING STREET, CEDAR STREET, LIBERTY STREET, BARCLAY STREET, BATTERY PLACE, MERCER STREET, BROADWAY, SOUTH STREET, THOMPSON STREET, JAY STREET, CHURCH STREET'</li><li>'STOP_LOCATION_STREET_NAME: GREENWICH STREET, WALL STREET, GREENE STREET, WEST BROADWAY, WEST STREET, CHAMBERS STREET, CORTLANDT STREET, FULTON STREET, CLIFF STREET, SPRING STREET, CEDAR STREET, LIBERTY STREET, BARCLAY STREET, BATTERY PLACE, MERCER STREET, BROADWAY, SOUTH STREET, THOMPSON STREET, JAY STREET, CHURCH STREET'</li></ul> |
| AM/PM | <ul><li>'shift: PM, AM'</li></ul> |
| Occupation | <ul><li>'Detailed Occupation: Physicians, Physicians & surgeons, Lawyers, & judges, magistrates, & other judicial workers, Medical & health services managers, Chief executives & legislators, Veterinarians, Social & community service managers, Securities, commodities, & financial services sales agents, Petroleum, mining & geological engineers, including mining safety engineers, Economists, Miscellaneous social scientists, including survey researchers & sociologists, Natural sciences managers, Geoscientists and hydrologists, except geographers, Detectives & criminal investigators, Judicial law clerks, Other psychologists, Architectural & engineering managers, Education administrators, Astronomers & physicists, Public relations and fundraising managers'</li><li>'occupation: Operatives, Craftsmen, Sales, Other, Managers/admin, Professional/technical, Clerical/unskilled, Laborers, Transport, Service, nan, Household workers, Farm laborers, Farmers'</li><li>'Detailed Occupation: Other managers, Cashiers, Retail salespersons, Driver/sales workers & truck drivers, Registered nurses'</li></ul> |
| Zip Code | <ul><li>'recipient_zip: 995084442, 99503, 995163436, 352124572, 35216, 35976, 358021277, 352174710, 35203, 35233, 35805, 72716, 72201, 72035, 72015, 72223, 72019, 72113, 72758, 72227'</li><li>'STOP_LOCATION_ZIP_CODE: (null), 20292, AVENUE, 5 AVEN, 10019, 22768, 10035, 10026, 10128, 24231, 10030, 10039, 23874, 11213, 11233, 100652, 10451, 23543, 100745, PROSPE'</li><li>'zip_codes: nan, 12081.0, 10090.0, 12423.0, 12420.0'</li></ul> |
| Company Name | <ul><li>"company.name: Microsoft, Berkshire Hathaway, Telmex, F. Hoffmann-La Roche, Zara, Henderson Land Development, Oracle, Lin Yuan Group, Aldi, Sun Hung Kai Properties, Kingdom Holding Company, Koch industries, Cheung king, Walmart, Seibu Corporation, Las Vegas Sands, Aldi Nord, Tetra Pak, BMW, L'Oreal"</li></ul> |
| First Name | <ul><li>'Top Name: Mary, Linda, Debra, Lisa, Michelle, Jennifer, Jessica, Samantha, Ashley, Hannah, Emily, Madison, Emma, Isabella, Sophia, Olivia, John, Robert, James, David'</li></ul> |
| Very short text | <ul><li>'above_ground_sighter_measurement: None, FALSE, 4, 3, 30, 10, 6, 24, 8, 25, 5, 50, 70, 12, 2, 20, 7, 13, 15, 28'</li><li>'review_reason_code: 2, 1, 4, None, 5, 3, 7, 3?, 8, D, ?, 3, 1, 1 or 2, D or 1, 7B, 1, 2, 1 OR 2, D OR 2, B, 4?'</li><li>'status: N, Y, REMOVE, None, 1, ?, H, R, M, T'</li></ul> |
| License Plate | <ul><li>'plate: AZIZ714, BATBOX1, BBOMBS, BEACHY1, BLK PWR5, BOT TAK, CHERIPI, CIO FTW, DAVES88, DMOBGFY, DOITFKR, EGGPUTT, F DIABDZ, FJ 666, FKK OFF, FKN BLAK, FLT ATCK, F LUPUS, HVNNHEL, H8DES'</li></ul> |
| URI | <ul><li>'url: https://docs.cdn.yougov.com/c5o6xiw8t9/econtoplines.pdf, https://docs.cdn.yougov.com/by8wjw1hur/econTabReport.pdf, https://subscriber.politicopro.com/newsletter/2022/12/who-supports-the-payment-pause-00072673, https://news.yahoo.com/new-poll-shows-stark-partisan-divide-when-it-comes-to-americans-view-of-schools-132510314.html, https://harvardharrispoll.com/wp-content/uploads/2022/12/HHP_Dec2022_KeyResults.pdf, https://echeloninsights.com/in-the-news/december-2022-omnibus-2/, https://docs.cdn.yougov.com/b94ttrxy3v/econtoplines.pdf, https://docs.cdn.yougov.com/urts2xadfd/econTabReport.pdf, https://www.foxnews.com/politics/fox-news-poll-americans-show-little-enthusiasm-biden-trump-rematch-2024, https://poll.qu.edu/poll-release?releaseid=3863, https://www.usatoday.com/story/news/politics/2022/12/13/trump-support-gop-2024-presidential-race-poll/10882346002/, https://www.wsj.com/articles/ron-desantis-holds-early-lead-over-donald-trump-among-gop-primary-voters-wsj-poll-shows-11670989311?mod=hp_lead_pos5, https://www.cnn.com/2022/12/14/politics/biden-trump-2024-poll/index.html, https://docs.cdn.yougov.com/dshfq4wqyr/econtoplines.pdf, https://docs.cdn.yougov.com/qsanp37uhh/econTabReport.pdf, https://www.yahoo.com/news/poll-trump-loses-ground-with-republicans-after-kanye-west-nick-fuentes-dinner-100204385.html, https://docs.cdn.yougov.com/regijt79ge/November_Georgia_Ads_joined_earlier.pdf, https://docs.cdn.yougov.com/xcvx6iu6em/November_Georgia_Ads_joined.pdf, https://news.gallup.com/poll/406892/party-images-stable-midterm-elections.aspx, https://docs.cdn.yougov.com/2zebcvjoec/econtoplines.pdf'</li></ul> |
| City Name | <ul><li>'Incident.Location.City: Shelton, Aloha, Wichita, San Francisco, Evans, Guthrie, Chandler, Assaria, Burlington, Knoxville, Stockton, Freeport, Columbus, Des Moines, New Orleans, Huntley, Salt Lake City, Strong, Syracuse, England'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.6705 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("quantisan/bge-small-en-v1.5-93dataset")
# Run inference
preds = model("variety: Western, Eastern")
```
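If per-label scores are needed, SetFit also exposes probability predictions. A minimal sketch (the column-sample string below is illustrative):

```python
# One row per input, one probability per label; `model` is the SetFitModel loaded above
proba = model.predict_proba(["plate: AZIZ714, BATBOX1, BBOMBS"])
print(proba.shape)
```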
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 24.0542 | 111 |
| Label | Training Sample Count |
|:------------------------|:----------------------|
| Categorical | 8 |
| Timestamp | 5 |
| Date | 8 |
| Integer | 8 |
| Partial timestamp | 4 |
| Short text | 8 |
| Very short text | 3 |
| AM/PM | 1 |
| Boolean | 8 |
| City Name | 1 |
| Color | 3 |
| Company Name | 1 |
| Country ISO Code | 2 |
| Country Name | 8 |
| Currency Code | 1 |
| Day of Month | 4 |
| Day of Week | 4 |
| First Name | 1 |
| Floating Point Number | 8 |
| Full Name | 8 |
| Last Name | 2 |
| Latitude | 4 |
| License Plate | 1 |
| Longitude | 4 |
| Month Name | 6 |
| Month Number | 4 |
| Occupation | 3 |
| Postal Code | 1 |
| Secondary Address | 1 |
| Slug | 8 |
| Street Address | 3 |
| Street Name | 3 |
| Time | 3 |
| U.S. State | 8 |
| U.S. State Abbreviation | 6 |
| URI | 1 |
| URL | 8 |
| Year | 8 |
| Zip Code | 4 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
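For reference, a minimal sketch of how these hyperparameters map onto the SetFit trainer API; the base model, toy dataset, and column names are assumptions, not taken from this card:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical toy dataset in the "column sample -> type" format used above
train_dataset = Dataset.from_dict({
    "text": ["year: 1999, 2000, 2001", "city: Paris, Lyon, Nice"],
    "label": ["Year", "City Name"],
})

model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5")  # assumed base model

args = TrainingArguments(
    batch_size=(8, 8),                  # (embedding phase, classifier phase)
    num_epochs=(4, 4),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```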
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0002 | 1 | 0.1131 | - |
| 0.0100 | 50 | 0.2113 | - |
| 0.0200 | 100 | 0.1906 | - |
| 0.0301 | 150 | 0.1843 | - |
| 0.0401 | 200 | 0.161 | - |
| 0.0501 | 250 | 0.1418 | - |
| 0.0601 | 300 | 0.131 | - |
| 0.0701 | 350 | 0.1224 | - |
| 0.0802 | 400 | 0.114 | - |
| 0.0902 | 450 | 0.1039 | - |
| 0.1002 | 500 | 0.0825 | - |
| 0.1102 | 550 | 0.0838 | - |
| 0.1202 | 600 | 0.0745 | - |
| 0.1303 | 650 | 0.0705 | - |
| 0.1403 | 700 | 0.0586 | - |
| 0.1503 | 750 | 0.0552 | - |
| 0.1603 | 800 | 0.0567 | - |
| 0.1703 | 850 | 0.0553 | - |
| 0.1804 | 900 | 0.0456 | - |
| 0.1904 | 950 | 0.0443 | - |
| 0.2004 | 1000 | 0.0422 | - |
| 0.2104 | 1050 | 0.0363 | - |
| 0.2204 | 1100 | 0.0397 | - |
| 0.2305 | 1150 | 0.0361 | - |
| 0.2405 | 1200 | 0.0269 | - |
| 0.2505 | 1250 | 0.0331 | - |
| 0.2605 | 1300 | 0.0278 | - |
| 0.2705 | 1350 | 0.0297 | - |
| 0.2806 | 1400 | 0.022 | - |
| 0.2906 | 1450 | 0.0265 | - |
| 0.3006 | 1500 | 0.0344 | - |
| 0.3106 | 1550 | 0.0218 | - |
| 0.3206 | 1600 | 0.0275 | - |
| 0.3307 | 1650 | 0.0275 | - |
| 0.3407 | 1700 | 0.0207 | - |
| 0.3507 | 1750 | 0.0156 | - |
| 0.3607 | 1800 | 0.0246 | - |
| 0.3707 | 1850 | 0.0154 | - |
| 0.3808 | 1900 | 0.0117 | - |
| 0.3908 | 1950 | 0.0201 | - |
| 0.4008 | 2000 | 0.0153 | - |
| 0.4108 | 2050 | 0.018 | - |
| 0.4208 | 2100 | 0.017 | - |
| 0.4309 | 2150 | 0.011 | - |
| 0.4409 | 2200 | 0.0158 | - |
| 0.4509 | 2250 | 0.015 | - |
| 0.4609 | 2300 | 0.0109 | - |
| 0.4709 | 2350 | 0.0151 | - |
| 0.4810 | 2400 | 0.0085 | - |
| 0.4910 | 2450 | 0.0121 | - |
| 0.5010 | 2500 | 0.0118 | - |
| 0.5110 | 2550 | 0.0083 | - |
| 0.5210 | 2600 | 0.0094 | - |
| 0.5311 | 2650 | 0.0078 | - |
| 0.5411 | 2700 | 0.0123 | - |
| 0.5511 | 2750 | 0.0085 | - |
| 0.5611 | 2800 | 0.0046 | - |
| 0.5711 | 2850 | 0.0081 | - |
| 0.5812 | 2900 | 0.0085 | - |
| 0.5912 | 2950 | 0.0064 | - |
| 0.6012 | 3000 | 0.0113 | - |
| 0.6112 | 3050 | 0.0087 | - |
| 0.6212 | 3100 | 0.0071 | - |
| 0.6313 | 3150 | 0.01 | - |
| 0.6413 | 3200 | 0.0093 | - |
| 0.6513 | 3250 | 0.0056 | - |
| 0.6613 | 3300 | 0.007 | - |
| 0.6713 | 3350 | 0.0076 | - |
| 0.6814 | 3400 | 0.0077 | - |
| 0.6914 | 3450 | 0.0038 | - |
| 0.7014 | 3500 | 0.0051 | - |
| 0.7114 | 3550 | 0.0063 | - |
| 0.7214 | 3600 | 0.004 | - |
| 0.7315 | 3650 | 0.0036 | - |
| 0.7415 | 3700 | 0.0043 | - |
| 0.7515 | 3750 | 0.0086 | - |
| 0.7615 | 3800 | 0.0051 | - |
| 0.7715 | 3850 | 0.0056 | - |
| 0.7816 | 3900 | 0.0042 | - |
| 0.7916 | 3950 | 0.0062 | - |
| 0.8016 | 4000 | 0.0058 | - |
| 0.8116 | 4050 | 0.0034 | - |
| 0.8216 | 4100 | 0.0062 | - |
| 0.8317 | 4150 | 0.0091 | - |
| 0.8417 | 4200 | 0.0056 | - |
| 0.8517 | 4250 | 0.0039 | - |
| 0.8617 | 4300 | 0.0072 | - |
| 0.8717 | 4350 | 0.0051 | - |
| 0.8818 | 4400 | 0.0025 | - |
| 0.8918 | 4450 | 0.0051 | - |
| 0.9018 | 4500 | 0.0049 | - |
| 0.9118 | 4550 | 0.0024 | - |
| 0.9218 | 4600 | 0.0026 | - |
| 0.9319 | 4650 | 0.0046 | - |
| 0.9419 | 4700 | 0.0024 | - |
| 0.9519 | 4750 | 0.0026 | - |
| 0.9619 | 4800 | 0.0045 | - |
| 0.9719 | 4850 | 0.0022 | - |
| 0.9820 | 4900 | 0.0042 | - |
| 0.9920 | 4950 | 0.0067 | - |
| 1.0 | 4990 | - | 0.0996 |
| 1.0020 | 5000 | 0.0044 | - |
| 1.0120 | 5050 | 0.0023 | - |
| 1.0220 | 5100 | 0.0025 | - |
| 1.0321 | 5150 | 0.004 | - |
| 1.0421 | 5200 | 0.002 | - |
| 1.0521 | 5250 | 0.0042 | - |
| 1.0621 | 5300 | 0.0028 | - |
| 1.0721 | 5350 | 0.006 | - |
| 1.0822 | 5400 | 0.0043 | - |
| 1.0922 | 5450 | 0.0065 | - |
| 1.1022 | 5500 | 0.0042 | - |
| 1.1122 | 5550 | 0.004 | - |
| 1.1222 | 5600 | 0.0045 | - |
| 1.1323 | 5650 | 0.0049 | - |
| 1.1423 | 5700 | 0.0042 | - |
| 1.1523 | 5750 | 0.0044 | - |
| 1.1623 | 5800 | 0.002 | - |
| 1.1723 | 5850 | 0.0037 | - |
| 1.1824 | 5900 | 0.0038 | - |
| 1.1924 | 5950 | 0.0071 | - |
| 1.2024 | 6000 | 0.0044 | - |
| 1.2124 | 6050 | 0.0031 | - |
| 1.2224 | 6100 | 0.0021 | - |
| 1.2325 | 6150 | 0.0019 | - |
| 1.2425 | 6200 | 0.002 | - |
| 1.2525 | 6250 | 0.0059 | - |
| 1.2625 | 6300 | 0.002 | - |
| 1.2725 | 6350 | 0.0036 | - |
| 1.2826 | 6400 | 0.0019 | - |
| 1.2926 | 6450 | 0.0041 | - |
| 1.3026 | 6500 | 0.0042 | - |
| 1.3126 | 6550 | 0.0062 | - |
| 1.3226 | 6600 | 0.002 | - |
| 1.3327 | 6650 | 0.0016 | - |
| 1.3427 | 6700 | 0.0019 | - |
| 1.3527 | 6750 | 0.0055 | - |
| 1.3627 | 6800 | 0.0042 | - |
| 1.3727 | 6850 | 0.0023 | - |
| 1.3828 | 6900 | 0.0018 | - |
| 1.3928 | 6950 | 0.0041 | - |
| 1.4028 | 7000 | 0.008 | - |
| 1.4128 | 7050 | 0.0021 | - |
| 1.4228 | 7100 | 0.0017 | - |
| 1.4329 | 7150 | 0.0021 | - |
| 1.4429 | 7200 | 0.0017 | - |
| 1.4529 | 7250 | 0.0035 | - |
| 1.4629 | 7300 | 0.002 | - |
| 1.4729 | 7350 | 0.0016 | - |
| 1.4830 | 7400 | 0.0014 | - |
| 1.4930 | 7450 | 0.0041 | - |
| 1.5030 | 7500 | 0.0053 | - |
| 1.5130 | 7550 | 0.0026 | - |
| 1.5230 | 7600 | 0.002 | - |
| 1.5331 | 7650 | 0.0017 | - |
| 1.5431 | 7700 | 0.0017 | - |
| 1.5531 | 7750 | 0.0016 | - |
| 1.5631 | 7800 | 0.0021 | - |
| 1.5731 | 7850 | 0.0039 | - |
| 1.5832 | 7900 | 0.0034 | - |
| 1.5932 | 7950 | 0.0061 | - |
| 1.6032 | 8000 | 0.0025 | - |
| 1.6132 | 8050 | 0.002 | - |
| 1.6232 | 8100 | 0.0017 | - |
| 1.6333 | 8150 | 0.0016 | - |
| 1.6433 | 8200 | 0.0015 | - |
| 1.6533 | 8250 | 0.0037 | - |
| 1.6633 | 8300 | 0.0015 | - |
| 1.6733 | 8350 | 0.0035 | - |
| 1.6834 | 8400 | 0.0023 | - |
| 1.6934 | 8450 | 0.0051 | - |
| 1.7034 | 8500 | 0.0041 | - |
| 1.7134 | 8550 | 0.0018 | - |
| 1.7234 | 8600 | 0.0016 | - |
| 1.7335 | 8650 | 0.0016 | - |
| 1.7435 | 8700 | 0.0013 | - |
| 1.7535 | 8750 | 0.0022 | - |
| 1.7635 | 8800 | 0.0015 | - |
| 1.7735 | 8850 | 0.0017 | - |
| 1.7836 | 8900 | 0.0035 | - |
| 1.7936 | 8950 | 0.0013 | - |
| 1.8036 | 9000 | 0.0015 | - |
| 1.8136 | 9050 | 0.0034 | - |
| 1.8236 | 9100 | 0.0013 | - |
| 1.8337 | 9150 | 0.0037 | - |
| 1.8437 | 9200 | 0.0037 | - |
| 1.8537 | 9250 | 0.0014 | - |
| 1.8637 | 9300 | 0.0014 | - |
| 1.8737 | 9350 | 0.0013 | - |
| 1.8838 | 9400 | 0.0016 | - |
| 1.8938 | 9450 | 0.0013 | - |
| 1.9038 | 9500 | 0.0038 | - |
| 1.9138 | 9550 | 0.0013 | - |
| 1.9238 | 9600 | 0.0038 | - |
| 1.9339 | 9650 | 0.0013 | - |
| 1.9439 | 9700 | 0.0012 | - |
| 1.9539 | 9750 | 0.0016 | - |
| 1.9639 | 9800 | 0.0053 | - |
| 1.9739 | 9850 | 0.0018 | - |
| 1.9840 | 9900 | 0.0036 | - |
| 1.9940 | 9950 | 0.0014 | - |
| 2.0 | 9980 | - | 0.1052 |
| 2.0040 | 10000 | 0.0012 | - |
| 2.0140 | 10050 | 0.0014 | - |
| 2.0240 | 10100 | 0.0013 | - |
| 2.0341 | 10150 | 0.0014 | - |
| 2.0441 | 10200 | 0.0012 | - |
| 2.0541 | 10250 | 0.0014 | - |
| 2.0641 | 10300 | 0.0013 | - |
| 2.0741 | 10350 | 0.0012 | - |
| 2.0842 | 10400 | 0.0013 | - |
| 2.0942 | 10450 | 0.0033 | - |
| 2.1042 | 10500 | 0.0013 | - |
| 2.1142 | 10550 | 0.003 | - |
| 2.1242 | 10600 | 0.0036 | - |
| 2.1343 | 10650 | 0.0013 | - |
| 2.1443 | 10700 | 0.0036 | - |
| 2.1543 | 10750 | 0.0037 | - |
| 2.1643 | 10800 | 0.0024 | - |
| 2.1743 | 10850 | 0.0038 | - |
| 2.1844 | 10900 | 0.0014 | - |
| 2.1944 | 10950 | 0.0012 | - |
| 2.2044 | 11000 | 0.0035 | - |
| 2.2144 | 11050 | 0.0015 | - |
| 2.2244 | 11100 | 0.0012 | - |
| 2.2345 | 11150 | 0.0012 | - |
| 2.2445 | 11200 | 0.0011 | - |
| 2.2545 | 11250 | 0.0035 | - |
| 2.2645 | 11300 | 0.0012 | - |
| 2.2745 | 11350 | 0.0011 | - |
| 2.2846 | 11400 | 0.0011 | - |
| 2.2946 | 11450 | 0.0011 | - |
| 2.3046 | 11500 | 0.0035 | - |
| 2.3146 | 11550 | 0.0012 | - |
| 2.3246 | 11600 | 0.0011 | - |
| 2.3347 | 11650 | 0.0011 | - |
| 2.3447 | 11700 | 0.0014 | - |
| 2.3547 | 11750 | 0.0011 | - |
| 2.3647 | 11800 | 0.0011 | - |
| 2.3747 | 11850 | 0.0012 | - |
| 2.3848 | 11900 | 0.0011 | - |
| 2.3948 | 11950 | 0.001 | - |
| 2.4048 | 12000 | 0.001 | - |
| 2.4148 | 12050 | 0.0011 | - |
| 2.4248 | 12100 | 0.0011 | - |
| 2.4349 | 12150 | 0.0011 | - |
| 2.4449 | 12200 | 0.001 | - |
| 2.4549 | 12250 | 0.0034 | - |
| 2.4649 | 12300 | 0.0011 | - |
| 2.4749 | 12350 | 0.0013 | - |
| 2.4850 | 12400 | 0.0012 | - |
| 2.4950 | 12450 | 0.0015 | - |
| 2.5050 | 12500 | 0.0011 | - |
| 2.5150 | 12550 | 0.0034 | - |
| 2.5251 | 12600 | 0.001 | - |
| 2.5351 | 12650 | 0.0011 | - |
| 2.5451 | 12700 | 0.0011 | - |
| 2.5551 | 12750 | 0.001 | - |
| 2.5651 | 12800 | 0.001 | - |
| 2.5752 | 12850 | 0.0034 | - |
| 2.5852 | 12900 | 0.0033 | - |
| 2.5952 | 12950 | 0.0011 | - |
| 2.6052 | 13000 | 0.001 | - |
| 2.6152 | 13050 | 0.001 | - |
| 2.6253 | 13100 | 0.0012 | - |
| 2.6353 | 13150 | 0.0011 | - |
| 2.6453 | 13200 | 0.0033 | - |
| 2.6553 | 13250 | 0.0034 | - |
| 2.6653 | 13300 | 0.001 | - |
| 2.6754 | 13350 | 0.001 | - |
| 2.6854 | 13400 | 0.0034 | - |
| 2.6954 | 13450 | 0.001 | - |
| 2.7054 | 13500 | 0.001 | - |
| 2.7154 | 13550 | 0.001 | - |
| 2.7255 | 13600 | 0.0009 | - |
| 2.7355 | 13650 | 0.001 | - |
| 2.7455 | 13700 | 0.001 | - |
| 2.7555 | 13750 | 0.0009 | - |
| 2.7655 | 13800 | 0.001 | - |
| 2.7756 | 13850 | 0.0009 | - |
| 2.7856 | 13900 | 0.0031 | - |
| 2.7956 | 13950 | 0.001 | - |
| 2.8056 | 14000 | 0.0031 | - |
| 2.8156 | 14050 | 0.0033 | - |
| 2.8257 | 14100 | 0.001 | - |
| 2.8357 | 14150 | 0.0009 | - |
| 2.8457 | 14200 | 0.0009 | - |
| 2.8557 | 14250 | 0.0009 | - |
| 2.8657 | 14300 | 0.001 | - |
| 2.8758 | 14350 | 0.001 | - |
| 2.8858 | 14400 | 0.0033 | - |
| 2.8958 | 14450 | 0.001 | - |
| 2.9058 | 14500 | 0.001 | - |
| 2.9158 | 14550 | 0.001 | - |
| 2.9259 | 14600 | 0.0033 | - |
| 2.9359 | 14650 | 0.001 | - |
| 2.9459 | 14700 | 0.0009 | - |
| 2.9559 | 14750 | 0.001 | - |
| 2.9659 | 14800 | 0.001 | - |
| 2.9760 | 14850 | 0.0009 | - |
| 2.9860 | 14900 | 0.0009 | - |
| 2.9960 | 14950 | 0.0009 | - |
| 3.0 | 14970 | - | 0.1077 |
| 3.0060 | 15000 | 0.0033 | - |
| 3.0160 | 15050 | 0.0009 | - |
| 3.0261 | 15100 | 0.0009 | - |
| 3.0361 | 15150 | 0.0009 | - |
| 3.0461 | 15200 | 0.0009 | - |
| 3.0561 | 15250 | 0.0008 | - |
| 3.0661 | 15300 | 0.001 | - |
| 3.0762 | 15350 | 0.0009 | - |
| 3.0862 | 15400 | 0.0009 | - |
| 3.0962 | 15450 | 0.0032 | - |
| 3.1062 | 15500 | 0.0009 | - |
| 3.1162 | 15550 | 0.0009 | - |
| 3.1263 | 15600 | 0.0009 | - |
| 3.1363 | 15650 | 0.0009 | - |
| 3.1463 | 15700 | 0.0008 | - |
| 3.1563 | 15750 | 0.0009 | - |
| 3.1663 | 15800 | 0.0009 | - |
| 3.1764 | 15850 | 0.0008 | - |
| 3.1864 | 15900 | 0.0008 | - |
| 3.1964 | 15950 | 0.0009 | - |
| 3.2064 | 16000 | 0.0009 | - |
| 3.2164 | 16050 | 0.0033 | - |
| 3.2265 | 16100 | 0.0031 | - |
| 3.2365 | 16150 | 0.0008 | - |
| 3.2465 | 16200 | 0.0008 | - |
| 3.2565 | 16250 | 0.0008 | - |
| 3.2665 | 16300 | 0.0008 | - |
| 3.2766 | 16350 | 0.0008 | - |
| 3.2866 | 16400 | 0.0008 | - |
| 3.2966 | 16450 | 0.0008 | - |
| 3.3066 | 16500 | 0.0009 | - |
| 3.3166 | 16550 | 0.0008 | - |
| 3.3267 | 16600 | 0.0032 | - |
| 3.3367 | 16650 | 0.0008 | - |
| 3.3467 | 16700 | 0.0008 | - |
| 3.3567 | 16750 | 0.0009 | - |
| 3.3667 | 16800 | 0.0031 | - |
| 3.3768 | 16850 | 0.0009 | - |
| 3.3868 | 16900 | 0.0008 | - |
| 3.3968 | 16950 | 0.0009 | - |
| 3.4068 | 17000 | 0.0009 | - |
| 3.4168 | 17050 | 0.0008 | - |
| 3.4269 | 17100 | 0.0009 | - |
| 3.4369 | 17150 | 0.0031 | - |
| 3.4469 | 17200 | 0.0032 | - |
| 3.4569 | 17250 | 0.0008 | - |
| 3.4669 | 17300 | 0.0008 | - |
| 3.4770 | 17350 | 0.0008 | - |
| 3.4870 | 17400 | 0.0008 | - |
| 3.4970 | 17450 | 0.0057 | - |
| 3.5070 | 17500 | 0.0032 | - |
| 3.5170 | 17550 | 0.0009 | - |
| 3.5271 | 17600 | 0.0052 | - |
| 3.5371 | 17650 | 0.0008 | - |
| 3.5471 | 17700 | 0.0009 | - |
| 3.5571 | 17750 | 0.0008 | - |
| 3.5671 | 17800 | 0.0008 | - |
| 3.5772 | 17850 | 0.0008 | - |
| 3.5872 | 17900 | 0.0008 | - |
| 3.5972 | 17950 | 0.0009 | - |
| 3.6072 | 18000 | 0.0032 | - |
| 3.6172 | 18050 | 0.0008 | - |
| 3.6273 | 18100 | 0.0008 | - |
| 3.6373 | 18150 | 0.0008 | - |
| 3.6473 | 18200 | 0.0008 | - |
| 3.6573 | 18250 | 0.0008 | - |
| 3.6673 | 18300 | 0.0008 | - |
| 3.6774 | 18350 | 0.0008 | - |
| 3.6874 | 18400 | 0.0008 | - |
| 3.6974 | 18450 | 0.0008 | - |
| 3.7074 | 18500 | 0.0008 | - |
| 3.7174 | 18550 | 0.0007 | - |
| 3.7275 | 18600 | 0.0008 | - |
| 3.7375 | 18650 | 0.0008 | - |
| 3.7475 | 18700 | 0.003 | - |
| 3.7575 | 18750 | 0.0008 | - |
| 3.7675 | 18800 | 0.0008 | - |
| 3.7776 | 18850 | 0.0008 | - |
| 3.7876 | 18900 | 0.0007 | - |
| 3.7976 | 18950 | 0.0008 | - |
| 3.8076 | 19000 | 0.0007 | - |
| 3.8176 | 19050 | 0.0007 | - |
| 3.8277 | 19100 | 0.0029 | - |
| 3.8377 | 19150 | 0.0007 | - |
| 3.8477 | 19200 | 0.0008 | - |
| 3.8577 | 19250 | 0.0031 | - |
| 3.8677 | 19300 | 0.0007 | - |
| 3.8778 | 19350 | 0.0007 | - |
| 3.8878 | 19400 | 0.0008 | - |
| 3.8978 | 19450 | 0.0008 | - |
| 3.9078 | 19500 | 0.0031 | - |
| 3.9178 | 19550 | 0.0008 | - |
| 3.9279 | 19600 | 0.0008 | - |
| 3.9379 | 19650 | 0.0007 | - |
| 3.9479 | 19700 | 0.0008 | - |
| 3.9579 | 19750 | 0.0008 | - |
| 3.9679 | 19800 | 0.0008 | - |
| 3.9780 | 19850 | 0.0008 | - |
| 3.9880 | 19900 | 0.0008 | - |
| 3.9980 | 19950 | 0.0007 | - |
| 4.0 | 19960 | - | 0.1050 |
### Framework Versions
- Python: 3.11.10
- SetFit: 1.1.0
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.4.1+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-small-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "metrics.statistics.average letter per word: 4.83, 4.41, 4.65, 4.77, 4.56, 4.36, 4.94, 4.78, 4.68, 4.61, 4.92, 4.6, 4.59, 4.95, 4.88, 4.63, 4.9, 4.31, 4.81, 4.8"}, {"text": "company.relationship: founder, None, founder/chairman, Relation, relation, CEO, chairman, investor, founder and CEO, founder/CEO, owner, chairman of management committee, founder and chairman, Chairman and Chief Executive Officer, general director, executive chairman, Chairman/founder, founder, chairman, ceo, former chairman and CEO, relation and chairman"}, {"text": "variety: Western, Eastern"}, {"text": "Data.Cholesterol: 14, 8, 12, 5, 2, 3, 4, 11, 15, 29, 34, 0, 6, 10, 13, 16, 17, 7, 59, 20"}, {"text": "Date.Full: 8/26/1990, 3/24/1991, 3/31/1991, 4/7/1991, 4/14/1991, 4/21/1991, 4/28/1991, 5/5/1991, 5/12/1991, 5/19/1991, 5/26/1991, 6/2/1991, 6/9/1991, 6/16/1991, 6/23/1991, 6/30/1991, 7/7/1991, 7/14/1991, 7/21/1991, 7/28/1991"}], "inference": true, "model-index": [{"name": "SetFit with BAAI/bge-small-en-v1.5", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.6705461056401074, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,431 |
RichardErkhov/asafaya_-_kanarya-2b-4bits
|
RichardErkhov
| null |
[
"safetensors",
"gptj",
"4-bit",
"bitsandbytes",
"region:us"
] | 2024-10-11T16:35:11Z |
2024-10-11T16:36:07+00:00
| 4 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
kanarya-2b - bnb 4bits
- Model creator: https://huggingface.co/asafaya/
- Original model: https://huggingface.co/asafaya/kanarya-2b/
Original model description:
---
license: apache-2.0
datasets:
- oscar
- mc4
language:
- tr
pipeline_tag: text-generation
widget:
- text: "Benim adım Zeynep, ve en sevdiğim kitabın adı:"
example_title: "Benim adım Zeynep, ve en sevdiğim kitabın adı"
- text: "Bugünkü yemeğimiz"
example_title: "Bugünkü yemeğimiz"
---
# Kanarya-2B: Turkish Language Model
<img src="https://asafaya.me/images/kanarya.webp" alt="Kanarya Logo" style="width:600px;"/>
**Kanarya** is a pre-trained Turkish GPT-J 2B model. Released as part of [Turkish Data Depository](https://tdd.ai/) efforts, the Kanarya family has two versions (Kanarya-2B, Kanarya-0.7B). Kanarya-2B is the larger version and Kanarya-0.7B is the smaller version. Both models are trained on a large-scale Turkish text corpus, filtered from OSCAR and mC4 datasets. The training data is collected from various sources, including news, articles, and websites, to create a diverse and high-quality dataset. The models are trained using a JAX/Flax implementation of the [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax) architecture.
## Model Details
- Model Name: Kanarya-2B
- Model Size: 2,050M parameters
- Training Data: OSCAR, mC4
- Language: Turkish
- Layers: 24
- Hidden Size: 2560
- Number of Heads: 20
- Context Size: 2048
- Positional Embeddings: Rotary
- Vocabulary Size: 32,768
## Intended Use
This model is pre-trained only on Turkish text data and is intended to be fine-tuned on a wide range of Turkish NLP tasks, including text generation, translation, and summarization. It is not intended to be used for any downstream task without fine-tuning.
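For a quick sanity check, the quantized checkpoint loads like any other transformers causal LM. A minimal generation sketch, assuming the bitsandbytes 4-bit config is embedded in the checkpoint (typical for these quants) and a CUDA device is available:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/asafaya_-_kanarya-2b-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Turkish prompt taken from the widget examples above
inputs = tokenizer("Bugünkü yemeğimiz", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```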
## Limitations and Ethical Considerations
The model is trained on a diverse and high-quality Turkish text corpus, but it may still generate toxic, biased, or unethical content. It is highly recommended to verify that generated content is appropriate for your use case. Please use the model responsibly and report any issues.
## License: Apache 2.0
The model is licensed under the Apache 2.0 License. It is free to use for any purpose, including commercial use. We encourage users to contribute to the model and report any issues. However, the model is provided "as is" without warranty of any kind.
## Citation
If you use the model, please cite the following paper:
```bibtex
@inproceedings{safaya-etal-2022-mukayese,
title = "Mukayese: {T}urkish {NLP} Strikes Back",
author = "Safaya, Ali and
Kurtulu{\c{s}}, Emirhan and
Goktogan, Arda and
Yuret, Deniz",
editor = "Muresan, Smaranda and
Nakov, Preslav and
Villavicencio, Aline",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.69",
doi = "10.18653/v1/2022.findings-acl.69",
pages = "846--863",
}
```
## Acknowledgments
During this work, Ali Safaya was supported by a [KUIS AI Center](https://ai.ku.edu.tr/) fellowship. Moreover, the pre-training of these models was performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center ([TRUBA](https://www.truba.gov.tr/index.php/en/main-page/) resources).
| null |
Non_BioNLP
|
|
{}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 45,432 |
Helsinki-NLP/opus-mt-fr-war
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"war",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:37:37+00:00
| 41 | 0 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-fr-war
* source languages: fr
* target languages: war
* OPUS readme: [fr-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-war/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.war | 33.7 | 0.538 |
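A minimal translation sketch using the standard Marian interface in transformers (the French sentence is illustrative, not taken from the test set):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-war"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# French in, Waray out
batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```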
| null |
Non_BioNLP
|
|
{"license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 45,433 |
SharadAI/bart-cnn-sharad-summarization
|
SharadAI
|
text2text-generation
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-11T11:22:33Z |
2024-07-11T11:23:27+00:00
| 92 | 0 |
---
base_model: facebook/bart-large-cnn
datasets:
- samsum
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-cnn-sharad-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-sharad-summarization
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1344
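A minimal inference sketch with the summarization pipeline; the dialogue and generation settings are illustrative, written in the style of samsum:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="SharadAI/bart-cnn-sharad-summarization")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False))
```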
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0887 | 1.0 | 37 | 0.1344 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
|
{"base_model": "facebook/bart-large-cnn", "datasets": ["samsum"], "license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "bart-cnn-sharad-summarization", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,434 |
Tonic/paligemma-3b-pt-896
|
Tonic
| null |
[
"license:gemma",
"region:us"
] | 2024-05-14T18:32:00Z |
2024-05-14T18:52:54+00:00
| 0 | 1 |
---
license: gemma
---
# Model Details
**PaliGemma** model card
**Model page:** PaliGemma
**Authors:** Google
## Model information
### Model summary
#### Description
PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by PaLI-3 and based on open components such as the SigLIP vision model and the Gemma language model. It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation.
#### Model architecture
PaliGemma is the composition of a Transformer decoder and a Vision Transformer image encoder, with a total of 3 billion params. The text decoder is initialized from Gemma-2B. The image encoder is initialized from SigLIP-So400m/14. PaliGemma is trained following the PaLI-3 recipes.
#### Inputs and outputs
**Input:** Image and text string, such as a prompt to caption the image, or a question.
**Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords.
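To illustrate this contract, a minimal sketch using the transformers PaliGemma classes; the checkpoint id, prompt prefix, and image URL are assumptions, not part of this card:

```python
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-pt-896"  # assumed id for the 896px checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)
inputs = processor(text="caption en", images=image, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0], skip_special_tokens=True))
```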
### Model data
#### Pre-train datasets
PaliGemma is pre-trained on the following mixture of datasets:
- WebLI: WebLI (Web Language Image) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, multilinguality, etc.
- CC3M-35L: Curated English image-alt_text pairs from webpages (Sharma et al., 2018). We used the Google Cloud Translation API to translate into 34 additional languages.
- VQ²A-CC3M-35L/VQG-CC3M-35L: A subset of VQ2A-CC3M (Changpinyo et al., 2022a), translated into the same additional 34 languages as CC3M-35L, using the Google Cloud Translation API.
- OpenImages: Detection and object-aware questions and answers (Piergiovanni et al. 2022) generated by handcrafted rules on the OpenImages dataset.
- WIT: Images and texts collected from Wikipedia (Srinivasan et al., 2021).
#### Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma on clean data:
- Pornographic image filtering: This filter removes images deemed to be of pornographic nature.
- Text safety filtering: We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about CSAI, pornography, vulgarities, or otherwise offensive.
- Text toxicity filtering: We further use the Perspective API to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic.
- Text personal information filtering: We filtered certain personal information and other sensitive data using Cloud Data Loss Prevention (DLP) API to protect the privacy of individuals. Identifiers such as social security numbers and other sensitive information types were removed.
- Additional methods: Filtering based on content quality and safety in line with our policies and practices.
### Implementation information
#### Hardware
PaliGemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e).
#### Software
Training was done using JAX, Flax, TFDS and big_vision.
JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma fine-tune code and inference code are released in the big_vision GitHub repository.
### Evaluation information
#### Benchmark results
In order to verify the transferability of PaliGemma to a wide variety of academic tasks, we fine-tune the pretrained models on each task. Additionally we train the mix model with a mixture of the transfer tasks. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data.
### Ethics and safety
#### Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:
- Human evaluation on prompts covering child safety, content safety and representational harms. See the Gemma model card for more details on evaluation approach, but with image captioning and visual question answering setups.
- Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset (Karkkainen et al., 2021).
#### Evaluation results
The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting internal policies for categories such as child safety, content safety and representational harms.
On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes.
### Usage and limitations
#### Intended usage
Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.
**Fine-tune on specific vision-language task:**
- The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation.
- The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, and describing UI element functionalities.
- The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks.

**Vision-language research:**
- The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field.
### Ethical considerations and risks
The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:
### Bias and Fairness
VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card.
### Misinformation and Misuse
VLMs can be misused to generate text that is false, misleading, or harmful. Guidelines are provided for responsible use with the model; see the Responsible Generative AI Toolkit.
### Transparency and Accountability
This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.
Risks identified and mitigations:
- Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases.
- Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
- Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the Gemma Prohibited Use Policy.
- Privacy violations: Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
### Limitations
Most limitations inherited from the underlying Gemma model still apply:
- VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
- Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language.
- VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
- VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations.
- PaliGemma was designed first and foremost to serve as a general pre-trained model for transfer to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for that.
- PaliGemma is not a multi-turn chatbot. It is designed for a single round of image and text input.
| null |
Non_BioNLP
|
|
{"license": "gemma"}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] | 45,435 |
marumarukun/BAAI-bge-large-en-v1.5_fine_tuned_fold3_20241117_105432_boosting_1
|
marumarukun
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-17T08:46:08Z |
2024-11-17T08:46:42+00:00
| 4 | 0 |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("marumarukun/BAAI-bge-large-en-v1.5_fine_tuned_fold3_20241117_105432_boosting_1")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.12.7
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.4.0+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("marumarukun/BAAI-bge-large-en-v1.5_fine_tuned_fold3_20241117_105432_boosting_1")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
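For completeness, here is a minimal sketch of equivalent encoding with plain `transformers`, assuming the CLS-pooling-plus-normalization architecture shown above (a sketch, untested against this checkpoint):
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "marumarukun/BAAI-bge-large-en-v1.5_fine_tuned_fold3_20241117_105432_boosting_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

sentences = ["The weather is lovely today.", "It's so sunny outside!"]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# CLS pooling (pooling_mode_cls_token=True) followed by L2 normalization,
# mirroring the Pooling and Normalize modules in the architecture above.
embeddings = outputs.last_hidden_state[:, 0]
embeddings = F.normalize(embeddings, p=2, dim=1)

# On normalized vectors, cosine similarity reduces to a dot product.
print(embeddings @ embeddings.T)
```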
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.12.7
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.4.0+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,436 |
yiiino/deberta-v3-base-cola
|
yiiino
|
text-classification
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-12-04T01:11:48Z |
2022-12-04T03:03:39+00:00
| 15 | 0 |
---
datasets:
- glue
language:
- en
license: mit
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base-cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.6875144669936191
name: Matthews Correlation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-cola
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7360
- Matthews Correlation: 0.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
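For quick experimentation, a minimal inference sketch using the text-classification pipeline follows. The human-readable meaning of the output labels is an assumption based on the usual CoLA convention (0 = unacceptable, 1 = acceptable); the actual label names depend on this checkpoint's config.
```python
from transformers import pipeline

# Load the fine-tuned CoLA checkpoint as a text-classification pipeline.
classifier = pipeline("text-classification", model="yiiino/deberta-v3-base-cola")

# CoLA is a grammatical-acceptability task: one label for acceptable
# sentences, one for unacceptable ones.
print(classifier("The book was written by the author."))
print(classifier("The book was wrote by the author."))
```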
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-cola
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7360
- Matthews Correlation: 0.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
{"datasets": ["glue"], "language": ["en"], "license": "mit", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "deberta-v3-base-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.6875144669936191, "name": "Matthews Correlation"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,437 |
davelotito/donut_experiment_bayesian_trial_9
|
davelotito
|
image-text-to-text
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2024-06-26T13:14:04Z |
2024-06-26T13:56:22+00:00
| 4 | 0 |
---
base_model: naver-clova-ix/donut-base
license: mit
metrics:
- bleu
- wer
tags:
- generated_from_trainer
model-index:
- name: donut_experiment_bayesian_trial_9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_experiment_bayesian_trial_9
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5922
- Bleu: 0.0587
- Precisions: [0.735655737704918, 0.6310904872389791, 0.5668449197860963, 0.5173501577287066]
- Brevity Penalty: 0.0967
- Length Ratio: 0.2998
- Translation Length: 488
- Reference Length: 1628
- Cer: 0.7688
- Wer: 0.8559
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.393489197537874e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 1.1982 | 1.0 | 253 | 0.7300 | 0.0555 | [0.6828282828282828, 0.5616438356164384, 0.5091863517060368, 0.45987654320987653] | 0.1014 | 0.3041 | 495 | 1628 | 0.7812 | 0.8659 |
| 0.5755 | 2.0 | 506 | 0.5922 | 0.0587 | [0.735655737704918, 0.6310904872389791, 0.5668449197860963, 0.5173501577287066] | 0.0967 | 0.2998 | 488 | 1628 | 0.7688 | 0.8559 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.19.1
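The card does not include usage code; a minimal inference sketch for a Donut-style vision-encoder-decoder checkpoint follows. The input file name and the `<s>` task prompt are assumptions — the card does not state which task token this experiment was trained with.
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "davelotito/donut_experiment_bayesian_trial_9"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)
model.eval()

image = Image.open("document.png").convert("RGB")  # hypothetical input document
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut models are prompted with a task token; "<s>" is a placeholder here,
# since the training prompt format is not documented in the card.
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)

print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```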
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_experiment_bayesian_trial_9
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5922
- Bleu: 0.0587
- Precisions: [0.735655737704918, 0.6310904872389791, 0.5668449197860963, 0.5173501577287066]
- Brevity Penalty: 0.0967
- Length Ratio: 0.2998
- Translation Length: 488
- Reference Length: 1628
- Cer: 0.7688
- Wer: 0.8559
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.393489197537874e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 1.1982 | 1.0 | 253 | 0.7300 | 0.0555 | [0.6828282828282828, 0.5616438356164384, 0.5091863517060368, 0.45987654320987653] | 0.1014 | 0.3041 | 495 | 1628 | 0.7812 | 0.8659 |
| 0.5755 | 2.0 | 506 | 0.5922 | 0.0587 | [0.735655737704918, 0.6310904872389791, 0.5668449197860963, 0.5173501577287066] | 0.0967 | 0.2998 | 488 | 1628 | 0.7688 | 0.8559 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.19.1
|
{"base_model": "naver-clova-ix/donut-base", "license": "mit", "metrics": ["bleu", "wer"], "tags": ["generated_from_trainer"], "model-index": [{"name": "donut_experiment_bayesian_trial_9", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,438 |
japanese-asr/en-cascaded-s2t-translation
|
japanese-asr
|
automatic-speech-recognition
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-09-25T15:56:33Z |
2024-09-28T09:57:55+00:00
| 21 | 1 |
---
library_name: transformers
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
---
# Cascaded English Speech2Text Translation
This is a pipeline for speech-to-text translation from English speech to text in any target language, based on a cascaded approach that consists of ASR followed by translation.
The pipeline employs [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) for ASR (English speech -> English text)
and [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) for text translation.
The input must be English speech, while the translation can be in any language NLLB was trained on. Please find all available languages and their language codes
[here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200).
**Model for Japanese speech translation is available at [ja-cascaded-s2t-translation](https://huggingface.co/japanese-asr/ja-cascaded-s2t-translation).**
## Benchmark
The following table shows CER computed over the reference and predicted translations for the task of translating English speech to Japanese text
(subsets of [CoVoST2 and Fleurs](https://huggingface.co/datasets/japanese-asr/en2ja.s2t_translation)) with different sizes of NLLB, along with OpenAI Whisper models.
| model | [CoVoST2 (En->Ja)](https://huggingface.co/datasets/japanese-asr/en2ja.s2t_translation)| [Fleurs (En->JA)](https://huggingface.co/datasets/japanese-asr/en2ja.s2t_translation) |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------:|------------------------------------------------------------------------------------------------------:|
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B)) | 62.4 | 63.5 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B)) | 64.4 | 67.2 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-distilled-1.3B](https://huggingface.co/facebook/nllb-200-distilled-1.3B)) | 62.4 | 62.9 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M)) | 63.4 | 66.2 |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 178.9 | 209.5 |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 179.6 | 201.8 |
| [openai/whisper-large](https://huggingface.co/openai/whisper-large) | 178.7 | 201.8 |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) | 178.7 | 202 |
| [openai/whisper-small](https://huggingface.co/openai/whisper-small) | 178.9 | 206.8 |
| [openai/whisper-base](https://huggingface.co/openai/whisper-base) | 179.5 | 214.2 |
| [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 185.2 | 200.5 |
See [https://github.com/kotoba-tech/kotoba-whisper](https://github.com/kotoba-tech/kotoba-whisper) for the evaluation detail.
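As a rough illustration of how CER scores like those above can be computed, here is a sketch using the `evaluate` library (the reference/prediction strings below are placeholders, not taken from the benchmark data):
```python
# pip install evaluate jiwer
import evaluate

# Character error rate between reference translations and model outputs.
cer = evaluate.load("cer")

references = ["私たちは完全な文だけが必要です。"]  # placeholder reference translation
predictions = ["私たちは完全な文が必要です。"]    # placeholder model output

print(cer.compute(references=references, predictions=predictions))
```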
### Inference Speed
Due to the nature of the cascaded approach, the pipeline has additional complexity compared to the single end-to-end OpenAI Whisper models, in exchange for higher accuracy.
The following table shows the mean inference time in seconds, averaged over 10 trials, on audio samples of different durations.
| model | 10 | 30 | 60 | 300 |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------:|------:|------:|------:|
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B)) | 0.173 | 0.247 | 0.352 | 1.772 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B)) | 0.173 | 0.24 | 0.348 | 1.515 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-distilled-1.3B](https://huggingface.co/facebook/nllb-200-distilled-1.3B)) | 0.17 | 0.245 | 0.348 | 1.882 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M)) | 0.108 | 0.179 | 0.283 | 1.33 |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 0.061 | 0.184 | 0.372 | 1.804 |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 0.062 | 0.199 | 0.415 | 1.854 |
| [openai/whisper-large](https://huggingface.co/openai/whisper-large) | 0.062 | 0.183 | 0.363 | 1.899 |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) | 0.045 | 0.132 | 0.266 | 1.368 |
| [openai/whisper-small](https://huggingface.co/openai/whisper-small) | 0.135 | 0.376 | 0.631 | 3.495 |
| [openai/whisper-base](https://huggingface.co/openai/whisper-base) | 0.054 | 0.108 | 0.231 | 1.019 |
| [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 0.045 | 0.124 | 0.208 | 0.838 |
## Usage
Here is an example to translate English speech into Japanese text translation.
First, download a sample speech.
```bash
wget https://huggingface.co/datasets/japanese-asr/en_asr.esb_eval/resolve/main/sample.wav -O sample_en.wav
```
Then, run the pipeline as below.
```python3
from transformers import pipeline
# load model
pipe = pipeline(
model="japanese-asr/en-cascaded-s2t-translation",
model_translation="facebook/nllb-200-distilled-600M",
tgt_lang="jpn_Jpan",
model_kwargs={"attn_implementation": "sdpa"},
chunk_length_s=15,
trust_remote_code=True,
)
# translate
output = pipe("./sample_en.wav")
```
Other NLLB models can be used by setting `model_translation`, such as the following.
- [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B)
- [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M)
- [facebook/nllb-200-distilled-1.3B](https://huggingface.co/facebook/nllb-200-distilled-1.3B)
- [facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B)
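For other target languages, only the FLORES-200 code passed as `tgt_lang` needs to change. For example, a sketch targeting French (`fra_Latn`), under the same setup as the usage example above:
```python
from transformers import pipeline

# Same cascaded pipeline, but producing French text (FLORES-200 code "fra_Latn").
pipe = pipeline(
    model="japanese-asr/en-cascaded-s2t-translation",
    model_translation="facebook/nllb-200-distilled-600M",
    tgt_lang="fra_Latn",
    chunk_length_s=15,
    trust_remote_code=True,
)
print(pipe("./sample_en.wav"))
```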
| null |
Non_BioNLP
|
# Cascaded English Speech2Text Translation
This is a pipeline for speech-to-text translation from English speech to text in any target language, based on a cascaded approach that consists of ASR followed by translation.
The pipeline employs [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) for ASR (English speech -> English text)
and [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) for text translation.
The input must be English speech, while the translation can be in any language NLLB was trained on. Please find all available languages and their language codes
[here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200).
**Model for Japanese speech translation is available at [ja-cascaded-s2t-translation](https://huggingface.co/japanese-asr/ja-cascaded-s2t-translation).**
## Benchmark
The following table shows CER computed over the reference and predicted translations for the task of translating English speech to Japanese text
(subsets of [CoVoST2 and Fleurs](https://huggingface.co/datasets/japanese-asr/en2ja.s2t_translation)) with different sizes of NLLB, along with OpenAI Whisper models.
| model | [CoVoST2 (En->Ja)](https://huggingface.co/datasets/japanese-asr/en2ja.s2t_translation)| [Fleurs (En->JA)](https://huggingface.co/datasets/japanese-asr/en2ja.s2t_translation) |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------:|------------------------------------------------------------------------------------------------------:|
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B)) | 62.4 | 63.5 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B)) | 64.4 | 67.2 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-distilled-1.3B](https://huggingface.co/facebook/nllb-200-distilled-1.3B)) | 62.4 | 62.9 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M)) | 63.4 | 66.2 |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 178.9 | 209.5 |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 179.6 | 201.8 |
| [openai/whisper-large](https://huggingface.co/openai/whisper-large) | 178.7 | 201.8 |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) | 178.7 | 202 |
| [openai/whisper-small](https://huggingface.co/openai/whisper-small) | 178.9 | 206.8 |
| [openai/whisper-base](https://huggingface.co/openai/whisper-base) | 179.5 | 214.2 |
| [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 185.2 | 200.5 |
See [https://github.com/kotoba-tech/kotoba-whisper](https://github.com/kotoba-tech/kotoba-whisper) for the evaluation detail.
### Inference Speed
Due to the nature of the cascaded approach, the pipeline has additional complexity compared to the single end-to-end OpenAI Whisper models, in exchange for higher accuracy.
The following table shows the mean inference time in seconds, averaged over 10 trials, on audio samples of different durations.
| model | 10 | 30 | 60 | 300 |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------:|------:|------:|------:|
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B)) | 0.173 | 0.247 | 0.352 | 1.772 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B)) | 0.173 | 0.24 | 0.348 | 1.515 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-distilled-1.3B](https://huggingface.co/facebook/nllb-200-distilled-1.3B)) | 0.17 | 0.245 | 0.348 | 1.882 |
| [japanese-asr/en-cascaded-s2t-translation](https://huggingface.co/japanese-asr/en-cascaded-s2t-translation) ([facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M)) | 0.108 | 0.179 | 0.283 | 1.33 |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 0.061 | 0.184 | 0.372 | 1.804 |
| [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 0.062 | 0.199 | 0.415 | 1.854 |
| [openai/whisper-large](https://huggingface.co/openai/whisper-large) | 0.062 | 0.183 | 0.363 | 1.899 |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) | 0.045 | 0.132 | 0.266 | 1.368 |
| [openai/whisper-small](https://huggingface.co/openai/whisper-small) | 0.135 | 0.376 | 0.631 | 3.495 |
| [openai/whisper-base](https://huggingface.co/openai/whisper-base) | 0.054 | 0.108 | 0.231 | 1.019 |
| [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 0.045 | 0.124 | 0.208 | 0.838 |
## Usage
Here is an example to translate English speech into Japanese text translation.
First, download a sample speech.
```bash
wget https://huggingface.co/datasets/japanese-asr/en_asr.esb_eval/resolve/main/sample.wav -O sample_en.wav
```
Then, run the pipeline as below.
```python3
from transformers import pipeline
# load model
pipe = pipeline(
model="japanese-asr/en-cascaded-s2t-translation",
model_translation="facebook/nllb-200-distilled-600M",
tgt_lang="jpn_Jpan",
model_kwargs={"attn_implementation": "sdpa"},
chunk_length_s=15,
trust_remote_code=True,
)
# translate
output = pipe("./sample_en.wav")
```
Other NLLB models can be used by setting `model_translation`, such as the following.
- [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B)
- [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M)
- [facebook/nllb-200-distilled-1.3B](https://huggingface.co/facebook/nllb-200-distilled-1.3B)
- [facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B)
|
{"library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "automatic-speech-recognition", "tags": ["audio"]}
|
task
|
[
"TRANSLATION"
] | 45,439 |
emre/gemma-2-9b-Turkish-Lora-Continue-Pre-Trained
|
emre
|
text-generation
|
[
"peft",
"safetensors",
"continue_pre_train",
"text-generation",
"tr",
"dataset:Alaeddin/wikipedia-turkish",
"base_model:google/gemma-2-9b",
"base_model:adapter:google/gemma-2-9b",
"license:gemma",
"region:us"
] | 2025-03-15T21:43:23Z |
2025-03-15T23:55:06+00:00
| 32 | 2 |
---
base_model:
- google/gemma-2-9b
datasets:
- Alaeddin/wikipedia-turkish
language:
- tr
library_name: peft
license: gemma
metrics:
- perplexity
pipeline_tag: text-generation
tags:
- continue_pre_train
---
# emre/gemma-2-9b-Turkish-Lora-Continue-Pre-Trained
## Model Details
- **Commercially only available to BBVA Group, Free to use for academic and research purposes**
### Model Description
This model is a continued pre-trained version of the `google/gemma-2-9b` base model, trained on the Turkish Wikipedia dataset (`Alaeddin/wikipedia-turkish`). The fine-tuning was performed using Low-Rank Adaptation (LoRA) to efficiently adapt the model to the Turkish language. This model aims to improve the base model's understanding and generation capabilities for Turkish text.
- **Developed by:** Emre Tasar, PhDc University of Navarra / Data Scientist
- **Funded by:** Self-funded research project (820 Google Colab compute hours).
- **Shared by:** Emre Tasar ([https://huggingface.co/emre](https://huggingface.co/emre))
- **Model type:** Causal Language Model
- **Language(s) (NLP):** Turkish (tr)
- **License:** Gemma; commercial use limited to BBVA Group, free for academic and research use
- **Continue Pre Trained from model:** [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b)
### Model Sources
- **Repository:** [https://huggingface.co/emre/gemma-2-9b-Turkish-Lora-Continue-Pre-Trained](https://huggingface.co/emre/gemma-2-9b-Turkish-Lora-Continue-Pre-Trained)
## Uses
### Direct Use
This model can be used for generating Turkish text for various natural language processing tasks, such as:
- Text generation
- Language modeling
- Creative writing
- Answering questions based on Turkish text (with appropriate prompting)
It is intended for researchers, developers, and enthusiasts interested in exploring and utilizing large language models for the Turkish language.
### Downstream Use
This model can serve as a strong base for further fine-tuning on specific downstream tasks in Turkish, such as:
- Turkish text summarization
- Turkish question answering
- Turkish text classification
- Turkish dialogue generation
### Out-of-Scope Use
This model should not be used for generating harmful, unethical, or biased content. As a language model trained on a large corpus of text, it may inadvertently generate such content. Users should exercise caution and responsibility when deploying this model.
## Bias, Risks, and Limitations
The model was trained on the Turkish Wikipedia dataset, which may contain biases present in the original data. The model's performance may vary depending on the specific task and domain. Users should be aware of these limitations and conduct thorough evaluations for their specific use cases.
### Recommendations
Users should carefully evaluate the model's output and consider potential biases before deploying it in real-world applications. Further fine-tuning on task-specific and diverse Turkish datasets can help mitigate some of these limitations.
## Training Details
### Training Data
The model was trained on the [Alaeddin/wikipedia-turkish](https://huggingface.co/datasets/Alaeddin/wikipedia-turkish) dataset:
- **Training Split:** 1,620,000 paragraphs.
- **Validation Split:** 1,000 paragraphs (disjoint from the training set).
### Training Procedure
The model was trained using the Hugging Face `Trainer` API on a Google Colab Pro+ instance with an A100 GPU (40GB). Key settings include:
- **Quantization:** 4-bit with NF4 type and double quantization (`BitsAndBytesConfig`).
- **LoRA Configuration** (see the sketch after this list):
- Rank (`r`): 8
- Alpha (`lora_alpha`): 32
- Target Modules: `q_proj`, `v_proj`
- Dropout: 0.1
- **Training Arguments:**
- Epochs: 1
- Effective Batch Size: 8 (`per_device_train_batch_size=2`, `gradient_accumulation_steps=4`)
- Learning Rate: 2e-5
- Scheduler: Linear with 500 warmup steps
- Mixed Precision: FP16
- Evaluation Frequency: Every 5,000 steps
- Total Steps: 202,500
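For reference, a minimal sketch of how the LoRA settings listed above map onto the `peft` API. This is a reconstruction from the card, not the original training script; the `bias` and `task_type` values are assumptions.
```python
from peft import LoraConfig, get_peft_model

# LoRA settings as listed in the card above.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.1,
    bias="none",          # assumption; not stated in the card
    task_type="CAUSAL_LM",  # assumption; standard for Gemma-style models
)

# `base_model` would be the quantized google/gemma-2-9b model, loaded as in
# the usage example further below:
# model = get_peft_model(base_model, lora_config)
# model.print_trainable_parameters()  # ~4.47M trainable parameters (0.0484%)
```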
#### Training Hyperparameters
- **Training regime:** FP16 mixed precision
- **Optimizer:** AdamW (fused implementation)
#### Speeds, Sizes, Times
- **Duration:** Approximately 110 hours
- **Hardware:** A100 GPU (40GB)
- **Trainable Parameters:** 4,472,832 (0.0484% of total 9,246,178,816 parameters)
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
1,000 paragraphs from the Turkish Wikipedia dataset, reserved as a validation set.
#### Metrics
- **Validation Loss:** Measures the model's prediction error on the validation set.
- **Perplexity:** Indicates how well the model predicts the next token (lower is better).
### Results
| Model | Validation Loss | Perplexity |
|-------------------------------|-----------------|------------|
| Pre-trained Core (Gemma-2-9b) | 2.5168 | 12.39 |
| Continued Pre-trained (LoRA) | 2.1027 | 8.19 |
The LoRA-adapted model significantly outperforms the base model on Turkish text.
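As a sanity check, perplexity here is simply the exponential of the validation loss, and the table values can be reproduced directly:
```python
import math

# Perplexity = exp(validation loss)
print(round(math.exp(2.5168), 2))  # 12.39 — pre-trained Gemma-2-9b
print(round(math.exp(2.1027), 2))  # 8.19  — LoRA continued pre-training
```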
## Environmental Impact
Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute):
- **Hardware Type:** A100 GPU (40GB)
- **Hours Used:** 110 hours
- **Cloud Provider:** Google Colab
- **Compute Region:** Unknown (assumed us-central1 for estimation)
- **Carbon Emitted:** ~22 kg CO2eq (based on 44 kWh at 0.5 kg CO2/kWh)
Note: Exact emissions depend on the compute region's energy mix.
## How to Get Started with the Model
You can easily load and use this model using the `transformers` and `peft` libraries in Python:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
model_name = "google/gemma-2-9b"
peft_model_id = "emre/gemma-2-9b-Turkish-Lora-Continue-Pre-Trained"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load the base model with 4-bit quantization for efficiency
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.float16
)
base_model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map="auto",
torch_dtype=torch.float16,
attn_implementation="eager"
)
# Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, peft_model_id)
model.eval()
if torch.cuda.is_available():
model = model.to("cuda")
prompt = "Türkiye'nin başkenti neresidir?"
input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
outputs = model.generate(**input_ids, max_new_tokens=50, num_return_sequences=1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Citation
```bibtex
@misc{tasar2025gemma2turkish,
author = {Davut Emre Tasar},
title = {Gemma-2-9b Turkish LoRA Continue Pre-Trained Model},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/emre/gemma-2-9b-Turkish-Lora-Continue-Pre-Trained}}
}
```
| null |
Non_BioNLP
|
# emre/gemma-2-9b-Turkish-Lora-Continue-Pre-Trained
## Model Details
- **Commercially only available to BBVA Group, Free to use for academic and research purposes**
### Model Description
This model is a continued pre-trained version of the `google/gemma-2-9b` base model, trained on the Turkish Wikipedia dataset (`Alaeddin/wikipedia-turkish`). The fine-tuning was performed using Low-Rank Adaptation (LoRA) to efficiently adapt the model to the Turkish language. This model aims to improve the base model's understanding and generation capabilities for Turkish text.
- **Developed by:** Emre Tasar, PhDc University of Navarra / Data Scientist
- **Funded by:** Self-funded research project (820 Google Colab compute hours).
- **Shared by:** Emre Tasar ([https://huggingface.co/emre](https://huggingface.co/emre))
- **Model type:** Causal Language Model
- **Language(s) (NLP):** Turkish (tr)
- **License:** Gemma; commercial use limited to BBVA Group, free for academic and research use
- **Continue Pre Trained from model:** [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b)
### Model Sources
- **Repository:** [https://huggingface.co/emre/gemma-2-9b-Turkish-Lora-Continue-Pre-Trained](https://huggingface.co/emre/gemma-2-9b-Turkish-Lora-Continue-Pre-Trained)
## Uses
### Direct Use
This model can be used for generating Turkish text for various natural language processing tasks, such as:
- Text generation
- Language modeling
- Creative writing
- Answering questions based on Turkish text (with appropriate prompting)
It is intended for researchers, developers, and enthusiasts interested in exploring and utilizing large language models for the Turkish language.
### Downstream Use
This model can serve as a strong base for further fine-tuning on specific downstream tasks in Turkish, such as:
- Turkish text summarization
- Turkish question answering
- Turkish text classification
- Turkish dialogue generation
### Out-of-Scope Use
This model should not be used for generating harmful, unethical, or biased content. As a language model trained on a large corpus of text, it may inadvertently generate such content. Users should exercise caution and responsibility when deploying this model.
## Bias, Risks, and Limitations
The model was trained on the Turkish Wikipedia dataset, which may contain biases present in the original data. The model's performance may vary depending on the specific task and domain. Users should be aware of these limitations and conduct thorough evaluations for their specific use cases.
### Recommendations
Users should carefully evaluate the model's output and consider potential biases before deploying it in real-world applications. Further fine-tuning on task-specific and diverse Turkish datasets can help mitigate some of these limitations.
## Training Details
### Training Data
The model was trained on the [Alaeddin/wikipedia-turkish](https://huggingface.co/datasets/Alaeddin/wikipedia-turkish) dataset:
- **Training Split:** 1,620,000 paragraphs.
- **Validation Split:** 1,000 paragraphs (disjoint from the training set).
### Training Procedure
The model was trained using the Hugging Face `Trainer` API on a Google Colab Pro+ instance with an A100 GPU (40GB). Key settings include:
- **Quantization:** 4-bit with NF4 type and double quantization (`BitsAndBytesConfig`).
- **LoRA Configuration:**
- Rank (`r`): 8
- Alpha (`lora_alpha`): 32
- Target Modules: `q_proj`, `v_proj`
- Dropout: 0.1
- **Training Arguments:**
- Epochs: 1
- Effective Batch Size: 8 (`per_device_train_batch_size=2`, `gradient_accumulation_steps=4`)
- Learning Rate: 2e-5
- Scheduler: Linear with 500 warmup steps
- Mixed Precision: FP16
- Evaluation Frequency: Every 5,000 steps
- Total Steps: 202,500
#### Training Hyperparameters
- **Training regime:** FP16 mixed precision
- **Optimizer:** AdamW (fused implementation)
#### Speeds, Sizes, Times
- **Duration:** Approximately 110 hours
- **Hardware:** A100 GPU (40GB)
- **Trainable Parameters:** 4,472,832 (0.0484% of total 9,246,178,816 parameters)
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
1,000 paragraphs from the Turkish Wikipedia dataset, reserved as a validation set.
#### Metrics
- **Validation Loss:** Measures the model's prediction error on the validation set.
- **Perplexity:** Indicates how well the model predicts the next token (lower is better).
### Results
| Model | Validation Loss | Perplexity |
|-------------------------------|-----------------|------------|
| Pre-trained Core (Gemma-2-9b) | 2.5168 | 12.39 |
| Continued Pre-trained (LoRA) | 2.1027 | 8.19 |
The LoRA-adapted model significantly outperforms the base model on Turkish text.
## Environmental Impact
Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute):
- **Hardware Type:** A100 GPU (40GB)
- **Hours Used:** 110 hours
- **Cloud Provider:** Google Colab
- **Compute Region:** Unknown (assumed us-central1 for estimation)
- **Carbon Emitted:** ~22 kg CO2eq (based on 44 kWh at 0.5 kg CO2/kWh)
Note: Exact emissions depend on the compute region's energy mix.
## How to Get Started with the Model
You can easily load and use this model using the `transformers` and `peft` libraries in Python:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
model_name = "google/gemma-2-9b"
peft_model_id = "emre/gemma-2-9b-Turkish-Lora-Continue-Pre-Trained"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load the base model with 4-bit quantization for efficiency
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.float16
)
base_model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map="auto",
torch_dtype=torch.float16,
attn_implementation="eager"
)
# Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, peft_model_id)
model.eval()
if torch.cuda.is_available():
model = model.to("cuda")
prompt = "Türkiye'nin başkenti neresidir?"
input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
outputs = model.generate(**input_ids, max_new_tokens=50, num_return_sequences=1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Citation
```bibtex
@misc{tasar2025gemma2turkish,
author = {Davut Emre Tasar},
title = {Gemma-2-9b Turkish LoRA Continue Pre-Trained Model},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/emre/gemma-2-9b-Turkish-Lora-Continue-Pre-Trained}}
}
```
|
{"base_model": ["google/gemma-2-9b"], "datasets": ["Alaeddin/wikipedia-turkish"], "language": ["tr"], "library_name": "peft", "license": "gemma", "metrics": ["perplexity"], "pipeline_tag": "text-generation", "tags": ["continue_pre_train"]}
|
task
|
[
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 45,440 |
machinelearningzuu/lesson-summarization
|
machinelearningzuu
|
text2text-generation
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-05-23T14:22:26Z |
2023-08-16T02:32:09+00:00
| 20 | 0 |
---
base_model: t5-small
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: lesson-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lesson-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 2.9037 | 3.12 | 200 | 2.2456 |
| 2.5914 | 6.25 | 400 | 2.1498 |
| 2.393 | 9.38 | 600 | 2.1002 |
| 2.2409 | 12.5 | 800 | 2.0754 |
| 2.1515 | 15.62 | 1000 | 2.0683 |
| 2.0633 | 18.75 | 1200 | 2.0541 |
| 1.9418 | 21.88 | 1400 | 2.0603 |
| 1.837 | 25.0 | 1600 | 2.0788 |
| 1.7715 | 28.12 | 1800 | 2.0754 |
| 1.6957 | 31.25 | 2000 | 2.0815 |
| 1.6079 | 34.38 | 2200 | 2.0940 |
| 1.5947 | 37.5 | 2400 | 2.1094 |
| 1.4603 | 40.62 | 2600 | 2.1147 |
| 1.4621 | 43.75 | 2800 | 2.1354 |
| 1.4021 | 46.88 | 3000 | 2.1519 |
| 1.3394 | 50.0 | 3200 | 2.1670 |
| 1.2866 | 53.12 | 3400 | 2.1921 |
| 1.2681 | 56.25 | 3600 | 2.2045 |
| 1.1866 | 59.38 | 3800 | 2.2194 |
| 1.2098 | 62.5 | 4000 | 2.2302 |
| 1.1386 | 65.62 | 4200 | 2.2400 |
| 1.0853 | 68.75 | 4400 | 2.2634 |
| 1.0888 | 71.88 | 4600 | 2.2810 |
| 1.0408 | 75.0 | 4800 | 2.2909 |
| 1.0309 | 78.12 | 5000 | 2.3059 |
| 0.9523 | 81.25 | 5200 | 2.3249 |
| 0.9671 | 84.38 | 5400 | 2.3333 |
| 0.9413 | 87.5 | 5600 | 2.3543 |
| 0.9127 | 90.62 | 5800 | 2.3636 |
| 0.9095 | 93.75 | 6000 | 2.3676 |
| 0.8952 | 96.88 | 6200 | 2.3756 |
| 0.857 | 100.0 | 6400 | 2.3878 |
| 0.8474 | 103.12 | 6600 | 2.4148 |
| 0.8215 | 106.25 | 6800 | 2.4231 |
| 0.8172 | 109.38 | 7000 | 2.4243 |
| 0.7761 | 112.5 | 7200 | 2.4489 |
| 0.7737 | 115.62 | 7400 | 2.4718 |
| 0.7476 | 118.75 | 7600 | 2.4614 |
| 0.7345 | 121.88 | 7800 | 2.4705 |
| 0.7426 | 125.0 | 8000 | 2.4740 |
| 0.7151 | 128.12 | 8200 | 2.4833 |
| 0.7191 | 131.25 | 8400 | 2.4786 |
| 0.6818 | 134.38 | 8600 | 2.4882 |
| 0.6862 | 137.5 | 8800 | 2.4938 |
| 0.6929 | 140.62 | 9000 | 2.4977 |
| 0.6494 | 143.75 | 9200 | 2.5195 |
| 0.6689 | 146.88 | 9400 | 2.5185 |
| 0.6492 | 150.0 | 9600 | 2.5259 |
| 0.6384 | 153.12 | 9800 | 2.5259 |
| 0.6435 | 156.25 | 10000 | 2.5287 |
| 0.6251 | 159.38 | 10200 | 2.5284 |
| 0.6295 | 162.5 | 10400 | 2.5398 |
| 0.6324 | 165.62 | 10600 | 2.5442 |
| 0.6252 | 168.75 | 10800 | 2.5481 |
| 0.6108 | 171.88 | 11000 | 2.5455 |
| 0.6034 | 175.0 | 11200 | 2.5502 |
| 0.5969 | 178.12 | 11400 | 2.5601 |
| 0.5949 | 181.25 | 11600 | 2.5617 |
| 0.6183 | 184.38 | 11800 | 2.5679 |
| 0.5805 | 187.5 | 12000 | 2.5687 |
| 0.6032 | 190.62 | 12200 | 2.5708 |
| 0.5955 | 193.75 | 12400 | 2.5709 |
| 0.5961 | 196.88 | 12600 | 2.5713 |
| 0.5914 | 200.0 | 12800 | 2.5713 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
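The card does not include usage code; a minimal inference sketch using the summarization pipeline follows. The sample text is a placeholder, and the generation settings are arbitrary; for T5-based checkpoints the pipeline typically handles the "summarize: " prefix via the model config, which is assumed to hold here.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="machinelearningzuu/lesson-summarization")

lesson_text = (
    "Gradient descent is an iterative optimization algorithm. At each step it "
    "moves the parameters in the direction of the negative gradient of the "
    "loss, scaled by a learning rate, until the loss stops improving."
)  # placeholder lesson text

print(summarizer(lesson_text, max_length=60, min_length=10)[0]["summary_text"])
```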
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lesson-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 2.9037 | 3.12 | 200 | 2.2456 |
| 2.5914 | 6.25 | 400 | 2.1498 |
| 2.393 | 9.38 | 600 | 2.1002 |
| 2.2409 | 12.5 | 800 | 2.0754 |
| 2.1515 | 15.62 | 1000 | 2.0683 |
| 2.0633 | 18.75 | 1200 | 2.0541 |
| 1.9418 | 21.88 | 1400 | 2.0603 |
| 1.837 | 25.0 | 1600 | 2.0788 |
| 1.7715 | 28.12 | 1800 | 2.0754 |
| 1.6957 | 31.25 | 2000 | 2.0815 |
| 1.6079 | 34.38 | 2200 | 2.0940 |
| 1.5947 | 37.5 | 2400 | 2.1094 |
| 1.4603 | 40.62 | 2600 | 2.1147 |
| 1.4621 | 43.75 | 2800 | 2.1354 |
| 1.4021 | 46.88 | 3000 | 2.1519 |
| 1.3394 | 50.0 | 3200 | 2.1670 |
| 1.2866 | 53.12 | 3400 | 2.1921 |
| 1.2681 | 56.25 | 3600 | 2.2045 |
| 1.1866 | 59.38 | 3800 | 2.2194 |
| 1.2098 | 62.5 | 4000 | 2.2302 |
| 1.1386 | 65.62 | 4200 | 2.2400 |
| 1.0853 | 68.75 | 4400 | 2.2634 |
| 1.0888 | 71.88 | 4600 | 2.2810 |
| 1.0408 | 75.0 | 4800 | 2.2909 |
| 1.0309 | 78.12 | 5000 | 2.3059 |
| 0.9523 | 81.25 | 5200 | 2.3249 |
| 0.9671 | 84.38 | 5400 | 2.3333 |
| 0.9413 | 87.5 | 5600 | 2.3543 |
| 0.9127 | 90.62 | 5800 | 2.3636 |
| 0.9095 | 93.75 | 6000 | 2.3676 |
| 0.8952 | 96.88 | 6200 | 2.3756 |
| 0.857 | 100.0 | 6400 | 2.3878 |
| 0.8474 | 103.12 | 6600 | 2.4148 |
| 0.8215 | 106.25 | 6800 | 2.4231 |
| 0.8172 | 109.38 | 7000 | 2.4243 |
| 0.7761 | 112.5 | 7200 | 2.4489 |
| 0.7737 | 115.62 | 7400 | 2.4718 |
| 0.7476 | 118.75 | 7600 | 2.4614 |
| 0.7345 | 121.88 | 7800 | 2.4705 |
| 0.7426 | 125.0 | 8000 | 2.4740 |
| 0.7151 | 128.12 | 8200 | 2.4833 |
| 0.7191 | 131.25 | 8400 | 2.4786 |
| 0.6818 | 134.38 | 8600 | 2.4882 |
| 0.6862 | 137.5 | 8800 | 2.4938 |
| 0.6929 | 140.62 | 9000 | 2.4977 |
| 0.6494 | 143.75 | 9200 | 2.5195 |
| 0.6689 | 146.88 | 9400 | 2.5185 |
| 0.6492 | 150.0 | 9600 | 2.5259 |
| 0.6384 | 153.12 | 9800 | 2.5259 |
| 0.6435 | 156.25 | 10000 | 2.5287 |
| 0.6251 | 159.38 | 10200 | 2.5284 |
| 0.6295 | 162.5 | 10400 | 2.5398 |
| 0.6324 | 165.62 | 10600 | 2.5442 |
| 0.6252 | 168.75 | 10800 | 2.5481 |
| 0.6108 | 171.88 | 11000 | 2.5455 |
| 0.6034 | 175.0 | 11200 | 2.5502 |
| 0.5969 | 178.12 | 11400 | 2.5601 |
| 0.5949 | 181.25 | 11600 | 2.5617 |
| 0.6183 | 184.38 | 11800 | 2.5679 |
| 0.5805 | 187.5 | 12000 | 2.5687 |
| 0.6032 | 190.62 | 12200 | 2.5708 |
| 0.5955 | 193.75 | 12400 | 2.5709 |
| 0.5961 | 196.88 | 12600 | 2.5713 |
| 0.5914 | 200.0 | 12800 | 2.5713 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"base_model": "t5-small", "license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "lesson-summarization", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,441 |
Cognitive-Lab/Ambari-7B-Instruct-v0.2
|
Cognitive-Lab
| null |
[
"adapter-transformers",
"pytorch",
"llama",
"kannada",
"bilingual",
"en",
"kn",
"dataset:Cognitive-Lab/Kannada-Instruct-dataset",
"license:llama2",
"region:us"
] | 2024-03-31T14:55:15Z |
2024-03-31T15:00:15+00:00
| 0 | 0 |
---
datasets:
- Cognitive-Lab/Kannada-Instruct-dataset
language:
- en
- kn
library_name: adapter-transformers
license: llama2
tags:
- kannada
- bilingual
---
# Ambari-7B-Instruct-v0.2
## Overview
Ambari-7B-Instruct-v0.2 is an extension of the Ambari series, a family of bilingual English/Kannada models developed and released by [Cognitivelab.in](https://www.cognitivelab.in/). This model is specialized for natural language understanding tasks, particularly in the context of instructional pairs. It is built upon the Ambari-7B-Base-v0.1 model, using a fine-tuning process with a curated dataset of translated instructional pairs.
## Difference between v0.1 and v0.2
The v0.2 version was fine-tuned on the same dataset with all the same parameters, but we didn't perform vocabulary expansion; it uses the default tokenizer and was trained in order to evaluate both models side by side.
## Usage
To use the Ambari-7B-Instruct-v0.2 model, you can follow the example code below:
```python
# Usage
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model = LlamaForCausalLM.from_pretrained('Cognitive-Lab/Ambari-7B-Instruct-v0.2')
tokenizer = LlamaTokenizer.from_pretrained('Cognitive-Lab/Ambari-7B-Instruct-v0.2')
prompt = "Give me 10 Study tips in Kannada."
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=1000)
decoded_output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(decoded_output)
```
## Learn More
Read more about the Ambari models and their applications in natural language understanding tasks on the [Cognitivelab.in blog](https://www.cognitivelab.in/blog/introducing-ambari).
## Dataset Information
The model is fine-tuned using the Kannada Instruct Dataset, a collection of translated instructional pairs. The dataset includes English instruction and output pairs, as well as their corresponding translations in Kannada. The intentional diversification of the dataset, encompassing various language combinations, enhances the model's proficiency in cross-lingual tasks.
## Bilingual Instruct Fine-tuning
The model underwent a pivotal stage of supervised fine-tuning with low-rank adaptation, focusing on bilingual instruct fine-tuning. This approach involved training the model to respond adeptly in either English or Kannada based on the language specified in the user prompt or instruction.
## References
- [Ambari-7B-Instruct Model](https://huggingface.co/Cognitive-Lab/Ambari-7B-Instruct-v0.1)
- [Ambari-7B-Base Model](https://huggingface.co/Cognitive-Lab/Ambari-7B-base-v0.1)
- [Kannada-Instruct-Dataset](https://huggingface.co/datasets/Cognitive-Lab/Kannada-Instruct-dataset)
| null |
Non_BioNLP
|
# Ambari-7B-Instruct-v0.2
## Overview
Ambari-7B-Instruct-v0.2 is an extension of the Ambari series, a family of bilingual English/Kannada models developed and released by [Cognitivelab.in](https://www.cognitivelab.in/). This model is specialized for natural language understanding tasks, particularly in the context of instructional pairs. It is built upon the Ambari-7B-Base-v0.1 model, using a fine-tuning process with a curated dataset of translated instructional pairs.
## Difference between v0.1 and v0.2
The v0.2 version was fine-tuned on the same dataset with all the same parameters, but we didn't perform vocabulary expansion; it uses the default tokenizer and was trained in order to evaluate both models side by side.
## Usage
To use the Ambari-7B-Instruct-v0.2 model, you can follow the example code below:
```python
# Usage
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model = LlamaForCausalLM.from_pretrained('Cognitive-Lab/Ambari-7B-Instruct-v0.2')
tokenizer = LlamaTokenizer.from_pretrained('Cognitive-Lab/Ambari-7B-Instruct-v0.2')
prompt = "Give me 10 Study tips in Kannada."
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=1000)
decoded_output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(decoded_output)
```
## Learn More
Read more about the Ambari models and their applications in natural language understanding tasks on the [Cognitivelab.in blog](https://www.cognitivelab.in/blog/introducing-ambari).
## Dataset Information
The model is fine-tuned using the Kannada Instruct Dataset, a collection of translated instructional pairs. The dataset includes English instruction and output pairs, as well as their corresponding translations in Kannada. The intentional diversification of the dataset, encompassing various language combinations, enhances the model's proficiency in cross-lingual tasks.
## Bilingual Instruct Fine-tuning
The model underwent a pivotal stage of supervised fine-tuning with low-rank adaptation, focusing on bilingual instruct fine-tuning. This approach involved training the model to respond adeptly in either English or Kannada based on the language specified in the user prompt or instruction.
## References
- [Ambari-7B-Instruct Model](https://huggingface.co/Cognitive-Lab/Ambari-7B-Instruct-v0.1)
- [Ambari-7B-Base Model](https://huggingface.co/Cognitive-Lab/Ambari-7B-base-v0.1)
- [Kannada-Instruct-Dataset](https://huggingface.co/datasets/Cognitive-Lab/Kannada-Instruct-dataset)
|
{"datasets": ["Cognitive-Lab/Kannada-Instruct-dataset"], "language": ["en", "kn"], "library_name": "adapter-transformers", "license": "llama2", "tags": ["kannada", "bilingual"]}
|
task
|
[
"TRANSLATION"
] | 45,442 |
Helsinki-NLP/opus-mt-tc-bible-big-inc-en
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc-bible",
"anp",
"as",
"awa",
"bho",
"bn",
"bpy",
"dv",
"en",
"gbm",
"gu",
"hi",
"hif",
"hne",
"hns",
"kok",
"ks",
"lah",
"mag",
"mai",
"mr",
"ne",
"or",
"pa",
"pi",
"rhg",
"rmy",
"rom",
"sa",
"sd",
"si",
"skr",
"syl",
"ur",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-08T08:46:26Z |
2024-10-08T08:46:37+00:00
| 72 | 0 |
---
language:
- anp
- as
- awa
- bho
- bn
- bpy
- dv
- en
- gbm
- gu
- hi
- hif
- hne
- hns
- kok
- ks
- lah
- mag
- mai
- mr
- ne
- or
- pa
- pi
- rhg
- rmy
- rom
- sa
- sd
- si
- skr
- syl
- ur
library_name: transformers
license: apache-2.0
tags:
- translation
- opus-mt-tc-bible
model-index:
- name: opus-mt-tc-bible-big-inc-en
results:
- task:
type: translation
name: Translation multi-eng
dataset:
name: tatoeba-test-v2020-07-28-v2023-09-26
type: tatoeba_mt
args: multi-eng
metrics:
- type: bleu
value: 44.3
name: BLEU
- type: chrf
value: 0.61252
name: chr-F
---
# opus-mt-tc-bible-big-inc-en
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Indic languages (inc) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-08-17
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): anp asm awa ben bho bpy div dty gbm guj hif hin hne hns kas kok lah mag mai mar nep npi ori pan pli rhg rmy rom san sin skr snd syl urd
- Target Language(s): eng
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/inc-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"हमें केवल सम्पूर्ण वाक्य चाहिएं।",
"टॉम ने मुझे बताया नहीं।"
]
model_name = "pytorch-models/opus-mt-tc-bible-big-inc-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# We just need the whole sentence.
# Tom didn't tell me.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-inc-en")
print(pipe("हमें केवल सम्पूर्ण वाक्य चाहिएं।"))
# expected output: We just need the whole sentence.
```
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/inc-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| multi-eng | tatoeba-test-v2020-07-28-v2023-09-26 | 0.61252 | 44.3 | 10000 | 64786 |
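As a hedged illustration, scores in the same units could be computed for your own outputs with [sacrebleu](https://github.com/mjpost/sacrebleu); the hypothesis and reference sentences below are placeholders, and this is not the exact official evaluation configuration:
```python
import sacrebleu

# Placeholder system outputs and references, aligned one-to-one
hyps = ["We just need the whole sentence.", "Tom didn't tell me."]
refs = [["We only need complete sentences.", "Tom didn't tell me."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hyps, refs)
chrf = sacrebleu.corpus_chrf(hyps, refs)
print(f"BLEU = {bleu.score:.1f}")
print(f"chr-F = {chrf.score / 100:.5f}")  # the table above reports chr-F on a 0-1 scale
```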
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite these if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 11:46:05 EEST 2024
* port machine: LM0-400-22516.local
| null |
Non_BioNLP
|
# opus-mt-tc-bible-big-inc-en
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Indic languages (inc) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-08-17
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): anp asm awa ben bho bpy div dty gbm guj hif hin hne hns kas kok lah mag mai mar nep npi ori pan pli rhg rmy rom san sin skr snd syl urd
- Target Language(s): eng
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/inc-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"हमें केवल सम्पूर्ण वाक्य चाहिएं।",
"टॉम ने मुझे बताया नहीं।"
]
model_name = "Helsinki-NLP/opus-mt-tc-bible-big-inc-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# We just need the whole sentence.
# Tom didn't tell me.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-inc-en")
print(pipe("हमें केवल सम्पूर्ण वाक्य चाहिएं।"))
# expected output: We just need the whole sentence.
```
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/inc-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| multi-eng | tatoeba-test-v2020-07-28-v2023-09-26 | 0.61252 | 44.3 | 10000 | 64786 |
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite these if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 11:46:05 EEST 2024
* port machine: LM0-400-22516.local
|
{"language": ["anp", "as", "awa", "bho", "bn", "bpy", "dv", "en", "gbm", "gu", "hi", "hif", "hne", "hns", "kok", "ks", "lah", "mag", "mai", "mr", "ne", "or", "pa", "pi", "rhg", "rmy", "rom", "sa", "sd", "si", "skr", "syl", "ur"], "library_name": "transformers", "license": "apache-2.0", "tags": ["translation", "opus-mt-tc-bible"], "model-index": [{"name": "opus-mt-tc-bible-big-inc-en", "results": [{"task": {"type": "translation", "name": "Translation multi-eng"}, "dataset": {"name": "tatoeba-test-v2020-07-28-v2023-09-26", "type": "tatoeba_mt", "args": "multi-eng"}, "metrics": [{"type": "bleu", "value": 44.3, "name": "BLEU"}, {"type": "chrf", "value": 0.61252, "name": "chr-F"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 45,443 |
Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-2_2bpw_exl2
|
Zoyd
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"merges",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:quantized:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | 2024-05-26T08:59:09Z |
2024-05-26T10:10:53+00:00
| 5 | 0 |
---
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
datasets:
- teknium/OpenHermes-2.5
language:
- en
license: apache-2.0
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- merges
widget:
- example_title: Hermes 2 Pro Llama-3 Instruct Merge
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
results: []
---
**Exllamav2** quant (**exl2** / **2.2 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-4_0bpw_exl2)**</center> | <center>4726 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-4_25bpw_exl2)**</center> | <center>4933 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-5_0bpw_exl2)**</center> | <center>5558 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-6_0bpw_exl2)**</center> | <center>6497 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-6_5bpw_exl2)**</center> | <center>6910 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-8_0bpw_exl2)**</center> | <center>8049 MB</center> | <center>8</center> |
# Hermes-2 Θ Llama-3 8B

## Model Description
Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, forming a new model, Hermes-2 Θ, that combines the best of both worlds.
## Example Outputs
### Create New Mythos:

### Chat with a Meta-Cognitive Entity

### Ask for a structured JSON output:

# Prompt Format
Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This format is more complex than alpaca or sharegpt: special tokens denote the beginning and end of each turn, along with the role of each turn.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
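For example (a minimal sketch reusing the `messages`, `tokenizer`, and `model` objects from above):
```python
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends "<|im_start|>assistant\n"
    return_tensors="pt",
)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```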
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function-signature JSON, as this example shows:
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
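A hedged sketch of that parsing step (the official helpers live in the repo linked above; the regex, the `tools` registry, and `response_text` are assumptions for illustration):
```python
import json
import re

# Registry mapping tool names to your real python functions (assumed defined)
tools = {"get_stock_fundamentals": get_stock_fundamentals}

match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", response_text, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    result = tools[call["name"]](**call["arguments"])  # dispatch to the function
    tool_response = json.dumps({"name": call["name"], "content": result})
```
The resulting `tool_response` payload is what goes inside the `<tool_response>` tags shown next.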
Once you parse the tool call, call the API, get the returned values, and pass them back in as a new role, `tool`, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which instructs the model to respond with **only** a JSON object, following a specific JSON schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow the format of that JSON schema to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
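A hedged sketch of building that `{schema}` string from a pydantic (v2) model — the `Stock` model here is purely illustrative:
```python
import json
from pydantic import BaseModel

class Stock(BaseModel):
    symbol: str
    company_name: str
    pe_ratio: float

schema = json.dumps(Stock.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
```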
# Benchmarks

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5529|± |0.0145|
| | |acc_norm|0.5870|± |0.0144|
|arc_easy | 0|acc |0.8371|± |0.0076|
| | |acc_norm|0.8144|± |0.0080|
|boolq | 1|acc |0.8599|± |0.0061|
|hellaswag | 0|acc |0.6133|± |0.0049|
| | |acc_norm|0.7989|± |0.0040|
|openbookqa | 0|acc |0.3940|± |0.0219|
| | |acc_norm|0.4680|± |0.0223|
|piqa | 0|acc |0.8063|± |0.0092|
| | |acc_norm|0.8156|± |0.0090|
|winogrande | 0|acc |0.7372|± |0.0124|
```
Average: 72.59
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2441|± |0.0270|
| | |acc_norm|0.2441|± |0.0270|
|agieval_logiqa_en | 0|acc |0.3687|± |0.0189|
| | |acc_norm|0.3840|± |0.0191|
|agieval_lsat_ar | 0|acc |0.2304|± |0.0278|
| | |acc_norm|0.2174|± |0.0273|
|agieval_lsat_lr | 0|acc |0.5471|± |0.0221|
| | |acc_norm|0.5373|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.7670|± |0.0295|
| | |acc_norm|0.7379|± |0.0307|
|agieval_sat_en_without_passage| 0|acc |0.4417|± |0.0347|
| | |acc_norm|0.4223|± |0.0345|
|agieval_sat_math | 0|acc |0.4000|± |0.0331|
| | |acc_norm|0.3455|± |0.0321|
```
Average: 44.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6000|± |0.0356|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6585|± |0.0247|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2340|± |0.0224|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2980|± |0.0205|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2057|± |0.0153|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5367|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4040|± |0.0220|
|bigbench_navigate | 0|multiple_choice_grade|0.4970|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7075|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4821|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2295|± |0.0133|
|bigbench_snarks | 0|multiple_choice_grade|0.6906|± |0.0345|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5375|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.6270|± |0.0153|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2216|± |0.0118|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1594|± |0.0088|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5367|± |0.0288|
```
Average: 44.13
**IFEval**: 72.64
**MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875
# Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: in 4-bit, it will require around 5GB of VRAM).
Note: to use function calling, see the GitHub repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Θ. It is a GUI application that utilizes GGUF models with a llama.cpp backend, provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. Note that LM Studio does not support function calling; for that, use our GitHub repo.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Theta-Llama-3-8B,
  url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B},
  title={Hermes-2-Theta-Llama-3-8B},
  author={Teknium and Charles Goddard and interstellarninja and theemozilla and karan4d and huemin\_art}
}
```
| null |
Non_BioNLP
|
**Exllamav2** quant (**exl2** / **2.2 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-4_0bpw_exl2)**</center> | <center>4726 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-4_25bpw_exl2)**</center> | <center>4933 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-5_0bpw_exl2)**</center> | <center>5558 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-6_0bpw_exl2)**</center> | <center>6497 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-6_5bpw_exl2)**</center> | <center>6910 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-8_0bpw_exl2)**</center> | <center>8049 MB</center> | <center>8</center> |
# Hermes-2 Θ Llama-3 8B

## Model Description
Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, forming a new model, Hermes-2 Θ, that combines the best of both worlds.
## Example Outputs
### Create New Mythos:

### Chat with a Meta-Cognitive Entity

### Ask for a structured JSON output:

# Prompt Format
Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This format is more complex than alpaca or sharegpt: special tokens denote the beginning and end of each turn, along with the role of each turn.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function-signature JSON, as this example shows:
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
Once you parse the tool call, call the API, get the returned values, and pass them back in as a new role, `tool`, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which instructs the model to respond with **only** a JSON object, following a specific JSON schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow the format of that JSON schema to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
# Benchmarks

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5529|± |0.0145|
| | |acc_norm|0.5870|± |0.0144|
|arc_easy | 0|acc |0.8371|± |0.0076|
| | |acc_norm|0.8144|± |0.0080|
|boolq | 1|acc |0.8599|± |0.0061|
|hellaswag | 0|acc |0.6133|± |0.0049|
| | |acc_norm|0.7989|± |0.0040|
|openbookqa | 0|acc |0.3940|± |0.0219|
| | |acc_norm|0.4680|± |0.0223|
|piqa | 0|acc |0.8063|± |0.0092|
| | |acc_norm|0.8156|± |0.0090|
|winogrande | 0|acc |0.7372|± |0.0124|
```
Average: 72.59
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2441|± |0.0270|
| | |acc_norm|0.2441|± |0.0270|
|agieval_logiqa_en | 0|acc |0.3687|± |0.0189|
| | |acc_norm|0.3840|± |0.0191|
|agieval_lsat_ar | 0|acc |0.2304|± |0.0278|
| | |acc_norm|0.2174|± |0.0273|
|agieval_lsat_lr | 0|acc |0.5471|± |0.0221|
| | |acc_norm|0.5373|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.7670|± |0.0295|
| | |acc_norm|0.7379|± |0.0307|
|agieval_sat_en_without_passage| 0|acc |0.4417|± |0.0347|
| | |acc_norm|0.4223|± |0.0345|
|agieval_sat_math | 0|acc |0.4000|± |0.0331|
| | |acc_norm|0.3455|± |0.0321|
```
Average: 44.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6000|± |0.0356|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6585|± |0.0247|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2340|± |0.0224|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2980|± |0.0205|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2057|± |0.0153|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5367|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4040|± |0.0220|
|bigbench_navigate | 0|multiple_choice_grade|0.4970|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7075|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4821|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2295|± |0.0133|
|bigbench_snarks | 0|multiple_choice_grade|0.6906|± |0.0345|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5375|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.6270|± |0.0153|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2216|± |0.0118|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1594|± |0.0088|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5367|± |0.0288|
```
Average: 44.13
**IFEval**: 72.64
**MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875
# Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: in 4-bit, it will require around 5GB of VRAM).
Note: to use function calling, see the GitHub repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Θ. It is a GUI application that utilizes GGUF models with a llama.cpp backend, provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. Note that LM Studio does not support function calling; for that, use our GitHub repo.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Theta-Llama-3-8B,
  url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B},
  title={Hermes-2-Theta-Llama-3-8B},
  author={Teknium and Charles Goddard and interstellarninja and theemozilla and karan4d and huemin\_art}
}
```
|
{"base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B", "datasets": ["teknium/OpenHermes-2.5"], "language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges"], "widget": [{"example_title": "Hermes 2 Pro Llama-3 Instruct Merge", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "model-index": [{"name": "Hermes-2-Pro-Llama-3-Instruct-8B-Merge", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,444 |
michaelfeil/ct2fast-gpt-j-6b
|
michaelfeil
| null |
[
"transformers",
"ctranslate2",
"int8",
"float16",
"pytorch",
"causal-lm",
"en",
"dataset:the_pile",
"arxiv:2104.09864",
"arxiv:2101.00027",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-05-01T15:17:42Z |
2023-05-19T18:31:51+00:00
| 12 | 1 |
---
datasets:
- the_pile
language:
- en
license: apache-2.0
tags:
- ctranslate2
- int8
- float16
- pytorch
- causal-lm
---
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b)
```bash
pip install hf-hub-ctranslate2>=2.0.6
```
Converted on 2023-05-19 using
```
ct2-transformers-converter --model EleutherAI/gpt-j-6b --output_dir /home/feil_m/tmp-ct2fast-gpt-j-6b --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json .gitattributes --quantization float16
```
Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-gpt-j-6b"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
)
outputs = model.generate(
text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original huggingface repo.
# Original description
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
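As a rough illustration of the rotary embedding step, here is a simplified half-split variant (an assumption for clarity — the actual mesh-transformer-jax implementation interleaves even/odd dimensions):
```python
import torch

def apply_rope(x: torch.Tensor, rotary_dims: int = 64) -> torch.Tensor:
    """x: (seq_len, n_heads, d_head). Rotates the first `rotary_dims` dims by position."""
    seq_len, _, d_head = x.shape
    half = rotary_dims // 2
    # Frequencies as in Su et al. (2021): theta_i = 10000^(-2i / rotary_dims)
    inv_freq = 1.0 / (10000 ** (torch.arange(0, half).float() * 2 / rotary_dims))
    angles = torch.einsum("s,f->sf", torch.arange(seq_len).float(), inv_freq)
    cos, sin = angles.cos()[:, None, :], angles.sin()[:, None, :]
    x_rot, x_pass = x[..., :rotary_dims], x[..., rotary_dims:]
    x1, x2 = x_rot[..., :half], x_rot[..., half:]
    # 2-D rotation of each (x1, x2) pair by its position-dependent angle
    rotated = torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
    return torch.cat([rotated, x_pass], dim=-1)
```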
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to
extract features useful for downstream tasks. The model is best at what it was
pretrained for however, which is generating text from a prompt.
### Out-of-scope use
GPT-J-6B is **not** intended for deployment without fine-tuning, supervision,
and/or moderation. It is not in itself a product and cannot be used for
human-facing interactions. For example, the model may generate harmful or
offensive text. Please evaluate the risks associated with your particular use case.
GPT-J-6B was trained on an English-language only dataset, and is thus **not**
suitable for translation or generating text in other languages.
GPT-J-6B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means GPT-J-6B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
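A minimal generation sketch building on the snippet above (the dtype, device, and sampling settings are illustrative assumptions, not requirements of the model):
```python
import torch

# Assumes a CUDA GPU with enough free memory for the fp16 weights (~13GB)
model = model.to(torch.float16).to("cuda")
inputs = tokenizer("The Eiffel Tower is located in", return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, do_sample=True, temperature=0.8, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```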
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
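Schematically, that objective looks like the following (an illustrative PyTorch sketch; the actual training used Mesh Transformer JAX on TPUs, and the tensors here are random stand-ins):
```python
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 4, 128, 50400
logits = torch.randn(batch, seq_len, vocab)          # stand-in for model outputs
tokens = torch.randint(0, 50257, (batch, seq_len))   # stand-in for input token ids

# Position t predicts token t+1, so shift logits and targets by one
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab),
    tokens[:, 1:].reshape(-1),
)
```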
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one are
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who has helped out in one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
| null |
Non_BioNLP
|
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b)
```bash
pip install hf-hub-ctranslate2>=2.0.6
```
Converted on 2023-05-19 using
```bash
ct2-transformers-converter --model EleutherAI/gpt-j-6b --output_dir /home/feil_m/tmp-ct2fast-gpt-j-6b --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json .gitattributes --quantization float16
```
Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-gpt-j-6b"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
)
outputs = model.generate(
text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
)
print(outputs)
```
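For CPU-only machines, the same loader is used with `device="cpu"` and the `int8` compute type noted above. A minimal sketch, assuming the identical `GeneratorCT2fromHfHub` API shown in the example:
```python
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
from transformers import AutoTokenizer

# Load the checkpoint on CPU with int8 compute (see the compute_type notes above).
model_cpu = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-gpt-j-6b",
    device="cpu",
    compute_type="int8",
    tokenizer=AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
)
outputs = model_cpu.generate(text=["User: How are you doing? Bot:"])
print(outputs)
```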
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
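As an illustration of how the rotary embedding works, here is a minimal PyTorch sketch that rotates the first 64 dimensions of each query/key head. This is a simplified re-implementation for exposition only; the exact dimension pairing in the released mesh-transformer-jax weights may differ.
```python
import torch

def apply_rope(x: torch.Tensor, rotary_dim: int = 64, base: float = 10000.0) -> torch.Tensor:
    """Rotate the first `rotary_dim` dims of each head by position-dependent angles.

    x: query or key tensor of shape (seq_len, n_heads, head_dim).
    """
    seq_len = x.shape[0]
    x_rot, x_pass = x[..., :rotary_dim], x[..., rotary_dim:]
    # One frequency per pair of dimensions, as in the RoPE paper.
    inv_freq = 1.0 / (base ** (torch.arange(0, rotary_dim, 2, dtype=torch.float32) / rotary_dim))
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]
    sin, cos = angles.sin()[:, None, :], angles.cos()[:, None, :]  # broadcast over heads
    x1, x2 = x_rot[..., 0::2], x_rot[..., 1::2]
    rotated = torch.stack([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1).flatten(-2)
    return torch.cat([rotated, x_pass], dim=-1)

q = torch.randn(2048, 16, 256)  # n_ctx=2048, n_heads=16, d_head=256 from the table above
print(apply_rope(q).shape)      # torch.Size([2048, 16, 256])
```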
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to
extract features useful for downstream tasks. The model is, however, best at what it was
pretrained for: generating text from a prompt.
### Out-of-scope use
GPT-J-6B is **not** intended for deployment without fine-tuning, supervision,
and/or moderation. It is not in itself a product and cannot be used for
human-facing interactions. For example, the model may generate harmful or
offensive text. Please evaluate the risks associated with your particular use case.
GPT-J-6B was trained on an English-language only dataset, and is thus **not**
suitable for translation or generating text in other languages.
GPT-J-6B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means GPT-J-6B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
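From there, text generation follows the standard `transformers` API; a short illustrative sketch (the prompt and sampling settings are arbitrary):
```python
inputs = tokenizer("The meaning of life is", return_tensors="pt")
# Sample a 50-token continuation; adjust the settings to taste.
output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```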
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as this one, are
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who has helped out in one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
|
{"datasets": ["the_pile"], "language": ["en"], "license": "apache-2.0", "tags": ["ctranslate2", "int8", "float16", "pytorch", "causal-lm"]}
|
task
|
[
"TRANSLATION"
] | 45,445 |
aehrm/redewiedergabe-reported
|
aehrm
|
token-classification
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"region:us"
] | 2023-05-16T21:57:22Z |
2023-08-23T14:12:28+00:00
| 11 | 0 |
---
language: de
tags:
- flair
- token-classification
- sequence-tagger-model
---
# REDEWIEDERGABE Tagger: reported STWR
This model is part of an ensemble of binary taggers that recognize German speech, thought and writing representation (STWR), that is being used in [LLpro](https://github.com/cophi-wue/LLpro). They can be used to automatically detect and annotate the following 4 types of speech, thought and writing representation in German texts:
| STWR type | Example | Translation |
|--------------------------------|-------------------------------------------------------------------------|----------------------------------------------------------|
| direct | Dann sagte er: **"Ich habe Hunger."** | Then he said: **"I'm hungry."** |
| free indirect ('erlebte Rede') | Er war ratlos. **Woher sollte er denn hier bloß ein Mittagessen bekommen?** | He was at a loss. **Where should he ever find lunch here?** |
| indirect | Sie fragte, **wo das Essen sei.** | She asked **where the food was.** |
| reported (**this tagger**) | **Sie sprachen über das Mittagessen.** | **They talked about lunch.** |
The ensemble is trained on the [REDEWIEDERGABE corpus](https://github.com/redewiedergabe/corpus) ([Annotation guidelines](http://redewiedergabe.de/richtlinien/richtlinien.html)), fine-tuning each tagger on the domain-adapted [lkonle/fiction-gbert-large](https://huggingface.co/lkonle/fiction-gbert-large). ([Training Code](https://github.com/cophi-wue/LLpro/blob/main/contrib/train_redewiedergabe.py))
**F1-Scores:**
| STWR type | F1-Score |
|-----------|-----------|
| direct | 90.76 |
| indirect | 79.16 |
| free indirect | 58.00 |
| **reported (this tagger)** | **70.47** |
----
**Demo Usage:**
```python
from flair.data import Sentence
from flair.models import SequenceTagger
sentence = Sentence('Sie sprachen über das Mittagessen. Sie fragte, wo das Essen sei. Woher sollte er das wissen? Dann sagte er: "Ich habe Hunger."')
rwtypes = ['direct', 'indirect', 'freeindirect', 'reported']
for rwtype in rwtypes:
model = SequenceTagger.load(f'aehrm/redewiedergabe-{rwtype}')
model.predict(sentence)
print(rwtype, [ x.data_point.text for x in sentence.get_labels() ])
# >>> direct ['"', 'Ich', 'habe', 'Hunger', '.', '"']
# >>> indirect ['wo', 'das', 'Essen', 'sei', '.']
# >>> freeindirect ['Woher', 'sollte', 'er', 'das', 'wissen', '?']
# >>> reported ['Sie', 'sprachen', 'über', 'das', 'Mittagessen', '.', 'Woher', 'sollte', 'er', 'das', 'wissen', '?']
```
**Cite**:
Please cite the following paper when using this model.
```
@inproceedings{ehrmanntraut-et-al-llpro-2023,
address = {Ingolstadt, Germany},
title = {{LLpro}: A Literary Language Processing Pipeline for {German} Narrative Text},
booktitle = {Proceedings of the 10th Conference on Natural Language Processing ({KONVENS} 2022)},
publisher = {{KONVENS} 2023 Organizers},
author = {Ehrmanntraut, Anton and Konle, Leonard and Jannidis, Fotis},
year = {2023},
}
```
| null |
Non_BioNLP
|
# REDEWIEDERGABE Tagger: reported STWR
This model is part of an ensemble of binary taggers that recognize German speech, thought and writing representation (STWR), that is being used in [LLpro](https://github.com/cophi-wue/LLpro). They can be used to automatically detect and annotate the following 4 types of speech, thought and writing representation in German texts:
| STWR type | Example | Translation |
|--------------------------------|-------------------------------------------------------------------------|----------------------------------------------------------|
| direct | Dann sagte er: **"Ich habe Hunger."** | Then he said: **"I'm hungry."** |
| free indirect ('erlebte Rede') | Er war ratlos. **Woher sollte er denn hier bloß ein Mittagessen bekommen?** | He was at a loss. **Where should he ever find lunch here?** |
| indirect | Sie fragte, **wo das Essen sei.** | She asked **where the food was.** |
| reported (**this tagger**) | **Sie sprachen über das Mittagessen.** | **They talked about lunch.** |
The ensemble is trained on the [REDEWIEDERGABE corpus](https://github.com/redewiedergabe/corpus) ([Annotation guidelines](http://redewiedergabe.de/richtlinien/richtlinien.html)), fine-tuning each tagger on the domain-adapted [lkonle/fiction-gbert-large](https://huggingface.co/lkonle/fiction-gbert-large). ([Training Code](https://github.com/cophi-wue/LLpro/blob/main/contrib/train_redewiedergabe.py))
**F1-Scores:**
| STWR type | F1-Score |
|-----------|-----------|
| direct | 90.76 |
| indirect | 79.16 |
| free indirect | 58.00 |
| **reported (this tagger)** | **70.47** |
----
**Demo Usage:**
```python
from flair.data import Sentence
from flair.models import SequenceTagger
sentence = Sentence('Sie sprachen über das Mittagessen. Sie fragte, wo das Essen sei. Woher sollte er das wissen? Dann sagte er: "Ich habe Hunger."')
rwtypes = ['direct', 'indirect', 'freeindirect', 'reported']
for rwtype in rwtypes:
model = SequenceTagger.load(f'aehrm/redewiedergabe-{rwtype}')
model.predict(sentence)
print(rwtype, [ x.data_point.text for x in sentence.get_labels() ])
# >>> direct ['"', 'Ich', 'habe', 'Hunger', '.', '"']
# >>> indirect ['wo', 'das', 'Essen', 'sei', '.']
# >>> freeindirect ['Woher', 'sollte', 'er', 'das', 'wissen', '?']
# >>> reported ['Sie', 'sprachen', 'über', 'das', 'Mittagessen', '.', 'Woher', 'sollte', 'er', 'das', 'wissen', '?']
```
**Cite**:
Please cite the following paper when using this model.
```
@inproceedings{ehrmanntraut-et-al-llpro-2023,
address = {Ingolstadt, Germany},
title = {{LLpro}: A Literary Language Processing Pipeline for {German} Narrative Text},
booktitle = {Proceedings of the 10th Conference on Natural Language Processing ({KONVENS} 2022)},
publisher = {{KONVENS} 2023 Organizers},
author = {Ehrmanntraut, Anton and Konle, Leonard and Jannidis, Fotis},
year = {2023},
}
```
|
{"language": "de", "tags": ["flair", "token-classification", "sequence-tagger-model"]}
|
task
|
[
"TRANSLATION"
] | 45,446 |
gaito-20/test-trainer
|
gaito-20
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-24T21:28:36Z |
2023-10-28T12:54:32+00:00
| 101 | 0 |
---
base_model: bert-base-uncased
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: test-trainer
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- type: accuracy
value: 0.8480392156862745
name: Accuracy
- type: f1
value: 0.8916083916083916
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5841
- Accuracy: 0.8480
- F1: 0.8916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
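For reference, these settings correspond roughly to the following `transformers.TrainingArguments`; this is a hedged reconstruction, since the actual training script is not part of this card, and `output_dir` is illustrative:
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="test-trainer",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```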
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.4703 | 0.7794 | 0.8308 |
| 0.5511 | 2.0 | 918 | 0.6611 | 0.7451 | 0.8424 |
| 0.3958 | 3.0 | 1377 | 0.5841 | 0.8480 | 0.8916 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5841
- Accuracy: 0.8480
- F1: 0.8916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.4703 | 0.7794 | 0.8308 |
| 0.5511 | 2.0 | 918 | 0.6611 | 0.7451 | 0.8424 |
| 0.3958 | 3.0 | 1377 | 0.5841 | 0.8480 | 0.8916 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
{"base_model": "bert-base-uncased", "datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "test-trainer", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "mrpc", "split": "validation", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8480392156862745, "name": "Accuracy"}, {"type": "f1", "value": 0.8916083916083916, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,447 |
TransferGraph/crcb_isear_bert-finetuned-lora-glue_cola
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:glue",
"base_model:crcb/isear_bert",
"base_model:adapter:crcb/isear_bert",
"model-index",
"region:us"
] | 2024-02-28T01:13:35Z |
2024-02-28T01:13:38+00:00
| 0 | 0 |
---
base_model: crcb/isear_bert
datasets:
- glue
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: crcb_isear_bert-finetuned-lora-glue_cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- type: accuracy
value: 0.7871524448705657
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# crcb_isear_bert-finetuned-lora-glue_cola
This model is a fine-tuned version of [crcb/isear_bert](https://huggingface.co/crcb/isear_bert) on the glue dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7872
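To experiment with the adapter, a hedged loading sketch: the model ids come from this card's metadata, and since the base model was trained on a different label set than CoLA, the classification head may need `num_labels`/`ignore_mismatched_sizes` adjustments.
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Base emotion-classification model plus the CoLA LoRA adapter from this card.
# Note: head shapes may differ between the base checkpoint and the adapter.
base = AutoModelForSequenceClassification.from_pretrained("crcb/isear_bert")
model = PeftModel.from_pretrained(base, "TransferGraph/crcb_isear_bert-finetuned-lora-glue_cola")
tokenizer = AutoTokenizer.from_pretrained("crcb/isear_bert")
```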
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4247 | None | 0 |
| 0.7133 | 0.6103 | 0 |
| 0.7699 | 0.5025 | 1 |
| 0.7776 | 0.4444 | 2 |
| 0.7872 | 0.4309 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# crcb_isear_bert-finetuned-lora-glue_cola
This model is a fine-tuned version of [crcb/isear_bert](https://huggingface.co/crcb/isear_bert) on the glue dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4247 | None | 0 |
| 0.7133 | 0.6103 | 0 |
| 0.7699 | 0.5025 | 1 |
| 0.7776 | 0.4444 | 2 |
| 0.7872 | 0.4309 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "crcb/isear_bert", "datasets": ["glue"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "crcb_isear_bert-finetuned-lora-glue_cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "accuracy", "value": 0.7871524448705657, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,448 |
davidkim205/komt-llama2-7b-v1
|
davidkim205
|
text-generation
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"llama-2-chat",
"en",
"ko",
"arxiv:2308.06502",
"arxiv:2308.06259",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 2023-09-16T09:17:34Z |
2024-04-01T05:10:39+00:00
| 44 | 3 |
---
language:
- en
- ko
license: apache-2.0
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- llama-2-chat
inference: false
---
# komt : korean multi-task instruction tuning model

Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities.
However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively.
This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
## Model Details
* **Model Developers** : davidkim(changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **Model Architecture** : komt-llama-2-7b is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning on multi-task instruction data.
## Dataset
korean multi-task instruction dataset
## Hardware and Software
- nvidia driver : 535.54.03
- CUDA Version: 12.2
## Training
Refer to https://github.com/davidkim205/komt
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import TextStreamer, GenerationConfig
model_name='davidkim205/komt-llama2-7b-v1'
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
streamer = TextStreamer(tokenizer)
def gen(x):
generation_config = GenerationConfig(
temperature=0.8,
top_p=0.8,
top_k=100,
max_new_tokens=512,
early_stopping=True,
do_sample=True,
)
q = f"### instruction: {x}\n\n### Response: "
gened = model.generate(
**tokenizer(
q,
return_tensors='pt',
return_token_type_ids=False
).to('cuda'),
generation_config=generation_config,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
streamer=streamer,
)
result_str = tokenizer.decode(gened[0])
start_tag = f"\n\n### Response: "
start_index = result_str.find(start_tag)
if start_index != -1:
result_str = result_str[start_index + len(start_tag):].strip()
return result_str
print(gen('제주도를 1박2일로 혼자 여행하려고 하는데 여행 코스를 만들어줘'))
```
output
```
### Response: 제주도를 1박2일로 혼자 여행하려면 다음과 같은 여행 코스를 만들어 계획할 수 있습니다:
1일차:
- 아침: 제주도의 아름다운 해변을 구경하기 위해 해변에 도착하세요. 일출을 감상하며 자연의 아름다움을 만끽하세요.
- 오후: 제주도의 대표적인 관광지인 한라산을 탐험하세요. 등산로를 따라 올라가면서 경치를 즐기고 설명을 듣으며 쉬운 산책을 즐기세요.
- 저녁: 제주도의 맛있는 음식점에서 저녁을 보내세요. 신선한 해산물과 향신료로 만든 음식을 맛보는 것은 제주도 여행의 완벽한 경험이 될 것입니다.
2일차:
- 아침: 한라산 일대를 탐험하기 위해 한라산 케이프로 이동하세요. 이 케이프는 등산을 즐기는 사람들에게 최적의 선택입니다.
```
## Evaluation
For objective model evaluation, we initially used EleutherAI's lm-evaluation-harness but obtained unsatisfactory results. Consequently, we conducted evaluations using ChatGPT, a widely used model, as described in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06502.pdf) and [Three Ways of Using Large Language Models to Evaluate Chat](https://arxiv.org/pdf/2308.06259.pdf) .
| model | score | average(0~5) | percentage |
| --------------------------------------- | ------- | ------------ | ---------- |
| gpt-3.5-turbo(close) | 147 | 3.97 | 79.45% |
| naver Cue(close) | 140 | 3.78 | 75.67% |
| clova X(close) | 136 | 3.67 | 73.51% |
| WizardLM-13B-V1.2(open) | 96 | 2.59 | 51.89% |
| Llama-2-7b-chat-hf(open) | 67 | 1.81 | 36.21% |
| Llama-2-13b-chat-hf(open) | 73 | 1.91 | 38.37% |
| nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70 | 1.89 | 37.83% |
| kfkas/Llama-2-ko-7b-Chat(open) | 96 | 2.59 | 51.89% |
| beomi/KoAlpaca-Polyglot-12.8B(open) | 100 | 2.70 | 54.05% |
| **komt-llama2-7b-v1 (open)(ours)** | **117** | **3.16** | **63.24%** |
| **komt-llama2-13b-v1 (open)(ours)** | **129** | **3.48** | **69.72%** |
------------------------------------------------
# Original model card: Meta's Llama 2 7B-chat
Meta developed and released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
**Llama 2 family of models.** Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The 70B version uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "Llama-2: Open Foundation and Fine-tuned Chat Models", available at https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/.
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md).
# **Intended Use**
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
# **Hardware and Software**
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
# **Training Data**
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
# **Evaluation Results**
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.
For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
# **Ethical Considerations and Limitations**
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)
| null |
Non_BioNLP
|
# komt : korean multi-task instruction tuning model

Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities.
However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively.
This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
## Model Details
* **Model Developers** : davidkim(changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **Model Architecture** : komt-llama-2-7b is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning on multi-task instruction data.
## Dataset
korean multi-task instruction dataset
## Hardware and Software
- nvidia driver : 535.54.03
- CUDA Version: 12.2
## Training
Refer to https://github.com/davidkim205/komt
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import TextStreamer, GenerationConfig
model_name='davidkim205/komt-llama2-7b-v1'
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
streamer = TextStreamer(tokenizer)
def gen(x):
generation_config = GenerationConfig(
temperature=0.8,
top_p=0.8,
top_k=100,
max_new_tokens=512,
early_stopping=True,
do_sample=True,
)
q = f"### instruction: {x}\n\n### Response: "
gened = model.generate(
**tokenizer(
q,
return_tensors='pt',
return_token_type_ids=False
).to('cuda'),
generation_config=generation_config,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
streamer=streamer,
)
result_str = tokenizer.decode(gened[0])
start_tag = f"\n\n### Response: "
start_index = result_str.find(start_tag)
if start_index != -1:
result_str = result_str[start_index + len(start_tag):].strip()
return result_str
print(gen('제주도를 1박2일로 혼자 여행하려고 하는데 여행 코스를 만들어줘'))
```
output
```
### Response: 제주도를 1박2일로 혼자 여행하려면 다음과 같은 여행 코스를 만들어 계획할 수 있습니다:
1일차:
- 아침: 제주도의 아름다운 해변을 구경하기 위해 해변에 도착하세요. 일출을 감상하며 자연의 아름다움을 만끽하세요.
- 오후: 제주도의 대표적인 관광지인 한라산을 탐험하세요. 등산로를 따라 올라가면서 경치를 즐기고 설명을 듣으며 쉬운 산책을 즐기세요.
- 저녁: 제주도의 맛있는 음식점에서 저녁을 보내세요. 신선한 해산물과 향신료로 만든 음식을 맛보는 것은 제주도 여행의 완벽한 경험이 될 것입니다.
2일차:
- 아침: 한라산 일대를 탐험하기 위해 한라산 케이프로 이동하세요. 이 케이프는 등산을 즐기는 사람들에게 최적의 선택입니다.
```
## Evaluation
For objective model evaluation, we initially used EleutherAI's lm-evaluation-harness but obtained unsatisfactory results. Consequently, we conducted evaluations using ChatGPT, a widely used model, as described in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06502.pdf) and [Three Ways of Using Large Language Models to Evaluate Chat](https://arxiv.org/pdf/2308.06259.pdf) .
| model | score | average(0~5) | percentage |
| --------------------------------------- | ------- | ------------ | ---------- |
| gpt-3.5-turbo(close) | 147 | 3.97 | 79.45% |
| naver Cue(close) | 140 | 3.78 | 75.67% |
| clova X(close) | 136 | 3.67 | 73.51% |
| WizardLM-13B-V1.2(open) | 96 | 2.59 | 51.89% |
| Llama-2-7b-chat-hf(open) | 67 | 1.81 | 36.21% |
| Llama-2-13b-chat-hf(open) | 73 | 1.91 | 38.37% |
| nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70 | 1.89 | 37.83% |
| kfkas/Llama-2-ko-7b-Chat(open) | 96 | 2.59 | 51.89% |
| beomi/KoAlpaca-Polyglot-12.8B(open) | 100 | 2.70 | 54.05% |
| **komt-llama2-7b-v1 (open)(ours)** | **117** | **3.16** | **63.24%** |
| **komt-llama2-13b-v1 (open)(ours)** | **129** | **3.48** | **69.72%** |
------------------------------------------------
# Original model card: Meta's Llama 2 7B-chat
Meta developed and released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
**Llama 2 family of models.** Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The 70B version uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "Llama-2: Open Foundation and Fine-tuned Chat Models", available at https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/.
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md).
# **Intended Use**
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
# **Hardware and Software**
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
# **Training Data**
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
# **Evaluation Results**
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.
For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
# **Ethical Considerations and Limitations**
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)
|
{"language": ["en", "ko"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["facebook", "meta", "pytorch", "llama", "llama-2", "llama-2-chat"], "inference": false}
|
task
|
[
"TRANSLATION"
] | 45,450 |
Danielbrdz/Barcenas-9b
|
Danielbrdz
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:yahma/alpaca-cleaned",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-19T17:06:12Z |
2024-05-19T18:03:55+00:00
| 10 | 0 |
---
datasets:
- yahma/alpaca-cleaned
language:
- en
license: apache-2.0
---
# Barcenas 9B
Barcenas 9B is a powerful language model based on 01-ai/Yi-1.5-9B-Chat and fine-tuned with data from yahma/alpaca-cleaned. This AI model is designed to provide coherent and detailed responses for natural language processing (NLP) tasks.
## Key Features
- **Model Size:** With 9 billion parameters, Barcenas 9B can handle complex tasks and deliver high-quality responses.
- **Model Base:** Derived from the 01-ai/Yi-1.5-9B-Chat model, known for its ability to maintain fluid and natural conversations.
- **Additional Training:** Fine-tuned with data from yahma/alpaca-cleaned, enhancing its ability to understand and generate natural language accurately.
## Applications
Barcenas 9B is ideal for a wide range of applications, including but not limited to:
- **Virtual Assistants:** Provides quick and accurate responses in customer service and personal assistant systems.
- **Content Generation:** Useful for creating articles, blogs, and other written content.
- **Sentiment Analysis:** Capable of interpreting and analyzing emotions in texts, aiding in market research and social media analysis.
- **Machine Translation:** Facilitates text translation with high accuracy and contextual coherence.
## Training and Fine-Tuning
The model was initially trained using the robust and versatile 01-ai/Yi-1.5-9B-Chat, known for its performance in conversational tasks. It was then fine-tuned with the clean and curated data from yahma/alpaca-cleaned, significantly enhancing its ability to understand and generate more natural and contextually appropriate responses.
## Benefits
- **High Performance:** With a large number of parameters and high-quality training data, Barcenas 9B offers exceptional performance in NLP tasks.
- **Versatility:** Adaptable to multiple domains and applications, from customer service to creative content generation.
- **Improved Accuracy:** Fine-tuning with specific data ensures higher accuracy and relevance in the generated responses.
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
| null |
Non_BioNLP
|
# Barcenas 9B
Barcenas 9B is a powerful language model based on 01-ai/Yi-1.5-9B-Chat and fine-tuned with data from yahma/alpaca-cleaned. This AI model is designed to provide coherent and detailed responses for natural language processing (NLP) tasks.
## Key Features
- **Model Size:** With 9 billion parameters, Barcenas 9B can handle complex tasks and deliver high-quality responses.
- **Model Base:** Derived from the 01-ai/Yi-1.5-9B-Chat model, known for its ability to maintain fluid and natural conversations.
- **Additional Training:** Fine-tuned with data from yahma/alpaca-cleaned, enhancing its ability to understand and generate natural language accurately.
## Applications
Barcenas 9B is ideal for a wide range of applications, including but not limited to:
- **Virtual Assistants:** Provides quick and accurate responses in customer service and personal assistant systems.
- **Content Generation:** Useful for creating articles, blogs, and other written content.
- **Sentiment Analysis:** Capable of interpreting and analyzing emotions in texts, aiding in market research and social media analysis.
- **Machine Translation:** Facilitates text translation with high accuracy and contextual coherence.
## Training and Fine-Tuning
The model was initially trained using the robust and versatile 01-ai/Yi-1.5-9B-Chat, known for its performance in conversational tasks. It was then fine-tuned with the clean and curated data from yahma/alpaca-cleaned, significantly enhancing its ability to understand and generate more natural and contextually appropriate responses.
## Benefits
- **High Performance:** With a large number of parameters and high-quality training data, Barcenas 9B offers exceptional performance in NLP tasks.
- **Versatility:** Adaptable to multiple domains and applications, from customer service to creative content generation.
- **Improved Accuracy:** Fine-tuning with specific data ensures higher accuracy and relevance in the generated responses.
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
|
{"datasets": ["yahma/alpaca-cleaned"], "language": ["en"], "license": "apache-2.0"}
|
task
|
[
"TRANSLATION"
] | 45,451 |
RichardErkhov/FinchResearch_-_SiLM-3b-v2-8bits
|
RichardErkhov
| null |
[
"safetensors",
"gpt_neox",
"8-bit",
"bitsandbytes",
"region:us"
] | 2025-01-11T08:33:29Z |
2025-01-11T08:35:04+00:00
| 5 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SiLM-3b-v2 - bnb 8bits
- Model creator: https://huggingface.co/FinchResearch/
- Original model: https://huggingface.co/FinchResearch/SiLM-3b-v2/
Original model description:
---
license: apache-2.0
datasets:
- FinchResearch/AboveTheClouds
language:
- en
---
# SiLM Model Card
## 1. Model Details
- **Model Name**: SiLM (Semantic Inference Language Model)
- **Version**: 1.0
- **Model Type**: Language Model
## 2. Overview
SiLM (Semantic Inference Language Model) is a state-of-the-art language model developed by [Your Organization/Research Team Name] to perform semantic inference tasks. It is designed to generate responses to prompts with a focus on understanding and inferring the underlying meaning of the input. SiLM has been fine-tuned on a diverse and extensive dataset known as the "AboveTheClouds" dataset, which provides a wide range of linguistic patterns and domains.
## 3. Dataset Information
### 3.1. AboveTheClouds Dataset
- **Dataset Source**: FinchResearch
- **Description**: The AboveTheClouds dataset is a comprehensive and diverse collection of text data from various sources, including books, articles, websites, and more. This dataset serves as the foundation for fine-tuning SiLM, ensuring that the model is exposed to a broad range of linguistic patterns and domains. It includes a vast amount of text data to train SiLM effectively in understanding semantic relationships and making accurate inferences.
## 4. Model Capabilities
SiLM is designed to excel in semantic inference tasks. It understands and generates responses based on the input prompts using the following template:
```
### Human: {prompt}
### Assistant:
```
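A hedged usage sketch with this template: the repository id is taken from this page's header, the question is arbitrary, and loading the pre-quantized 8-bit weights assumes `bitsandbytes` is installed.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "RichardErkhov/FinchResearch_-_SiLM-3b-v2-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format the prompt with the template above and generate a reply.
prompt = "### Human: What is semantic inference?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```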
## Some of the key capabilities and use cases of SiLM include:
- Semantic Understanding: SiLM can comprehend the semantic context of input prompts and generate coherent and contextually relevant responses.
- Natural Language Generation: It is capable of generating human-like text responses that are contextually appropriate and grammatically correct.
- Inference and Reasoning: SiLM can make inferences based on the information provided in the prompt, making it suitable for tasks involving reasoning and deduction.
- Question Answering: SiLM can answer questions, provide explanations, and generate informative responses to queries.
- Content Generation: It can be used to generate content for a wide range of applications, including chatbots, virtual assistants, and content creation tools.
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SiLM-3b-v2 - bnb 8bits
- Model creator: https://huggingface.co/FinchResearch/
- Original model: https://huggingface.co/FinchResearch/SiLM-3b-v2/
Original model description:
---
license: apache-2.0
datasets:
- FinchResearch/AboveTheClouds
language:
- en
---
# SiLM Model Card
## 1. Model Details
- **Model Name**: SiLM (Semantic Inference Language Model)
- **Version**: 1.0
- **Model Type**: Language Model
## 2. Overview
SiLM (Semantic Inference Language Model) is a state-of-the-art language model developed by [Your Organization/Research Team Name] to perform semantic inference tasks. It is designed to generate responses to prompts with a focus on understanding and inferring the underlying meaning of the input. SiLM has been fine-tuned on a diverse and extensive dataset known as the "AboveTheClouds" dataset, which provides a wide range of linguistic patterns and domains.
## 3. Dataset Information
### 3.1. AboveTheClouds Dataset
- **Dataset Source**: FinchResearch
- **Description**: The AboveTheClouds dataset is a comprehensive and diverse collection of text data from various sources, including books, articles, websites, and more. This dataset serves as the foundation for fine-tuning SiLM, ensuring that the model is exposed to a broad range of linguistic patterns and domains. It includes a vast amount of text data to train SiLM effectively in understanding semantic relationships and making accurate inferences.
## 4. Model Capabilities
SiLM is designed to excel in semantic inference tasks. It understands and generates responses based on the input prompts using the following template:
```
### Human: {prompt}
### Assistant:
```
## Some of the key capabilities and use cases of SiLM include:
- Semantic Understanding: SiLM can comprehend the semantic context of input prompts and generate coherent and contextually relevant responses.
- Natural Language Generation: It is capable of generating human-like text responses that are contextually appropriate and grammatically correct.
- Inference and Reasoning: SiLM can make inferences based on the information provided in the prompt, making it suitable for tasks involving reasoning and deduction.
- Question Answering: SiLM can answer questions, provide explanations, and generate informative responses to queries.
- Content Generation: It can be used to generate content for a wide range of applications, including chatbots, virtual assistants, and content creation tools.
|
{}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,452 |
milistu/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-lambdaloss_v1
|
milistu
|
text-classification
|
[
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"text-classification",
"generated_from_trainer",
"dataset_size:82326",
"loss:LambdaLoss",
"en",
"dataset:microsoft/ms_marco",
"arxiv:1908.10084",
"base_model:microsoft/MiniLM-L12-H384-uncased",
"base_model:finetune:microsoft/MiniLM-L12-H384-uncased",
"region:us"
] | 2025-02-18T10:38:07Z |
2025-04-02T19:02:16+00:00
| 30 | 0 |
---
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
language:
- en
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
pipeline_tag: text-classification
tags:
- sentence-transformers
- cross-encoder
- text-classification
- generated_from_trainer
- dataset_size:82326
- loss:LambdaLoss
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
results: []
---
# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("Studeni/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-lambdaloss")
# Get scores for pairs of texts
pairs = [
['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'How many calories in an egg',
[
'There are on average between 55 and 80 calories in an egg depending on its size.',
'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
'Most of the calories in an egg come from the yellow yolk in the center.',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>CERerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CERerankingEvaluator)
| Metric | NanoMSMARCO | NanoNFCorpus | NanoNQ |
|:------------|:---------------------|:---------------------|:---------------------|
| map | 0.5185 (+0.0289) | 0.3307 (+0.0603) | 0.5630 (+0.1423) |
| mrr@10 | 0.5102 (+0.0327) | 0.5466 (+0.0468) | 0.5730 (+0.1464) |
| **ndcg@10** | **0.5876 (+0.0472)** | **0.3699 (+0.0449)** | **0.6260 (+0.1253)** |
#### Cross Encoder Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>CENanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CENanoBEIREvaluator)
| Metric | Value |
|:------------|:---------------------|
| map | 0.4707 (+0.0772) |
| mrr@10 | 0.5433 (+0.0753) |
| **ndcg@10** | **0.5278 (+0.0725)** |
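To sanity-check the reranker locally, the same evaluator class can be run on hand-made samples. A minimal sketch, assuming the `CERerankingEvaluator` API linked above; the passages here are illustrative, not taken from NanoBEIR:

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CERerankingEvaluator

model = CrossEncoder("Studeni/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-lambdaloss")

# Each sample pairs a query with known relevant and irrelevant passages.
samples = [
    {
        "query": "How many calories in an egg",
        "positive": ["There are on average between 55 and 80 calories in an egg."],
        "negative": ["Countertops extend out from the face frame of the cabinets."],
    },
]

evaluator = CERerankingEvaluator(samples, name="toy-rerank")
results = evaluator(model)  # reranking quality (MRR@10 by default)
print(results)
```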
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### ms_marco
* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 82,326 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
| | query | docs | labels |
|:--------|:----------------------------------------------------------------------------------------------|:------------------------------------|:------------------------------------|
| type | string | list | list |
| details | <ul><li>min: 9 characters</li><li>mean: 34.34 characters</li><li>max: 91 characters</li></ul> | <ul><li>size: 10 elements</li></ul> | <ul><li>size: 10 elements</li></ul> |
* Samples:
| query | docs | labels |
|:----------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>what does tolterodine do</code> | <code>['Tolterodine (Detrol, Detrusitol) is an antimuscarinic drug that is used for symptomatic treatment of urinary incontinence. It is marketed by Pfizer in Canada and the United States by its brand name Detrol. In Egypt it is also found under the trade names Tolterodine by Sabaa and Incont L.A. by Adwia. Detrusor overactivity (DO, contraction of the muscular bladder wall) is the most common form of UI in older adults. It is characterized by uninhibited bladder contractions causing an uncontrollable urge to void. Urinary frequency, urge incontinence and nocturnal incontinence occur.', 'Tolterodine reduces spasms of the bladder muscles. Tolterodine is used to treat overactive bladder with symptoms of urinary frequency, urgency, and incontinence. Tolterodine may also be used for purposes not listed in this medication guide. You should not take this medication if you are allergic to tolterodine or fesoterodine (Toviaz), if you have untreated or uncontrolled narrow-angle glaucoma, or if you ha...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
| <code>why no dairy when taking ciprofloxacin</code> | <code>['Do not take ciprofloxacin with dairy products such as milk or yogurt, or with calcium-fortified juice. You may eat or drink these products as part of a regular meal, but do not use them alone when taking ciprofloxacin. They could make the medication less effective.', 'If your healthcare provider prescribes this medication, it is important to understand some precautions for using this drug. For instance, you should not take ciprofloxacin with dairy products alone (such as milk or yogurt) or with calcium-fortified juices (such as orange juice).', 'Do not take this medicine alone with milk, yogurt, or other dairy products. Do not drink any juice with calcium added when you take this medicine. It is okay to have dairy products or juice as part of a larger meal', 'Do not take ciprofloxacin with dairy products or calcium-fortified juice alone; you can, however, take ciprofloxacin with a meal that includes these...', 'You should not use ciprofloxacin if: 1 you are also taking tizanidine (Z...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
| <code>standard depth of countertops overhang</code> | <code>['Overhang. Countertops extend out from the face frame of the cabinets and just over the cabinet doors. This is called the overhang. Standard cabinet frames are 24 inches deep with 3/4 inch to 1 inch thick doors. Most countertops have a 1 inch overhang to make a standard depth of 25 inches. While there are many different materials to use for countertops, most come in a standard thickness of 1 1/2 inches.', 'Hanging Out on an Island. The standard overhang of an island countertop -- on the side designed to sit at and tuck stools underneath -- is 12 inches. If you plan to extend the counter farther, you need to add supports such as legs, or wood corbels or metal L-brackets that extend half the overhang’s distance.', 'The standard vanity counter top depth. Usually countertops overhang the doors by about one half of an inch. So, if your finished box size, including the door is twenty one and three quarters inches deep, then your finished top will be 22 1/4” in depth. The cut size should be ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>LambdaLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#lambdaloss) with these parameters:
```json
{
"weighing_scheme": "LambdaRankScheme",
"k": 10,
"sigma": 1.0,
"eps": 1e-10,
"pad_value": -1,
"reduction": "mean",
"reduction_log": "binary",
"activation_fct": null
}
```
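For orientation, a compact training sketch in the spirit of this configuration — a minimal sketch, assuming the v4-style `CrossEncoderTrainer` API; the toy rows below merely illustrate the (`query`, `docs`, `labels`) shape described above, and the exact `LambdaLoss` keyword names should be checked against the linked documentation:

```python
from datasets import Dataset
from sentence_transformers.cross_encoder import CrossEncoder, CrossEncoderTrainer
from sentence_transformers.cross_encoder.losses import LambdaLoss

model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

# Toy stand-in for the reshaped MS MARCO data: one query, a list of candidate
# passages, and per-passage relevance labels (the real set has 10 docs per query).
train_dataset = Dataset.from_dict({
    "query": ["how many calories in an egg"],
    "docs": [["There are 55 to 80 calories in an egg.", "Eggs are oval."]],
    "labels": [[1, 0]],
})

# k and sigma mirror the parameters above; the original run also used a
# LambdaRank weighting scheme, whose exact import is omitted here.
loss = LambdaLoss(model, k=10, sigma=1.0)

trainer = CrossEncoderTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```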
### Evaluation Dataset
#### ms_marco
* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 82,326 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
| | query | docs | labels |
|:--------|:-----------------------------------------------------------------------------------------------|:------------------------------------|:------------------------------------|
| type | string | list | list |
| details | <ul><li>min: 11 characters</li><li>mean: 33.63 characters</li><li>max: 99 characters</li></ul> | <ul><li>size: 10 elements</li></ul> | <ul><li>size: 10 elements</li></ul> |
* Samples:
| query | docs | labels |
|:----------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>define monogenic trait</code> | <code>['An allele is a version of a gene. For example, in fruitflies there is a gene which determines eye colour: one allele gives red eyes, and another gives white eyes; it is the same *gene*, just different versions of that gene. A monogenic trait is one which is encoded by a single gene. e.g. - cystic fibrosis in humans. There is a single gene which determines this trait: the wild-type allele is healthy, while the disease allele gives you cystic fibrosis', 'Abstract. Monogenic inheritance refers to genetic control of a phenotype or trait by a single gene. For a monogenic trait, mutations in one (dominant) or both (recessive) copies of the gene are sufficient for the trait to be expressed. Digenic inheritance refers to mutation on two genes interacting to cause a genetic phenotype or disease. Triallelic inheritance is a special case of digenic inheritance that requires homozygous mutations at one locus and heterozygous mutations at a second locus to express a phenotype.', 'A trait that is ...</code> | <code>[1, 1, 0, 0, 0, ...]</code> |
| <code>behavioral theory definition</code> | <code>["Not to be confused with Behavioralism. Behaviorism (or behaviourism) is an approach to psychology that focuses on an individual's behavior. It combines elements of philosophy, methodology, and psychological theory", 'The initial assumption is that behavior can be explained and further described using behavioral theories. For instance, John Watson and B.F. Skinner advocate the theory that behavior can be acquired through conditioning. Also known as general behavior theory. BEHAVIOR THEORY: Each behavioral theory is an advantage to learning, because it provides teachers with a new and different approach.. No related posts. ', 'behaviorism. noun be·hav·ior·ism. : a school of psychology that takes the objective evidence of behavior (as measured responses to stimuli) as the only concern of its research and the only basis of its theory without reference to conscious experience—compare cognitive psychology. : a school of psychology that takes the objective evidence of behavior (as measured ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
| <code>What is a disease that is pleiotropic?</code> | <code>['Unsourced material may be challenged and removed. (September 2013). Pleiotropy occurs when one gene influences two or more seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect. Consequently, a mutation in a pleiotropic gene may have an effect on some or all traits simultaneously. The underlying mechanism is that the gene codes for a product that is, for example, used by various cells, or has a signaling function on various targets. A classic example of pleiotropy is the human disease phenylketonuria (PKU).', 'Pleiotropic, autosomal dominant disorder affecting connective tissue: Related Diseases. Pleiotropic, autosomal dominant disorder affecting connective tissue: Pleiotropic, autosomal dominant disorder affecting connective tissue is listed as a type of (or associated with) the following medical conditions in our database: 1 Heart conditions. Office of Rare Diseases (ORD) of ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>LambdaLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#lambdaloss) with these parameters:
```json
{
"weighing_scheme": "LambdaRankScheme",
"k": 10,
"sigma": 1.0,
"eps": 1e-10,
"pad_value": -1,
"reduction": "mean",
"reduction_log": "binary",
"activation_fct": null
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `torch_empty_cache_steps`: 2000
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: 2000
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_ndcg@10 | NanoNFCorpus_ndcg@10 | NanoNQ_ndcg@10 | NanoBEIR_mean_ndcg@10 |
|:----------:|:---------:|:-------------:|:---------------:|:--------------------:|:--------------------:|:--------------------:|:---------------------:|
| -1 | -1 | - | - | 0.1127 (-0.4278) | 0.2057 (-0.1193) | 0.0150 (-0.4857) | 0.1111 (-0.3443) |
| 0.0001 | 1 | 0.0767 | - | - | - | - | - |
| 0.0430 | 500 | 0.0864 | - | - | - | - | - |
| 0.0861 | 1000 | 0.0931 | - | - | - | - | - |
| 0.1291 | 1500 | 0.0896 | - | - | - | - | - |
| 0.1721 | 2000 | 0.0832 | 0.0786 | 0.4801 (-0.0603) | 0.3282 (+0.0031) | 0.5660 (+0.0654) | 0.4581 (+0.0028) |
| 0.2152 | 2500 | 0.0803 | - | - | - | - | - |
| 0.2582 | 3000 | 0.0776 | - | - | - | - | - |
| 0.3013 | 3500 | 0.0775 | - | - | - | - | - |
| 0.3443 | 4000 | 0.0761 | 0.0729 | 0.5320 (-0.0084) | 0.3207 (-0.0043) | 0.6709 (+0.1702) | 0.5079 (+0.0525) |
| 0.3873 | 4500 | 0.0769 | - | - | - | - | - |
| 0.4304 | 5000 | 0.0736 | - | - | - | - | - |
| 0.4734 | 5500 | 0.0733 | - | - | - | - | - |
| 0.5164 | 6000 | 0.0728 | 0.0717 | 0.5413 (+0.0009) | 0.3416 (+0.0165) | 0.6304 (+0.1297) | 0.5044 (+0.0491) |
| 0.5595 | 6500 | 0.0742 | - | - | - | - | - |
| 0.6025 | 7000 | 0.0716 | - | - | - | - | - |
| 0.6456 | 7500 | 0.0729 | - | - | - | - | - |
| 0.6886 | 8000 | 0.0717 | 0.0726 | 0.5766 (+0.0362) | 0.3229 (-0.0021) | 0.5439 (+0.0433) | 0.4811 (+0.0258) |
| 0.7316 | 8500 | 0.0724 | - | - | - | - | - |
| 0.7747 | 9000 | 0.0723 | - | - | - | - | - |
| 0.8177 | 9500 | 0.0696 | - | - | - | - | - |
| 0.8607 | 10000 | 0.0703 | 0.0688 | 0.5840 (+0.0436) | 0.3482 (+0.0231) | 0.6047 (+0.1040) | 0.5123 (+0.0569) |
| 0.9038 | 10500 | 0.0718 | - | - | - | - | - |
| 0.9468 | 11000 | 0.0709 | - | - | - | - | - |
| 0.9898 | 11500 | 0.0704 | - | - | - | - | - |
| 1.0329 | 12000 | 0.0666 | 0.0694 | 0.5643 (+0.0238) | 0.3048 (-0.0202) | 0.5767 (+0.0761) | 0.4819 (+0.0266) |
| 1.0759 | 12500 | 0.0665 | - | - | - | - | - |
| 1.1190 | 13000 | 0.0658 | - | - | - | - | - |
| 1.1620 | 13500 | 0.0655 | - | - | - | - | - |
| 1.2050 | 14000 | 0.0657 | 0.0698 | 0.5976 (+0.0572) | 0.3538 (+0.0287) | 0.6231 (+0.1224) | 0.5248 (+0.0695) |
| 1.2481 | 14500 | 0.0644 | - | - | - | - | - |
| 1.2911 | 15000 | 0.065 | - | - | - | - | - |
| 1.3341 | 15500 | 0.066 | - | - | - | - | - |
| 1.3772 | 16000 | 0.0649 | 0.0680 | 0.5993 (+0.0589) | 0.3362 (+0.0112) | 0.6127 (+0.1120) | 0.5161 (+0.0607) |
| 1.4202 | 16500 | 0.0655 | - | - | - | - | - |
| 1.4632 | 17000 | 0.0638 | - | - | - | - | - |
| 1.5063 | 17500 | 0.0676 | - | - | - | - | - |
| 1.5493 | 18000 | 0.0645 | 0.0672 | 0.5703 (+0.0299) | 0.3530 (+0.0280) | 0.5643 (+0.0637) | 0.4959 (+0.0405) |
| 1.5924 | 18500 | 0.0646 | - | - | - | - | - |
| 1.6354 | 19000 | 0.0636 | - | - | - | - | - |
| 1.6784 | 19500 | 0.0671 | - | - | - | - | - |
| 1.7215 | 20000 | 0.0646 | 0.0678 | 0.6072 (+0.0667) | 0.3586 (+0.0335) | 0.5840 (+0.0834) | 0.5166 (+0.0612) |
| 1.7645 | 20500 | 0.0656 | - | - | - | - | - |
| 1.8075 | 21000 | 0.0623 | - | - | - | - | - |
| 1.8506 | 21500 | 0.0649 | - | - | - | - | - |
| 1.8936 | 22000 | 0.0636 | 0.0672 | 0.5940 (+0.0536) | 0.3503 (+0.0252) | 0.5898 (+0.0891) | 0.5114 (+0.0560) |
| 1.9367 | 22500 | 0.0632 | - | - | - | - | - |
| 1.9797 | 23000 | 0.0646 | - | - | - | - | - |
| 2.0227 | 23500 | 0.0614 | - | - | - | - | - |
| 2.0658 | 24000 | 0.0572 | 0.0692 | 0.5824 (+0.0420) | 0.3678 (+0.0428) | 0.5803 (+0.0796) | 0.5102 (+0.0548) |
| 2.1088 | 24500 | 0.0568 | - | - | - | - | - |
| 2.1518 | 25000 | 0.0577 | - | - | - | - | - |
| 2.1949 | 25500 | 0.0575 | - | - | - | - | - |
| 2.2379 | 26000 | 0.0579 | 0.0704 | 0.5830 (+0.0425) | 0.3662 (+0.0411) | 0.5855 (+0.0849) | 0.5116 (+0.0562) |
| 2.2809 | 26500 | 0.0583 | - | - | - | - | - |
| 2.3240 | 27000 | 0.0572 | - | - | - | - | - |
| 2.3670 | 27500 | 0.058 | - | - | - | - | - |
| **2.4101** | **28000** | **0.0581** | **0.069** | **0.5876 (+0.0472)** | **0.3699 (+0.0449)** | **0.6260 (+0.1253)** | **0.5278 (+0.0725)** |
| 2.4531 | 28500 | 0.0563 | - | - | - | - | - |
| 2.4961 | 29000 | 0.0564 | - | - | - | - | - |
| 2.5392 | 29500 | 0.057 | - | - | - | - | - |
| 2.5822 | 30000 | 0.0568 | 0.0696 | 0.5862 (+0.0458) | 0.3753 (+0.0502) | 0.5947 (+0.0940) | 0.5187 (+0.0634) |
| 2.6252 | 30500 | 0.0574 | - | - | - | - | - |
| 2.6683 | 31000 | 0.0579 | - | - | - | - | - |
| 2.7113 | 31500 | 0.0577 | - | - | - | - | - |
| 2.7543 | 32000 | 0.056 | 0.0700 | 0.5598 (+0.0194) | 0.3712 (+0.0462) | 0.5826 (+0.0819) | 0.5045 (+0.0492) |
| 2.7974 | 32500 | 0.0579 | - | - | - | - | - |
| 2.8404 | 33000 | 0.0575 | - | - | - | - | - |
| 2.8835 | 33500 | 0.0567 | - | - | - | - | - |
| 2.9265 | 34000 | 0.0548 | 0.0700 | 0.5856 (+0.0452) | 0.3734 (+0.0484) | 0.5875 (+0.0869) | 0.5155 (+0.0601) |
| 2.9695 | 34500 | 0.059 | - | - | - | - | - |
| -1 | -1 | - | - | 0.5876 (+0.0472) | 0.3699 (+0.0449) | 0.6260 (+0.1253) | 0.5278 (+0.0725) |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### LambdaLoss
```bibtex
@article{wang2018lambdaloss,
title={The LambdaLoss Framework for Ranking Metric Optimization},
author={Wang, Xuanhui and Li, Cheng and Golbandi, Nadav and Bendersky, Michael and Najork, Marc},
journal={Proceedings of the 27th ACM International Conference on Information and Knowledge Management},
pages={1313--1322},
year={2018}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
|
{"base_model": "microsoft/MiniLM-L12-H384-uncased", "datasets": ["microsoft/ms_marco"], "language": ["en"], "library_name": "sentence-transformers", "metrics": ["map", "mrr@10", "ndcg@10"], "pipeline_tag": "text-ranking", "tags": ["sentence-transformers", "cross-encoder", "text-classification", "generated_from_trainer", "dataset_size:82326", "loss:LambdaLoss"], "model-index": [{"name": "CrossEncoder based on microsoft/MiniLM-L12-H384-uncased", "results": []}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,453 |
TheBloke/nucleus-22B-token-500B-GPTQ
|
TheBloke
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"base_model:NucleusAI/nucleus-22B-token-500B",
"base_model:quantized:NucleusAI/nucleus-22B-token-500B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | 2023-11-18T22:25:12Z |
2023-11-18T23:21:32+00:00
| 11 | 2 |
---
base_model: NucleusAI/nucleus-22B-token-500B
language:
- en
license: mit
model_name: Nucleus 22B Token 500B
inference: false
model_creator: NucleusAI
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nucleus 22B Token 500B - GPTQ
- Model creator: [NucleusAI](https://huggingface.co/NucleusAI)
- Original model: [Nucleus 22B Token 500B](https://huggingface.co/NucleusAI/nucleus-22B-token-500B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [NucleusAI's Nucleus 22B Token 500B](https://huggingface.co/NucleusAI/nucleus-22B-token-500B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/nucleus-22B-token-500B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GGUF)
* [NucleusAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NucleusAI/nucleus-22B-token-500B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 11.58 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 11.99 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 13.24 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 9.29 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 22.28 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 10.48 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 22.77 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
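If you want to double-check which quantisation parameters a downloaded branch actually uses, you can read its `quantize_config.json` directly. Below is a minimal sketch using the `huggingface_hub` library; the branch name is just an example, and the exact set of keys may vary between quants:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch quantize_config.json from one specific branch of this repo
config_path = hf_hub_download(
    repo_id="TheBloke/nucleus-22B-token-500B-GPTQ",
    filename="quantize_config.json",
    revision="gptq-4bit-128g-actorder_True",  # any branch from the table above
)

with open(config_path) as f:
    quantize_config = json.load(f)

# Typical keys include: bits, group_size, desc_act (Act Order) and damp_percent
print(quantize_config)
```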
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/nucleus-22B-token-500B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/nucleus-22B-token-500B-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `nucleus-22B-token-500B-GPTQ`:
```shell
mkdir nucleus-22B-token-500B-GPTQ
huggingface-cli download TheBloke/nucleus-22B-token-500B-GPTQ --local-dir nucleus-22B-token-500B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir nucleus-22B-token-500B-GPTQ
huggingface-cli download TheBloke/nucleus-22B-token-500B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir nucleus-22B-token-500B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir nucleus-22B-token-500B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/nucleus-22B-token-500B-GPTQ --local-dir nucleus-22B-token-500B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
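If you'd rather stay in Python than shell out to `huggingface-cli`, the equivalent download can be done with `snapshot_download`. This is a sketch only; the branch name is an example:

```python
import os

# Optional: enable accelerated downloads (requires `pip3 install hf_transfer`).
# Set this before importing huggingface_hub so it is picked up.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

# Download one branch of the repo into a local folder
snapshot_download(
    repo_id="TheBloke/nucleus-22B-token-500B-GPTQ",
    revision="gptq-4bit-128g-actorder_True",
    local_dir="nucleus-22B-token-500B-GPTQ",
    local_dir_use_symlinks=False,
)
```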
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/nucleus-22B-token-500B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/nucleus-22B-token-500B-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `nucleus-22B-token-500B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/nucleus-22B-token-500B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/nucleus-22B-token-500B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: NucleusAI's Nucleus 22B Token 500B
# 🚀 Nucleus-22B-token-500B
**Nucleus-22B-token-500B is a 22B-parameter causal decoder-only model built by Nucleus.AI and trained on 500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) along with curated corpora. It is made available under the MIT license.**
*1T-token model coming soon* 😊.
## What about Nucleus-22B-token-500B?
* **It performs well compared to similar-size open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It is made available under an MIT license**.
* **It was trained by a small team of four who are passionate about open source.**
⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.**
# Model Card for Nucleus-22B-token-500B
## Model Details
### Model Description
- **Developed by:** NucleusAI;
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English;
- **License:** MIT.
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbots, etc.)
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Nucleus-22B-token-500B is trained on English data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend that users of Nucleus-22B-token-500B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
## How to Get Started with the Model
## Training Details
### Training Data
Nucleus-22B-token-500B was trained on 500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), along with other corpora.
| **Data source** | **Fraction** | **Tokens** | **Sources** |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 75% | 200B | massive web crawl |
| Books | 7% | 21B | |
| Code | 7% | 21B | Big Code, CodeNet |
| Technical | 6% | 19B | arXiv |
| Math | 5% | 17B | Mathematica, Khan Academy |
The data was tokenized with a tokenizer similar to that of [Llama-7B](https://huggingface.co/meta-llama/Llama-2-7b).
### Training Procedure
Nucleus-22B-token-500B was trained on 256 A100 80GB GPUs, using FSDP (Fully Sharded Data Parallel).
#### Training Hyperparameters
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate      | 2e-4       | 8B tokens warm-up, cosine decay to 1e-5   |
| Weight decay | 1e-1 | |
| Batch size | 2048 | constant |
| Context length | 2048 | constant |
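For illustration only, the schedule in the table above (8B-token warm-up, cosine decay from 2e-4 to 1e-5) can be written out explicitly. This is not the team's training code; the linear warm-up shape and the 500B-token horizon are assumptions:

```python
import math

def learning_rate(tokens_seen: float,
                  peak_lr: float = 2e-4,
                  min_lr: float = 1e-5,
                  warmup_tokens: float = 8e9,
                  total_tokens: float = 500e9) -> float:
    """Linear warm-up to peak_lr, then cosine decay down to min_lr."""
    if tokens_seen < warmup_tokens:
        return peak_lr * tokens_seen / warmup_tokens
    progress = (tokens_seen - warmup_tokens) / (total_tokens - warmup_tokens)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(learning_rate(4e9))    # mid warm-up: 1e-4
print(learning_rate(500e9))  # end of training: 1e-5
```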
#### Speeds, Sizes, Times
Training happened in early August 2023 and took about two weeks.
| null |
Non_BioNLP
|
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nucleus 22B Token 500B - GPTQ
- Model creator: [NucleusAI](https://huggingface.co/NucleusAI)
- Original model: [Nucleus 22B Token 500B](https://huggingface.co/NucleusAI/nucleus-22B-token-500B)
<!-- description start -->
# Description
This repo contains GPTQ model files for [NucleusAI's Nucleus 22B Token 500B](https://huggingface.co/NucleusAI/nucleus-22B-token-500B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/nucleus-22B-token-500B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GGUF)
* [NucleusAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NucleusAI/nucleus-22B-token-500B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 11.58 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 11.99 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 13.24 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 9.29 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 22.28 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 10.48 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 2048 | 22.77 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/nucleus-22B-token-500B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/nucleus-22B-token-500B-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `nucleus-22B-token-500B-GPTQ`:
```shell
mkdir nucleus-22B-token-500B-GPTQ
huggingface-cli download TheBloke/nucleus-22B-token-500B-GPTQ --local-dir nucleus-22B-token-500B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir nucleus-22B-token-500B-GPTQ
huggingface-cli download TheBloke/nucleus-22B-token-500B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir nucleus-22B-token-500B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir nucleus-22B-token-500B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/nucleus-22B-token-500B-GPTQ --local-dir nucleus-22B-token-500B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/nucleus-22B-token-500B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/nucleus-22B-token-500B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/nucleus-22B-token-500B-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `nucleus-22B-token-500B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/nucleus-22B-token-500B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/nucleus-22B-token-500B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: NucleusAI's Nucleus 22B Token 500B
# 🚀 Nucleus-22B-token-500B
**Nucleus-22B-token-500B is a 22B-parameter causal decoder-only model built by Nucleus.AI and trained on 500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) along with curated corpora. It is made available under the MIT license.**
*1T-token model coming soon* 😊.
## What about Nucleus-22B-token-500B?
* **It performs well compared to similar-size open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It is made available under an MIT license**.
* **It was trained by a small team of four who are passionate about open source.**
⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.**
# Model Card for Nucleus-22B-token-500B
## Model Details
### Model Description
- **Developed by:** NucleusAI;
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English;
- **License:** MIT.
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbots, etc.)
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Nucleus-22B-token-500B is trained on English data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend that users of Nucleus-22B-token-500B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
## How to Get Started with the Model
## Training Details
### Training Data
Nucleus-22B-token-500B was trained on 500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), along with other corpora.
| **Data source** | **Fraction** | **Tokens** | **Sources** |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 75% | 200B | massive web crawl |
| Books | 7% | 21B | |
| Code | 7% | 21B | Big Code, CodeNet |
| Technical | 6% | 19B | arXiv |
| Math | 5% | 17B | Mathematica, Khan Academy |
The data was tokenized with a tokenizer similar to that of [Llama-7B](https://huggingface.co/meta-llama/Llama-2-7b).
### Training Procedure
Nucleus-22B-token-500B was trained on 256 A100 80GB GPUs, using FSDP (Fully Sharded Data Parallel).
#### Training Hyperparameters
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate      | 2e-4       | 8B tokens warm-up, cosine decay to 1e-5   |
| Weight decay | 1e-1 | |
| Batch size | 2048 | constant |
| Context length | 2048 | constant |
#### Speeds, Sizes, Times
Training happened in early August 2023 and took about two weeks.
|
{"base_model": "NucleusAI/nucleus-22B-token-500B", "language": ["en"], "license": "mit", "model_name": "Nucleus 22B Token 500B", "inference": false, "model_creator": "NucleusAI", "model_type": "llama", "prompt_template": "{prompt}\n", "quantized_by": "TheBloke"}
|
task
|
[
"SUMMARIZATION"
] | 45,454 |
DariaaaS/e5-fine-tuned
|
DariaaaS
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:100",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:intfloat/multilingual-e5-base",
"base_model:finetune:intfloat/multilingual-e5-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-29T15:31:57Z |
2024-08-29T15:32:42+00:00
| 8 | 0 |
---
base_model: intfloat/multilingual-e5-base
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:100
- loss:TripletLoss
widget:
- source_sentence: What is the average household income in the city known as "Danzig"?
sentences:
- the most bad aliases refer to MAX(COUNT(bad_alias));
- Greeneville is the city;
- average household income refers to avg_income_per_household; city known as "Danzig"
refers to bad_alias = 'Danzig';
- source_sentence: What is the average household income in the city known as "Danzig"?
sentences:
- '"Berlin, NH" is the CBSA_name'
- '"Puerto Rico" refers to state = ''PR'''
- average household income refers to avg_income_per_household; city known as "Danzig"
refers to bad_alias = 'Danzig';
- source_sentence: What is the country and state of the city named Dalton?
sentences:
- median age over 40 refers to median_age > 40
- DIVIDE(SUBTRACT(SUM(population_2020)), SUM(population_2010)), SUM(population_2010)
as percentage where county = 'ARROYO';
- Dalton is the city;
- source_sentence: What is the country and state of the city named Dalton?
sentences:
- community post office type refers to type = 'Community Post Office'; elevation
above 6000 refers to elevation > 6000;
- Dalton is the city;
- '"Berlin, NH" is the CBSA_name'
- source_sentence: List 10 cities with a median age over 40. Include their zip codes
and area codes.
sentences:
- '"URB San Joaquin" is the bad_alias'
- in California refers to name = 'California' and state = 'CA'; 'Community Post
Office' is the Type
- median age over 40 refers to median_age > 40
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision d13f1b27baf31030b7fd040960d60d909913633f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
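To make the three modules above concrete, here is a rough hand-rolled equivalent of the forward pass (mean pooling over non-padding tokens, then L2 normalisation) using plain `transformers`. In practice you should load the model through `sentence-transformers` as shown in the Usage section below:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DariaaaS/e5-fine-tuned")
model = AutoModel.from_pretrained("DariaaaS/e5-fine-tuned")

batch = tokenizer(["What is the country and state of the city named Dalton?"],
                  padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 768)

# Module (1): mean pooling over non-padding tokens
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# Module (2): L2 normalisation, so cosine similarity reduces to a dot product
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])
```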
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("DariaaaS/e5-fine-tuned")
# Run inference
sentences = [
'List 10 cities with a median age over 40. Include their zip codes and area codes.',
'median age over 40 refers to median_age > 40',
"in California refers to name = 'California' and state = 'CA'; 'Community Post Office' is the Type",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 100 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 19.8 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 21.3 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 27.57 tokens</li><li>max: 54 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:---------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------|
| <code>Among all the residential areas in Delaware, how many of them implement daylight saving?</code> | <code>"Delaware" is a county; implement daylight savings refers to daylight_saving = 'Yes'</code> | <code>DIVIDE(COUNT(zip_code where type = 'Post Office'), COUNT(zip_code)) as percentage where name = 'California';</code> |
| <code>What is the country and state of the city named Dalton?</code> | <code>Dalton is the city;</code> | <code>average male median age refers to Divide (Sum(male_median_age), Count(county)); 'WINDHAM' is the county</code> |
| <code>Among the residential areas with the bad alias "Internal Revenue Service", how many of them are in the Eastern time zone?</code> | <code>"Internal Revenue Service" is the bad_alias; in Eastern time zone refers to time_zone = 'Eastern'</code> | <code>"Berlin, NH" is the CBSA_name</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
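For reference, a loss configured with these parameters corresponds roughly to the following construction in `sentence-transformers` (assuming the `(anchor, positive, negative)` column order shown in the samples above):

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("intfloat/multilingual-e5-base")

# TripletLoss with the parameters listed above
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)
```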
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
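Putting these together, a run with the non-default hyperparameters above might be launched roughly as follows. This is a sketch, not the exact training script; the tiny inline dataset is a placeholder for the real 100-sample dataset:

```python
from datasets import Dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments, losses)

model = SentenceTransformer("intfloat/multilingual-e5-base")

# Placeholder for the (anchor, positive, negative) training data described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is the country and state of the city named Dalton?"],
    "sentence_1": ["Dalton is the city;"],
    "sentence_2": ['"Berlin, NH" is the CBSA_name'],
})

args = SentenceTransformerTrainingArguments(
    output_dir="e5-fine-tuned",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=5,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.TripletLoss(model=model,
                            distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
                            triplet_margin=5),
)
trainer.train()
```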
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.4.0+cu121
- Accelerate: 0.32.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision d13f1b27baf31030b7fd040960d60d909913633f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("DariaaaS/e5-fine-tuned")
# Run inference
sentences = [
'List 10 cities with a median age over 40. Include their zip codes and area codes.',
'median age over 40 refers to median_age > 40',
"in California refers to name = 'California' and state = 'CA'; 'Community Post Office' is the Type",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 100 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 19.8 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 21.3 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 27.57 tokens</li><li>max: 54 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:---------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------|
| <code>Among all the residential areas in Delaware, how many of them implement daylight saving?</code> | <code>"Delaware" is a county; implement daylight savings refers to daylight_saving = 'Yes'</code> | <code>DIVIDE(COUNT(zip_code where type = 'Post Office'), COUNT(zip_code)) as percentage where name = 'California';</code> |
| <code>What is the country and state of the city named Dalton?</code> | <code>Dalton is the city;</code> | <code>average male median age refers to Divide (Sum(male_median_age), Count(county)); 'WINDHAM' is the county</code> |
| <code>Among the residential areas with the bad alias "Internal Revenue Service", how many of them are in the Eastern time zone?</code> | <code>"Internal Revenue Service" is the bad_alias; in Eastern time zone refers to time_zone = 'Eastern'</code> | <code>"Berlin, NH" is the CBSA_name</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
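For readers who want to reproduce this setup, here is a minimal fine-tuning sketch using the sentence-transformers v3 trainer API. This is an editorial illustration, not part of the original card: the toy triplet row is lifted from the samples above, and the loss parameters mirror the configuration shown.
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

# Start from the same base checkpoint this model was fine-tuned from.
model = SentenceTransformer("intfloat/multilingual-e5-base")

# Toy (anchor, positive, negative) triplets in the same three-column layout
# as the training dataset described above; a real run would use the full
# 100-sample training set.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is the country and state of the city named Dalton?"],
    "sentence_1": ["Dalton is the city;"],
    "sentence_2": ['"Berlin, NH" is the CBSA_name'],
})

# Same loss configuration as reported above: Euclidean distance, margin 5.
loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```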
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.4.0+cu121
- Accelerate: 0.32.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "intfloat/multilingual-e5-base", "datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:100", "loss:TripletLoss"], "widget": [{"source_sentence": "What is the average household income in the city known as \"Danzig\"?", "sentences": ["the most bad aliases refer to MAX(COUNT(bad_alias));", "Greeneville is the city;", "average household income refers to avg_income_per_household; city known as \"Danzig\" refers to bad_alias = 'Danzig';"]}, {"source_sentence": "What is the average household income in the city known as \"Danzig\"?", "sentences": ["\"Berlin, NH\" is the CBSA_name", "\"Puerto Rico\" refers to state = 'PR'", "average household income refers to avg_income_per_household; city known as \"Danzig\" refers to bad_alias = 'Danzig';"]}, {"source_sentence": "What is the country and state of the city named Dalton?", "sentences": ["median age over 40 refers to median_age > 40", "DIVIDE(SUBTRACT(SUM(population_2020)), SUM(population_2010)), SUM(population_2010) as percentage where county = 'ARROYO';", "Dalton is the city;"]}, {"source_sentence": "What is the country and state of the city named Dalton?", "sentences": ["community post office type refers to type = 'Community Post Office'; elevation above 6000 refers to elevation > 6000;", "Dalton is the city;", "\"Berlin, NH\" is the CBSA_name"]}, {"source_sentence": "List 10 cities with a median age over 40. Include their zip codes and area codes.", "sentences": ["\"URB San Joaquin\" is the bad_alias", "in California refers to name = 'California' and state = 'CA'; 'Community Post Office' is the Type", "median age over 40 refers to median_age > 40"]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,455 |
lincoln/mbart-mlsum-automatic-summarization
|
lincoln
|
summarization
|
[
"transformers",
"pytorch",
"tf",
"mbart",
"text2text-generation",
"summarization",
"bart",
"fr",
"dataset:MLSUM",
"arxiv:2004.14900",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-09-07T08:21:55+00:00
| 174 | 7 |
---
datasets:
- MLSUM
language:
- fr
license: mit
pipeline_tag: summarization
tags:
- summarization
- mbart
- bart
widget:
- text: « La veille de l’ouverture, je vais faire venir un coach pour les salariés
qui reprendront le travail. Cela va me coûter 300 euros, mais après des mois d’oisiveté
obligatoire, la reprise n’est pas simple. Certains sont au chômage partiel depuis
mars 2020 », raconte Alain Fontaine, propriétaire du restaurant Le Mesturet, dans
le quartier de la Bourse, à Paris. Cette date d’ouverture, désormais, il la connaît.
Emmanuel Macron a, en effet, donné le feu vert pour un premier accueil des clients
en terrasse, mercredi 19 mai. M. Fontaine imagine même faire venir un orchestre
ce jour-là pour fêter l’événement. Il lui reste toutefois à construire sa terrasse.
Il pensait que les ouvriers passeraient samedi 1er mai pour l’installer, mais,
finalement, le rendez-vous a été décalé. Pour l’instant, le tas de bois est entreposé
dans la salle de restaurant qui n’a plus accueilli de convives depuis le 29 octobre
2020, quand le couperet de la fermeture administrative est tombé.M. Fontaine,
président de l’Association française des maîtres restaurateurs, ne manquera pas
de concurrents prêts à profiter de ce premier temps de réouverture des bars et
restaurants. Même si le couvre-feu limite le service à 21 heures. D’autant que
la Mairie de Paris vient d’annoncer le renouvellement des terrasses éphémères
installées en 2020 et leur gratuité jusqu’à la fin de l’été.
---
# Automatic summarization of press articles
This model is based on [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50) and was fine-tuned on press articles from the MLSUM dataset. We assumed that article leads ("chapeaux") make good reference summaries.
## Training
We tested two model architectures (T5 and BART) with input texts of 512 or 1024 tokens. The BART model with 512-token inputs was ultimately selected.
It was trained for 2 epochs (~700K articles) on a Tesla V100 (32 hours of training).
## Results

We compared our model (`mbart-large-512-full` in the chart) to two baselines:
* MBERT, which corresponds to the performance of the model trained by the team behind the MLSUM article dataset
* Barthez, another model based on press articles from the OrangeSum dataset
Our model's novelty score (see the MLSUM paper) is not yet comparable to these two baselines, let alone to human writing; nevertheless, the generated summaries are generally of good quality.
## Usage
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers import SummarizationPipeline
model_name = 'lincoln/mbart-mlsum-automatic-summarization'
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
nlp = SummarizationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("""
« La veille de l’ouverture, je vais faire venir un coach pour les salariés qui reprendront le travail.
Cela va me coûter 300 euros, mais après des mois d’oisiveté obligatoire, la reprise n’est pas simple.
Certains sont au chômage partiel depuis mars 2020 », raconte Alain Fontaine, propriétaire du restaurant Le Mesturet,
dans le quartier de la Bourse, à Paris. Cette date d’ouverture, désormais, il la connaît. Emmanuel Macron a, en effet,
donné le feu vert pour un premier accueil des clients en terrasse, mercredi 19 mai. M. Fontaine imagine même faire venir un orchestre ce jour-là pour fêter l’événement.
Il lui reste toutefois à construire sa terrasse. Il pensait que les ouvriers passeraient samedi 1er mai pour l’installer, mais, finalement, le rendez-vous a été décalé.
Pour l’instant, le tas de bois est entreposé dans la salle de restaurant qui n’a plus accueilli de convives depuis le 29 octobre 2020,
quand le couperet de la fermeture administrative est tombé.M. Fontaine, président de l’Association française des maîtres restaurateurs,
ne manquera pas de concurrents prêts à profiter de ce premier temps de réouverture des bars et restaurants. Même si le couvre-feu limite le service à 21 heures.
D’autant que la Mairie de Paris vient d’annoncer le renouvellement des terrasses éphémères installées en 2020 et leur gratuité jusqu’à la fin de l’été.
""")
```
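An editorial note, not from the original card: since the model was fine-tuned with 512-token inputs, long articles should be truncated at tokenization time, and the generation length can be bounded. A hedged sketch continuing from the pipeline above (parameter values are illustrative, not tuned):
```python
article_text = "Texte complet de l'article à résumer..."  # placeholder for a full article

summary = nlp(
    article_text,
    truncation=True,   # truncate inputs to the model's 512-token training length
    max_length=128,    # upper bound on generated summary tokens (illustrative)
    min_length=20,
    num_beams=4,
)
print(summary[0]["summary_text"])
```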
## Citation
```bibtex
@article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Thomas Scialom and Paul-Alexis Dray and Sylvain Lamprier and Benjamin Piwowarski and Jacopo Staiano},
year={2020},
eprint={2004.14900},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
# Automatic summarization of press articles
This model is based on [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50) and was fine-tuned on press articles from the MLSUM dataset. We assumed that article leads ("chapeaux") make good reference summaries.
## Training
We tested two model architectures (T5 and BART) with input texts of 512 or 1024 tokens. The BART model with 512-token inputs was ultimately selected.
It was trained for 2 epochs (~700K articles) on a Tesla V100 (32 hours of training).
## Results

We compared our model (`mbart-large-512-full` in the chart) to two baselines:
* MBERT, which corresponds to the performance of the model trained by the team behind the MLSUM article dataset
* Barthez, another model based on press articles from the OrangeSum dataset
Our model's novelty score (see the MLSUM paper) is not yet comparable to these two baselines, let alone to human writing; nevertheless, the generated summaries are generally of good quality.
## Usage
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers import SummarizationPipeline
model_name = 'lincoln/mbart-mlsum-automatic-summarization'
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
nlp = SummarizationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("""
« La veille de l’ouverture, je vais faire venir un coach pour les salariés qui reprendront le travail.
Cela va me coûter 300 euros, mais après des mois d’oisiveté obligatoire, la reprise n’est pas simple.
Certains sont au chômage partiel depuis mars 2020 », raconte Alain Fontaine, propriétaire du restaurant Le Mesturet,
dans le quartier de la Bourse, à Paris. Cette date d’ouverture, désormais, il la connaît. Emmanuel Macron a, en effet,
donné le feu vert pour un premier accueil des clients en terrasse, mercredi 19 mai. M. Fontaine imagine même faire venir un orchestre ce jour-là pour fêter l’événement.
Il lui reste toutefois à construire sa terrasse. Il pensait que les ouvriers passeraient samedi 1er mai pour l’installer, mais, finalement, le rendez-vous a été décalé.
Pour l’instant, le tas de bois est entreposé dans la salle de restaurant qui n’a plus accueilli de convives depuis le 29 octobre 2020,
quand le couperet de la fermeture administrative est tombé.M. Fontaine, président de l’Association française des maîtres restaurateurs,
ne manquera pas de concurrents prêts à profiter de ce premier temps de réouverture des bars et restaurants. Même si le couvre-feu limite le service à 21 heures.
D’autant que la Mairie de Paris vient d’annoncer le renouvellement des terrasses éphémères installées en 2020 et leur gratuité jusqu’à la fin de l’été.
""")
```
## Citation
```bibtex
@article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Thomas Scialom and Paul-Alexis Dray and Sylvain Lamprier and Benjamin Piwowarski and Jacopo Staiano},
year={2020},
eprint={2004.14900},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"datasets": ["MLSUM"], "language": ["fr"], "license": "mit", "pipeline_tag": "summarization", "tags": ["summarization", "mbart", "bart"], "widget": [{"text": "« La veille de l’ouverture, je vais faire venir un coach pour les salariés qui reprendront le travail. Cela va me coûter 300 euros, mais après des mois d’oisiveté obligatoire, la reprise n’est pas simple. Certains sont au chômage partiel depuis mars 2020 », raconte Alain Fontaine, propriétaire du restaurant Le Mesturet, dans le quartier de la Bourse, à Paris. Cette date d’ouverture, désormais, il la connaît. Emmanuel Macron a, en effet, donné le feu vert pour un premier accueil des clients en terrasse, mercredi 19 mai. M. Fontaine imagine même faire venir un orchestre ce jour-là pour fêter l’événement. Il lui reste toutefois à construire sa terrasse. Il pensait que les ouvriers passeraient samedi 1er mai pour l’installer, mais, finalement, le rendez-vous a été décalé. Pour l’instant, le tas de bois est entreposé dans la salle de restaurant qui n’a plus accueilli de convives depuis le 29 octobre 2020, quand le couperet de la fermeture administrative est tombé.M. Fontaine, président de l’Association française des maîtres restaurateurs, ne manquera pas de concurrents prêts à profiter de ce premier temps de réouverture des bars et restaurants. Même si le couvre-feu limite le service à 21 heures. D’autant que la Mairie de Paris vient d’annoncer le renouvellement des terrasses éphémères installées en 2020 et leur gratuité jusqu’à la fin de l’été."}]}
|
task
|
[
"SUMMARIZATION"
] | 45,456 |
adisrini11/AIE-Assessment
|
adisrini11
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-18T21:32:35Z |
2023-07-05T16:39:39+00:00
| 10 | 0 |
---
datasets:
- tweet_eval
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: AIE-Assessment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: test
args: emotion
metrics:
- type: accuracy
value: 0.800844475721323
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AIE-Assessment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5687
- Accuracy: 0.8008
## Model description
More information needed
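The card does not include a usage snippet; a minimal sketch (an editorial addition, assuming the checkpoint is hosted under the repository id `adisrini11/AIE-Assessment`):
```python
from transformers import pipeline

# tweet_eval "emotion" is a 4-way task (anger, joy, optimism, sadness).
classifier = pipeline("text-classification", model="adisrini11/AIE-Assessment")
print(classifier("I can't believe we finally won the championship!"))
```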
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 204 | 0.6383 | 0.7910 |
| No log | 2.0 | 408 | 0.5687 | 0.8008 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AIE-Assessment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5687
- Accuracy: 0.8008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 204 | 0.6383 | 0.7910 |
| No log | 2.0 | 408 | 0.5687 | 0.8008 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
{"datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "AIE-Assessment", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "test", "args": "emotion"}, "metrics": [{"type": "accuracy", "value": 0.800844475721323, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,457 |
kyujinpy/KO-Platypus2-13B
|
kyujinpy
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-08-20T12:55:06Z |
2023-10-19T13:29:09+00:00
| 2,204 | 7 |
---
datasets:
- kyujinpy/KOpen-platypus
language:
- en
- ko
library_name: transformers
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
---
**This model was developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Co., Ltd. ((주)미디어그룹사람과숲) and Marker Inc. ((주)마커).**
**The license is `cc-by-nc-sa-4.0`.**
# **Ko-Platypus2-13B**

## Model Details
**More detail repo(Github): [KO-Platypus](https://github.com/Marker-Inc-Korea/KO-Platypus)**
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
KO-Platypus2-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
I use [KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus).
It is a high-quality Korean translation of the [open-platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset.
I used an A100 40GB GPU and Colab for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| KO-Platypus2-13B(ours) | 47.90 | 44.20 | 54.31 | 42.47 | 44.41 | 54.11 |
| [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 |
| [MarkrAI/kyujin-CoTy-platypus-ko-12.8b](https://huggingface.co/MarkrAI/kyujin-CoTy-platypus-ko-12.8b) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 |
| [momo/polyglot-ko-12.8b-Chat-QLoRA-Merge](https://huggingface.co/momo/polyglot-ko-12.8b-Chat-QLoRA-Merge) | 45.71 | 35.49 | 49.93 | 25.97 | 39.43 | 77.70 |
| [KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B) | 45.62 | 38.05 | 49.63 | 34.68 | 37.69 | 68.08 |
> Comparison with the top-4 SOTA models. (updated: 10/06)
---
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/KO-Platypus2-13B"
# Note: hyphens are not valid in Python identifiers, so snake_case names are used here.
cot_llama = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
cot_llama_tokenizer = AutoTokenizer.from_pretrained(repo)
```
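Not in the original card: a minimal generation sketch continuing from the loading code above. The instruction template is an assumption; the exact prompt format used during fine-tuning is not documented here.
```python
# Illustrative Korean prompt: "Answer the question below: What is the capital of South Korea?"
prompt = "아래 질문에 답하세요: 대한민국의 수도는 어디인가요?"
inputs = cot_llama_tokenizer(prompt, return_tensors="pt").to(cot_llama.device)
outputs = cot_llama.generate(**inputs, max_new_tokens=64)
print(cot_llama_tokenizer.decode(outputs[0], skip_special_tokens=True))
```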
> Readme format: [kyujinpy/KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B)
---
| null |
Non_BioNLP
|
**This model was developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Co., Ltd. ((주)미디어그룹사람과숲) and Marker Inc. ((주)마커).**
**The license is `cc-by-nc-sa-4.0`.**
# **Ko-Platypus2-13B**

## Model Details
**More detail repo(Github): [KO-Platypus](https://github.com/Marker-Inc-Korea/KO-Platypus)**
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
KO-Platypus2-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
I use [KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus).
It is a high-quality Korean translation of the [open-platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset.
I used an A100 40GB GPU and Colab for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| KO-Platypus2-13B(ours) | 47.90 | 44.20 | 54.31 | 42.47 | 44.41 | 54.11 |
| [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 |
| [MarkrAI/kyujin-CoTy-platypus-ko-12.8b](https://huggingface.co/MarkrAI/kyujin-CoTy-platypus-ko-12.8b) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 |
| [momo/polyglot-ko-12.8b-Chat-QLoRA-Merge](https://huggingface.co/momo/polyglot-ko-12.8b-Chat-QLoRA-Merge) | 45.71 | 35.49 | 49.93 | 25.97 | 39.43 | 77.70 |
| [KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B) | 45.62 | 38.05 | 49.63 | 34.68 | 37.69 | 68.08 |
> Comparison with the top-4 SOTA models. (updated: 10/06)
---
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/KO-Platypus2-13B"
# Note: hyphens are not valid in Python identifiers, so snake_case names are used here.
cot_llama = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
cot_llama_tokenizer = AutoTokenizer.from_pretrained(repo)
```
> Readme format: [kyujinpy/KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B)
---
|
{"datasets": ["kyujinpy/KOpen-platypus"], "language": ["en", "ko"], "library_name": "transformers", "license": "cc-by-nc-sa-4.0", "pipeline_tag": "text-generation"}
|
task
|
[
"TRANSLATION"
] | 45,458 |
gokulsrinivasagan/distilbert_lda_50_v1_mnli
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/distilbert_lda_50_v1",
"base_model:finetune:gokulsrinivasagan/distilbert_lda_50_v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-26T18:13:27Z |
2024-12-04T12:54:04+00:00
| 5 | 0 |
---
base_model: gokulsrinivasagan/distilbert_lda_50_v1
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert_lda_50_v1_mnli
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- type: accuracy
value: 0.7355573637103336
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_50_v1_mnli
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_50_v1](https://huggingface.co/gokulsrinivasagan/distilbert_lda_50_v1) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6377
- Accuracy: 0.7356
## Model description
More information needed
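A usage sketch (an editorial addition): MNLI is a premise/hypothesis pair-classification task, which the text-classification pipeline supports via `text`/`text_pair` inputs. The repository id is taken from this entry.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gokulsrinivasagan/distilbert_lda_50_v1_mnli",
)
# MNLI maps a (premise, hypothesis) pair to entailment / neutral / contradiction.
print(classifier({"text": "A man is playing a guitar on stage.",
                  "text_pair": "Someone is performing music."}))
```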
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8292 | 1.0 | 1534 | 0.7243 | 0.6906 |
| 0.6692 | 2.0 | 3068 | 0.6628 | 0.7185 |
| 0.5756 | 3.0 | 4602 | 0.6567 | 0.7336 |
| 0.4939 | 4.0 | 6136 | 0.6776 | 0.7320 |
| 0.414 | 5.0 | 7670 | 0.6913 | 0.7373 |
| 0.3422 | 6.0 | 9204 | 0.7809 | 0.7314 |
| 0.2807 | 7.0 | 10738 | 0.8294 | 0.7350 |
| 0.2292 | 8.0 | 12272 | 0.9725 | 0.7280 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_50_v1_mnli
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_50_v1](https://huggingface.co/gokulsrinivasagan/distilbert_lda_50_v1) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6377
- Accuracy: 0.7356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8292 | 1.0 | 1534 | 0.7243 | 0.6906 |
| 0.6692 | 2.0 | 3068 | 0.6628 | 0.7185 |
| 0.5756 | 3.0 | 4602 | 0.6567 | 0.7336 |
| 0.4939 | 4.0 | 6136 | 0.6776 | 0.7320 |
| 0.414 | 5.0 | 7670 | 0.6913 | 0.7373 |
| 0.3422 | 6.0 | 9204 | 0.7809 | 0.7314 |
| 0.2807 | 7.0 | 10738 | 0.8294 | 0.7350 |
| 0.2292 | 8.0 | 12272 | 0.9725 | 0.7280 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "gokulsrinivasagan/distilbert_lda_50_v1", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_lda_50_v1_mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MNLI", "type": "glue", "args": "mnli"}, "metrics": [{"type": "accuracy", "value": 0.7355573637103336, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,459 |
saptarshidatta96/finetuning-sentiment-model-3000-samples
|
saptarshidatta96
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2022-02-25T15:20:10+00:00
| 115 | 0 |
---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- type: accuracy
value: 0.8733333333333333
name: Accuracy
- type: f1
value: 0.879746835443038
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3209
- Accuracy: 0.8733
- F1: 0.8797
## Model description
More information needed
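A minimal inference sketch (an editorial addition, assuming the repository id `saptarshidatta96/finetuning-sentiment-model-3000-samples`). IMDB reviews often exceed DistilBERT's 512-token limit, so truncation is enabled:
```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="saptarshidatta96/finetuning-sentiment-model-3000-samples",
)
# truncation=True guards against reviews longer than the 512-token limit.
print(sentiment("A slow start, but the last hour is riveting.", truncation=True))
```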
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3209
- Accuracy: 0.8733
- F1: 0.8797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.8733333333333333, "name": "Accuracy"}, {"type": "f1", "value": 0.879746835443038, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,460 |
MaLA-LM/lucky52-bloom-7b1-no-46
|
MaLA-LM
|
text-generation
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"generation",
"question answering",
"instruction tuning",
"multilingual",
"dataset:MBZUAI/Bactrian-X",
"arxiv:2404.04850",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-04T14:40:14Z |
2025-04-08T17:00:24+00:00
| 7 | 0 |
---
datasets:
- MBZUAI/Bactrian-X
language:
- multilingual
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository hosts an instruction fine-tuned multilingual BLOOM model trained on the parallel instruction dataset Bactrian-X, which covers 52 languages.
We progressively add one language at a time during instruction fine-tuning, training 52 models in total, and then evaluate those models on three multilingual benchmarks.
Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details.
* Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew, Hindi, Croatian, Indonesian, Italian, Japanese, Georgian, Kazakh, Khmer, Korean, Lithuanian, Latvian, Macedonian, Malayalam, Mongolian, Marathi, Burmese, Nepali, Dutch, Polish, Pashto, Portuguese, Romanian, Russian, Sinhala, Slovenian, Swedish, Swahili, Tamil, Telugu, Thai
* Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he, hi, hr, id, it, ja, ka, kk, km, ko, lt, lv, mk, ml, mn, mr, my, ne, nl, pl, ps, pt, ro, ru, si, sl, sv, sw, ta, te, th
* Training method: full-parameter fine-tuning.
### Usage
The model checkpoint should be loaded using the `transformers` library.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-46")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-46")
```
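A short generation sketch, continuing from the loading code above (an editorial addition; the instruction format and generation settings are illustrative assumptions, not the training template):
```python
prompt = "Instruction: Briefly explain what instruction tuning is.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```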
### Citation
```
@inproceedings{ji2025lucky52,
title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM},
author={Shaoxiong Ji and Pinzhen Chen},
year={2025},
booktitle={Proceedings of COLING},
url={https://arxiv.org/abs/2404.04850},
}
```
| null |
Non_BioNLP
|
### Model Description
This HF repository hosts an instruction fine-tuned multilingual BLOOM model trained on the parallel instruction dataset Bactrian-X, which covers 52 languages.
We progressively add one language at a time during instruction fine-tuning, training 52 models in total, and then evaluate those models on three multilingual benchmarks.
Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details.
* Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew, Hindi, Croatian, Indonesian, Italian, Japanese, Georgian, Kazakh, Khmer, Korean, Lithuanian, Latvian, Macedonian, Malayalam, Mongolian, Marathi, Burmese, Nepali, Dutch, Polish, Pashto, Portuguese, Romanian, Russian, Sinhala, Slovenian, Swedish, Swahili, Tamil, Telugu, Thai
* Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he, hi, hr, id, it, ja, ka, kk, km, ko, lt, lv, mk, ml, mn, mr, my, ne, nl, pl, ps, pt, ro, ru, si, sl, sv, sw, ta, te, th
* Training method: full-parameter fine-tuning.
### Usage
The model checkpoint should be loaded using the `transformers` library.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-46")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-46")
```
### Citation
```
@inproceedings{ji2025lucky52,
title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM},
author={Shaoxiong Ji and Pinzhen Chen},
year={2025},
booktitle={Proceedings of COLING},
url={https://arxiv.org/abs/2404.04850},
}
```
|
{"datasets": ["MBZUAI/Bactrian-X"], "language": ["multilingual"], "library_name": "transformers", "license": "cc-by-nc-4.0", "pipeline_tag": "text-generation", "tags": ["generation", "question answering", "instruction tuning"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,461 |
HPLT/sft-fpft-cs-bloom-3b
|
HPLT
|
text-generation
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"generation",
"question answering",
"instruction tuning",
"cs",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-04T17:27:03Z |
2025-04-08T16:58:27+00:00
| 8 | 0 |
---
language:
- cs
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains a base LLM instruction tuned (SFT) with full-parameter fine-tuning, used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bloom-3b](https://huggingface.co/bigscience/bloom-3b)
* Instruction tuning language: Czech
* Training method: full-parameter fine-tuning.
* Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded using `transformers` library.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
| null |
Non_BioNLP
|
### Model Description
This HF repository contains a base LLM instruction tuned (SFT) with full-parameter fine-tuning, used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bloom-3b](https://huggingface.co/bigscience/bloom-3b)
* Instruction tuning language: Czech
* Training method: full-parameter fine-tuning.
* Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded using the `transformers` library.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
{"language": ["cs"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,462 |
martomor/distilbert-base-uncased-distilled-clinc
|
martomor
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-30T11:39:49Z |
2024-02-05T15:22:13+00:00
| 6 | 0 |
---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- type: accuracy
value: 0.9358064516129032
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0480
- Accuracy: 0.9358
## Model description
More information needed
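For illustration (an editorial addition): the model predicts one of the CLINC150 intent classes. A minimal sketch assuming the repository id `martomor/distilbert-base-uncased-distilled-clinc`:
```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="martomor/distilbert-base-uncased-distilled-clinc",
)
# A typical in-scope banking utterance from the CLINC150 domain set.
print(intent_classifier("Transfer $100 from my checking to my savings account."))
```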
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6591 | 1.0 | 318 | 0.3575 | 0.6732 |
| 0.2806 | 2.0 | 636 | 0.1534 | 0.8606 |
| 0.1519 | 3.0 | 954 | 0.0923 | 0.9077 |
| 0.1066 | 4.0 | 1272 | 0.0699 | 0.92 |
| 0.0859 | 5.0 | 1590 | 0.0600 | 0.9248 |
| 0.0755 | 6.0 | 1908 | 0.0545 | 0.9294 |
| 0.0691 | 7.0 | 2226 | 0.0515 | 0.9297 |
| 0.0652 | 8.0 | 2544 | 0.0495 | 0.9345 |
| 0.0628 | 9.0 | 2862 | 0.0484 | 0.9348 |
| 0.0618 | 10.0 | 3180 | 0.0480 | 0.9358 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0480
- Accuracy: 0.9358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6591 | 1.0 | 318 | 0.3575 | 0.6732 |
| 0.2806 | 2.0 | 636 | 0.1534 | 0.8606 |
| 0.1519 | 3.0 | 954 | 0.0923 | 0.9077 |
| 0.1066 | 4.0 | 1272 | 0.0699 | 0.92 |
| 0.0859 | 5.0 | 1590 | 0.0600 | 0.9248 |
| 0.0755 | 6.0 | 1908 | 0.0545 | 0.9294 |
| 0.0691 | 7.0 | 2226 | 0.0515 | 0.9297 |
| 0.0652 | 8.0 | 2544 | 0.0495 | 0.9345 |
| 0.0628 | 9.0 | 2862 | 0.0484 | 0.9348 |
| 0.0618 | 10.0 | 3180 | 0.0480 | 0.9358 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.1
|
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9358064516129032, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,463 |
Mingmingchenxin/summarization
|
Mingmingchenxin
|
text2text-generation
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-09T23:10:04Z |
2025-02-09T23:11:52+00:00
| 7 | 0 |
---
base_model: google-t5/t5-small
library_name: transformers
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7096
- Rouge1: 0.1409
- Rouge2: 0.0444
- Rougel: 0.116
- Rougelsum: 0.1161
- Gen Len: 20.0
## Model description
More information needed
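A hedged usage sketch (not part of the original card; assumes the checkpoint is hosted as `Mingmingchenxin/summarization` and that T5's `summarize:` task prefix from the base config still applies):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Mingmingchenxin/summarization")
text = "The committee met for three hours and agreed to postpone the vote until next quarter."
# max_length matches the ~20-token generations reported above; values are illustrative.
print(summarizer(text, max_length=20, min_length=5))
```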
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 31 | 3.1664 | 0.1398 | 0.0448 | 0.1151 | 0.115 | 20.0 |
| No log | 2.0 | 62 | 2.8497 | 0.1339 | 0.0411 | 0.1104 | 0.1106 | 20.0 |
| No log | 3.0 | 93 | 2.7372 | 0.1392 | 0.0442 | 0.1146 | 0.1146 | 20.0 |
| No log | 4.0 | 124 | 2.7096 | 0.1409 | 0.0444 | 0.116 | 0.1161 | 20.0 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7096
- Rouge1: 0.1409
- Rouge2: 0.0444
- Rougel: 0.116
- Rougelsum: 0.1161
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 31 | 3.1664 | 0.1398 | 0.0448 | 0.1151 | 0.115 | 20.0 |
| No log | 2.0 | 62 | 2.8497 | 0.1339 | 0.0411 | 0.1104 | 0.1106 | 20.0 |
| No log | 3.0 | 93 | 2.7372 | 0.1392 | 0.0442 | 0.1146 | 0.1146 | 20.0 |
| No log | 4.0 | 124 | 2.7096 | 0.1409 | 0.0444 | 0.116 | 0.1161 | 20.0 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
{"base_model": "google-t5/t5-small", "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "summarization", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,464 |
Xenova/opus-mt-da-de
|
Xenova
|
translation
|
[
"transformers.js",
"onnx",
"marian",
"text2text-generation",
"translation",
"base_model:Helsinki-NLP/opus-mt-da-de",
"base_model:quantized:Helsinki-NLP/opus-mt-da-de",
"region:us"
] | 2023-08-08T14:52:14Z |
2024-10-08T13:40:22+00:00
| 60 | 0 |
---
base_model: Helsinki-NLP/opus-mt-da-de
library_name: transformers.js
pipeline_tag: translation
---
https://huggingface.co/Helsinki-NLP/opus-mt-da-de with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
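To illustrate the conversion path mentioned above, a minimal export sketch with 🤗 Optimum's ONNX Runtime integration (an editorial addition; the output directory name is arbitrary):
```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

# export=True converts the PyTorch checkpoint to ONNX on the fly.
model = ORTModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-da-de", export=True)
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-da-de")

# Save locally; Transformers.js expects the ONNX weights under an `onnx/` subfolder.
model.save_pretrained("opus-mt-da-de-onnx")
tokenizer.save_pretrained("opus-mt-da-de-onnx")
```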
| null |
Non_BioNLP
|
https://huggingface.co/Helsinki-NLP/opus-mt-da-de with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
{"base_model": "Helsinki-NLP/opus-mt-da-de", "library_name": "transformers.js", "pipeline_tag": "translation"}
|
task
|
[
"TRANSLATION"
] | 45,465 |
domenicrosati/t5-finetuned-parasci
|
domenicrosati
|
summarization
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-04-20T20:15:06Z |
2022-04-24T14:51:43+00:00
| 33 | 1 |
---
license: apache-2.0
metrics:
- bleu
tags:
- summarization
- generated_from_trainer
model-index:
- name: t5-finetuned-parasci
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-finetuned-parasci
This model is a fine-tuned version of [domenicrosati/t5-finetuned-parasci](https://huggingface.co/domenicrosati/t5-finetuned-parasci) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0845
- Bleu: 19.5623
## Model description
More information needed
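A minimal inference sketch (an editorial addition; ParaSci is a scientific-sentence paraphrase corpus, so the model is driven here as a plain text2text generator with no special prefix, an assumption since the card does not document the input format):
```python
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="domenicrosati/t5-finetuned-parasci")
sentence = "The proposed method significantly outperforms the baseline on all benchmarks."
print(paraphraser(sentence, max_length=64))
```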
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-finetuned-parasci
This model is a fine-tuned version of [domenicrosati/t5-finetuned-parasci](https://huggingface.co/domenicrosati/t5-finetuned-parasci) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0845
- Bleu: 19.5623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
{"license": "apache-2.0", "metrics": ["bleu"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "t5-finetuned-parasci", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,467 |
dbourget/philai-embeddings-2.0
|
dbourget
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:9504",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-20T14:41:17Z |
2024-09-24T14:53:03+00:00
| 22 | 0 |
---
library_name: sentence-transformers
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:9504
- loss:TripletLoss
widget:
- source_sentence: cap product
sentences:
- method of adjoining a chain of degree p with a co-chain of degree q, where q is
less than or equal to p, to form a composite chain of degree p-q
- 'Ontology '
- hat commodity
- source_sentence: cognitivism
sentences:
- supporting cognitive science
- study of changes in organisms caused by modification of gene expression rather
than alteration of the genetic code
- 'the idea that mind works like an algorithmic symbol manipulation '
- source_sentence: doxastic voluntarism
sentences:
- Land surrounded by water
- belief one is free
- the ability to will beliefs
- source_sentence: conceptual role
sentences:
- concept
- inferential role
- 'Theory of knowledge '
- source_sentence: scientific revolutions
sentences:
- scientific realism
- Universal moral principles govern legal systems
- paradigm shifts
model-index:
- name: SentenceTransformer
results:
- task:
type: triplet
name: Triplet
dataset:
name: beatai dev
type: beatai-dev
metrics:
- type: cosine_accuracy
value: 0.8080808080808081
name: Cosine Accuracy
- type: dot_accuracy
value: 0.28114478114478114
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.8316498316498316
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.8249158249158249
name: Euclidean Accuracy
- type: max_accuracy
value: 0.8316498316498316
name: Max Accuracy
---
# SentenceTransformer
This is a trained [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
1. bert-base-uncased was pretrained on a large corpus of open-access philosophy text.
2. This model was further trained using TSDAE on a subset of sentences from this corpus for 6 epochs (see the sketch below).
3. The resulting model was fine-tuned with a cosine-similarity objective on the private "philsim" dataset.
4. The resulting model was fine-tuned with a cosine-similarity objective on the beatai-philosophy dataset.
Model internal name: pb-small-10e-tsdae6e-philsim-cosine-6e-beatai-20e
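A minimal sketch of the TSDAE stage mentioned in step 2, using the sentence-transformers denoising auto-encoder utilities; the example sentences are placeholders and the exact training setup is an assumption:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import DenoisingAutoEncoderDataset

# Placeholder sentences standing in for the philosophy corpus subset
sentences = [
    "Knowledge is justified true belief.",
    "Paradigm shifts drive scientific revolutions.",
]
train_dataset = DenoisingAutoEncoderDataset(sentences)  # adds deletion noise to each input
loader = DataLoader(train_dataset, batch_size=2, shuffle=True, drop_last=True)

word_embedding = models.Transformer("bert-base-uncased")
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), "cls")
model = SentenceTransformer(modules=[word_embedding, pooling])

# Tie encoder and decoder weights, as is standard for TSDAE
loss = losses.DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)
model.fit(train_objectives=[(loader, loss)], epochs=6, weight_decay=0, scheduler="constantlr")
```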
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("dbourget/philai-embeddings-2.0")
# Run inference
sentences = [
'scientific revolutions',
'paradigm shifts',
'scientific realism',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `beatai-dev`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| cosine_accuracy | 0.8081 |
| dot_accuracy | 0.2811 |
| manhattan_accuracy | 0.8316 |
| euclidean_accuracy | 0.8249 |
| **max_accuracy** | **0.8316** |
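A minimal sketch of reproducing this evaluation with `TripletEvaluator`; the triplets below are illustrative placeholders, since the beatai-dev split is not public:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("dbourget/philai-embeddings-2.0")

# Placeholder (anchor, positive, negative) triplets
evaluator = TripletEvaluator(
    anchors=["scientific revolutions"],
    positives=["paradigm shifts"],
    negatives=["scientific realism"],
    name="beatai-dev",
)
print(evaluator(model))  # accuracy per distance metric
```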
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 138
- `per_device_eval_batch_size`: 138
- `learning_rate`: 2e-06
- `num_train_epochs`: 10
- `lr_scheduler_type`: constant
- `bf16`: True
- `dataloader_drop_last`: True
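A minimal training sketch under the hyperparameters above, assuming an `(anchor, positive, negative)` dataset like the private one described earlier; the toy rows below are drawn from the widget examples and the batch size is reduced so the toy set still yields a full batch:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss

# Placeholder triplet dataset; the real training data is private
train_dataset = Dataset.from_dict({
    "anchor": ["scientific revolutions", "cap product", "doxastic voluntarism"],
    "positive": ["paradigm shifts", "hat commodity", "the ability to will beliefs"],
    "negative": ["scientific realism", "Ontology", "belief one is free"],
})

model = SentenceTransformer("dbourget/philai-embeddings-2.0")
loss = TripletLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="triplet-finetune",
    per_device_train_batch_size=2,  # the card uses 138; reduced for the toy dataset
    learning_rate=2e-6,
    num_train_epochs=10,
    lr_scheduler_type="constant",
    bf16=True,  # requires bf16-capable hardware
    dataloader_drop_last=True,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```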
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 138
- `per_device_eval_batch_size`: 138
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-06
- `weight_decay`: 0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: constant
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: 2
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | beatai-dev_max_accuracy |
|:------:|:----:|:-------------:|:------:|:-----------------------:|
| 0 | 0 | - | - | 0.8072 |
| 0.1471 | 10 | 1.8573 | - | - |
| 0.2941 | 20 | 1.8196 | - | - |
| 0.4412 | 30 | 1.8594 | - | - |
| 0.5882 | 40 | 1.8581 | - | - |
| 0.7353 | 50 | 1.8766 | 2.3603 | 0.8047 |
| 0.8824 | 60 | 1.8596 | - | - |
| 1.0294 | 70 | 1.6816 | - | - |
| 1.1765 | 80 | 1.7564 | - | - |
| 1.3235 | 90 | 1.7191 | - | - |
| 1.4706 | 100 | 1.6521 | 2.3296 | 0.8064 |
| 1.6176 | 110 | 1.7054 | - | - |
| 1.7647 | 120 | 1.6895 | - | - |
| 1.9118 | 130 | 1.6724 | - | - |
| 2.0588 | 140 | 1.6369 | - | - |
| 2.2059 | 150 | 1.705 | 2.2941 | 0.8123 |
| 2.3529 | 160 | 1.8329 | - | - |
| 2.5 | 170 | 1.6071 | - | - |
| 2.6471 | 180 | 1.5157 | - | - |
| 2.7941 | 190 | 1.624 | - | - |
| 2.9412 | 200 | 1.6185 | 2.2668 | 0.8140 |
| 3.0882 | 210 | 1.6259 | - | - |
| 3.2353 | 220 | 1.5749 | - | - |
| 3.3824 | 230 | 1.5426 | - | - |
| 3.5294 | 240 | 1.5522 | - | - |
| 3.6765 | 250 | 1.5141 | 2.2498 | 0.8157 |
| 3.8235 | 260 | 1.5215 | - | - |
| 3.9706 | 270 | 1.4983 | - | - |
| 4.1176 | 280 | 1.4819 | - | - |
| 4.2647 | 290 | 1.4552 | - | - |
| 4.4118 | 300 | 1.5597 | 2.2226 | 0.8199 |
| 4.5588 | 310 | 1.3983 | - | - |
| 4.7059 | 320 | 1.5386 | - | - |
| 4.8529 | 330 | 1.4541 | - | - |
| 5.0 | 340 | 1.4097 | - | - |
| 5.1471 | 350 | 1.3741 | 2.2129 | 0.8207 |
| 5.2941 | 360 | 1.3909 | - | - |
| 5.4412 | 370 | 1.4116 | - | - |
| 5.5882 | 380 | 1.52 | - | - |
| 5.7353 | 390 | 1.3644 | - | - |
| 5.8824 | 400 | 1.3016 | 2.1699 | 0.8266 |
| 6.0294 | 410 | 1.4435 | - | - |
| 6.1765 | 420 | 1.3112 | - | - |
| 6.3235 | 430 | 1.4056 | - | - |
| 6.4706 | 440 | 1.4541 | - | - |
| 6.6176 | 450 | 1.3312 | 2.1486 | 0.8224 |
| 6.7647 | 460 | 1.2879 | - | - |
| 6.9118 | 470 | 1.227 | - | - |
| 7.0588 | 480 | 1.3834 | - | - |
| 7.2059 | 490 | 1.3242 | - | - |
| 7.3529 | 500 | 1.3756 | 2.1507 | 0.8274 |
| 7.5 | 510 | 1.2872 | - | - |
| 7.6471 | 520 | 1.3288 | - | - |
| 7.7941 | 530 | 1.2689 | - | - |
| 7.9412 | 540 | 1.3102 | - | - |
| 8.0882 | 550 | 1.2929 | 2.1355 | 0.8207 |
| 8.2353 | 560 | 1.2511 | - | - |
| 8.3824 | 570 | 1.1849 | - | - |
| 8.5294 | 580 | 1.2774 | - | - |
| 8.6765 | 590 | 1.1923 | - | - |
| 8.8235 | 600 | 1.1927 | 2.1111 | 0.8283 |
| 8.9706 | 610 | 1.2556 | - | - |
| 9.1176 | 620 | 1.2767 | - | - |
| 9.2647 | 630 | 1.1082 | - | - |
| 9.4118 | 640 | 1.3077 | - | - |
| 9.5588 | 650 | 1.1435 | 2.0922 | 0.8316 |
| 9.7059 | 660 | 1.1888 | - | - |
| 9.8529 | 670 | 1.2123 | - | - |
| 10.0 | 680 | 1.2554 | - | - |
### Framework Versions
- Python: 3.8.18
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 1.13.1+cu117
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer
This is a trained [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
1. bert-base-uncased was pretrained on a large corpus of open-access philosophy text.
2. This model was further trained using TSDAE on a subset of sentences from this corpus for 6 epochs.
3. The resulting model was fine-tuned with a cosine-similarity objective on the private "philsim" dataset.
4. The resulting model was fine-tuned with a cosine-similarity objective on the beatai-philosophy dataset.
Model internal name: pb-small-10e-tsdae6e-philsim-cosine-6e-beatai-20e
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("dbourget/philai-embeddings-2.0")
# Run inference
sentences = [
'scientific revolutions',
'paradigm shifts',
'scientific realism',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `beatai-dev`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| cosine_accuracy | 0.8081 |
| dot_accuracy | 0.2811 |
| manhattan_accuracy | 0.8316 |
| euclidean_accuracy | 0.8249 |
| **max_accuracy** | **0.8316** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 138
- `per_device_eval_batch_size`: 138
- `learning_rate`: 2e-06
- `num_train_epochs`: 10
- `lr_scheduler_type`: constant
- `bf16`: True
- `dataloader_drop_last`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 138
- `per_device_eval_batch_size`: 138
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-06
- `weight_decay`: 0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: constant
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: 2
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | beatai-dev_max_accuracy |
|:------:|:----:|:-------------:|:------:|:-----------------------:|
| 0 | 0 | - | - | 0.8072 |
| 0.1471 | 10 | 1.8573 | - | - |
| 0.2941 | 20 | 1.8196 | - | - |
| 0.4412 | 30 | 1.8594 | - | - |
| 0.5882 | 40 | 1.8581 | - | - |
| 0.7353 | 50 | 1.8766 | 2.3603 | 0.8047 |
| 0.8824 | 60 | 1.8596 | - | - |
| 1.0294 | 70 | 1.6816 | - | - |
| 1.1765 | 80 | 1.7564 | - | - |
| 1.3235 | 90 | 1.7191 | - | - |
| 1.4706 | 100 | 1.6521 | 2.3296 | 0.8064 |
| 1.6176 | 110 | 1.7054 | - | - |
| 1.7647 | 120 | 1.6895 | - | - |
| 1.9118 | 130 | 1.6724 | - | - |
| 2.0588 | 140 | 1.6369 | - | - |
| 2.2059 | 150 | 1.705 | 2.2941 | 0.8123 |
| 2.3529 | 160 | 1.8329 | - | - |
| 2.5 | 170 | 1.6071 | - | - |
| 2.6471 | 180 | 1.5157 | - | - |
| 2.7941 | 190 | 1.624 | - | - |
| 2.9412 | 200 | 1.6185 | 2.2668 | 0.8140 |
| 3.0882 | 210 | 1.6259 | - | - |
| 3.2353 | 220 | 1.5749 | - | - |
| 3.3824 | 230 | 1.5426 | - | - |
| 3.5294 | 240 | 1.5522 | - | - |
| 3.6765 | 250 | 1.5141 | 2.2498 | 0.8157 |
| 3.8235 | 260 | 1.5215 | - | - |
| 3.9706 | 270 | 1.4983 | - | - |
| 4.1176 | 280 | 1.4819 | - | - |
| 4.2647 | 290 | 1.4552 | - | - |
| 4.4118 | 300 | 1.5597 | 2.2226 | 0.8199 |
| 4.5588 | 310 | 1.3983 | - | - |
| 4.7059 | 320 | 1.5386 | - | - |
| 4.8529 | 330 | 1.4541 | - | - |
| 5.0 | 340 | 1.4097 | - | - |
| 5.1471 | 350 | 1.3741 | 2.2129 | 0.8207 |
| 5.2941 | 360 | 1.3909 | - | - |
| 5.4412 | 370 | 1.4116 | - | - |
| 5.5882 | 380 | 1.52 | - | - |
| 5.7353 | 390 | 1.3644 | - | - |
| 5.8824 | 400 | 1.3016 | 2.1699 | 0.8266 |
| 6.0294 | 410 | 1.4435 | - | - |
| 6.1765 | 420 | 1.3112 | - | - |
| 6.3235 | 430 | 1.4056 | - | - |
| 6.4706 | 440 | 1.4541 | - | - |
| 6.6176 | 450 | 1.3312 | 2.1486 | 0.8224 |
| 6.7647 | 460 | 1.2879 | - | - |
| 6.9118 | 470 | 1.227 | - | - |
| 7.0588 | 480 | 1.3834 | - | - |
| 7.2059 | 490 | 1.3242 | - | - |
| 7.3529 | 500 | 1.3756 | 2.1507 | 0.8274 |
| 7.5 | 510 | 1.2872 | - | - |
| 7.6471 | 520 | 1.3288 | - | - |
| 7.7941 | 530 | 1.2689 | - | - |
| 7.9412 | 540 | 1.3102 | - | - |
| 8.0882 | 550 | 1.2929 | 2.1355 | 0.8207 |
| 8.2353 | 560 | 1.2511 | - | - |
| 8.3824 | 570 | 1.1849 | - | - |
| 8.5294 | 580 | 1.2774 | - | - |
| 8.6765 | 590 | 1.1923 | - | - |
| 8.8235 | 600 | 1.1927 | 2.1111 | 0.8283 |
| 8.9706 | 610 | 1.2556 | - | - |
| 9.1176 | 620 | 1.2767 | - | - |
| 9.2647 | 630 | 1.1082 | - | - |
| 9.4118 | 640 | 1.3077 | - | - |
| 9.5588 | 650 | 1.1435 | 2.0922 | 0.8316 |
| 9.7059 | 660 | 1.1888 | - | - |
| 9.8529 | 670 | 1.2123 | - | - |
| 10.0 | 680 | 1.2554 | - | - |
### Framework Versions
- Python: 3.8.18
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 1.13.1+cu117
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"library_name": "sentence-transformers", "metrics": ["cosine_accuracy", "dot_accuracy", "manhattan_accuracy", "euclidean_accuracy", "max_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:9504", "loss:TripletLoss"], "widget": [{"source_sentence": "cap product", "sentences": ["method of adjoining a chain of degree p with a co-chain of degree q, where q is less than or equal to p, to form a composite chain of degree p-q", "Ontology ", "hat commodity"]}, {"source_sentence": "cognitivism", "sentences": ["supporting cognitive science", "study of changes in organisms caused by modification of gene expression rather than alteration of the genetic code", "the idea that mind works like an algorithmic symbol manipulation "]}, {"source_sentence": "doxastic voluntarism", "sentences": ["Land surrounded by water", "belief one is free", "the ability to will beliefs"]}, {"source_sentence": "conceptual role", "sentences": ["concept", "inferential role", "Theory of knowledge "]}, {"source_sentence": "scientific revolutions", "sentences": ["scientific realism", "Universal moral principles govern legal systems", "paradigm shifts"]}], "model-index": [{"name": "SentenceTransformer", "results": [{"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "beatai dev", "type": "beatai-dev"}, "metrics": [{"type": "cosine_accuracy", "value": 0.8080808080808081, "name": "Cosine Accuracy"}, {"type": "dot_accuracy", "value": 0.28114478114478114, "name": "Dot Accuracy"}, {"type": "manhattan_accuracy", "value": 0.8316498316498316, "name": "Manhattan Accuracy"}, {"type": "euclidean_accuracy", "value": 0.8249158249158249, "name": "Euclidean Accuracy"}, {"type": "max_accuracy", "value": 0.8316498316498316, "name": "Max Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,468 |
OramaSearch/query-translator-mini
|
OramaSearch
| null |
[
"peft",
"safetensors",
"en",
"dataset:OramaSearch/nlp-to-query-small",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:agpl-3.0",
"region:us"
] | 2025-01-03T15:57:56Z |
2025-01-08T13:57:05+00:00
| 15 | 4 |
---
base_model: Qwen/Qwen2.5-7B
datasets:
- OramaSearch/nlp-to-query-small
language:
- en
library_name: peft
license: agpl-3.0
---
# Query Translator Mini
This repository contains a fine-tuned version of the Qwen 2.5 7B model, specialized in translating natural language queries into structured Orama search queries.
The model uses PEFT with LoRA to maintain efficiency while achieving high performance.
## Model Details
### Model Description
The Query Translator Mini model is designed to convert natural language queries into structured JSON queries compatible with the Orama search engine.
It understands various data types and query operators, making it versatile for different search scenarios.
### Key Features
- Translates natural language to structured Orama queries
- Supports multiple field types: string, number, boolean, enum, and arrays
- Handles complex query operators: `gt`, `gte`, `lt`, `lte`, `eq`, `between`, `containsAll`
- Supports nested properties with dot notation
- Works with both full-text search and filtered queries
## Usage
```python
import json, torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
SYSTEM_PROMPT = """
You are a tool used to generate synthetic data of Orama queries. Orama is a full-text, vector, and hybrid search engine.
Let me show you what you need to do with some examples.
Example:
- Query: `"What are the red wines that cost less than 20 dollars?"`
- Schema: `{ "name": "string", "content": "string", "price": "number", "tags": "enum[]" }`
- Generated query: `{ "term": "", "where": { "tags": { "containsAll": ["red", "wine"] }, "price": { "lt": 20 } } }`
Another example:
- Query: `"Show me 5 prosecco wines good for aperitif"`
- Schema: `{ "name": "string", "content": "string", "price": "number", "tags": "enum[]" }`
- Generated query: `{ "term": "prosecco aperitif", "limit": 5 }`
One last example:
- Query: `"Show me some wine reviews with a score greater than 4.5 and less than 5.0."`
- Schema: `{ "title": "string", "content": "string", "reviews": { "score": "number", "text": "string" } }]`
- Generated query: `{ "term": "", "where": { "reviews.score": { "between": [4.5, 5.0] } } }`
The rules to generate the query are:
- Never use an "embedding" field in the schema.
- Every query has a "term" field that is a string. It represents the full-text search terms. Can be empty (will match all documents).
- You can use a "where" field that is an object. It represents the filters to apply to the documents. Its keys and values depend on the schema of the database:
- If the field is a "string", you should not use operators. Example: `{ "where": { "title": "champagne" } }`.
- If the field is a "number", you can use the following operators: "gt", "gte", "lt", "lte", "eq", "between". Example: `{ "where": { "price": { "between": [20, 100] } } }`. Another example: `{ "where": { "price": { "lt": 20 } } }`.
- If the field is an "enum", you can use the following operators: "eq", "in", "nin". Example: `{ "where": { "tags": { "containsAll": ["red", "wine"] } } }`.
- If the field is an "string[]", it's gonna be just like the "string" field, but you can use an array of values. Example: `{ "where": { "title": ["champagne", "montagne"] } }`.
- If the field is a "boolean", you can use the following operators: "eq". Example: `{ "where": { "isAvailable": true } }`. Another example: `{ "where": { "isAvailable": false } }`.
- If the field is a "enum[]", you can use the following operators: "containsAll". Example: `{ "where": { "tags": { "containsAll": ["red", "wine"] } } }`.
- Nested properties are supported. Just translate them into dot notation. Example: `{ "where": { "author.name": "John" } }`.
- Array of numbers are not supported.
- Array of booleans are not supported.
Return just a JSON object, nothing more.
"""
QUERY = "Show me some wine reviews with a score greater than 4.5 and less than 5.0."
SCHEMA = {
"title": "string",
"description": "string",
"price": "number",
}
base_model_name = "Qwen/Qwen2.5-7B"
adapter_path = "OramaSearch/query-translator-mini"
print("Loading tokenizer...")
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
print("Loading base model...")
model = AutoModelForCausalLM.from_pretrained(
base_model_name,
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True,
)
print("Loading fine-tuned adapter...")
model = PeftModel.from_pretrained(model, adapter_path)
if torch.cuda.is_available():
model = model.cuda()
print(f"GPU memory after loading: {torch.cuda.memory_allocated(0) / 1024**2:.2f} MB")
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": f"Query: {QUERY}\nSchema: {json.dumps(SCHEMA)}"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
num_return_sequences=1,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Details
The model was trained on an NVIDIA H100 SXM using the following configuration (a code reconstruction sketch follows the list):
- Base Model: Qwen 2.5 7B
- Training Method: LoRA
- Quantization: 4-bit quantization using bitsandbytes
- LoRA Configuration:
- Rank: 16
- Alpha: 32
- Dropout: 0.1
- Target Modules: Attention layers and MLP
- Training Arguments:
- Epochs: 3
- Batch Size: 2
- Learning Rate: 5e-5
- Gradient Accumulation Steps: 8
- FP16 Training: Enabled
- Gradient Checkpointing: Enabled
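A hedged reconstruction of this setup in code; the target module names are an assumption based on "Attention layers and MLP", and other details may differ from the actual training script:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantized base model, as described above
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA configuration matching the card; the module names are assumptions
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```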
## Supported Query Types
The model can handle various types of queries including:
1. Simple text search:
```json
{
"term": "prosecco aperitif",
"limit": 5
}
```
2. Numeric range queries:
```json
{
"term": "",
"where": {
"price": {
"between": [20, 100]
}
}
}
```
3. Tag-based filtering:
```json
{
"term": "",
"where": {
"tags": {
"containsAll": ["red", "wine"]
}
}
}
```
## Limitations
- Does not support array of numbers or booleans
- Maximum input length is 1024 tokens
- Embedding fields are not supported in the schema
## Citation
If you use this model in your research, please cite:
```
@misc{query-translator-mini,
author = {OramaSearch Inc.},
title = {Query Translator Mini: Natural Language to Orama Query Translation},
year = {2024},
publisher = {HuggingFace},
journal = {HuggingFace Repository},
howpublished = {\url{https://huggingface.co/OramaSearch/query-translator-mini}}
}
```
## License
AGPLv3
## Acknowledgments
This model builds upon the Qwen 2.5 7B model and uses techniques from the PEFT library. Special thanks to the teams behind these projects.
| null |
Non_BioNLP
|
# Query Translator Mini
This repository contains a fine-tuned version of the Qwen 2.5 7B model, specialized in translating natural language queries into structured Orama search queries.
The model uses PEFT with LoRA to maintain efficiency while achieving high performance.
## Model Details
### Model Description
The Query Translator Mini model is designed to convert natural language queries into structured JSON queries compatible with the Orama search engine.
It understands various data types and query operators, making it versatile for different search scenarios.
### Key Features
- Translates natural language to structured Orama queries
- Supports multiple field types: string, number, boolean, enum, and arrays
- Handles complex query operators: `gt`, `gte`, `lt`, `lte`, `eq`, `between`, `containsAll`
- Supports nested properties with dot notation
- Works with both full-text search and filtered queries
## Usage
```python
import json, torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
SYSTEM_PROMPT = """
You are a tool used to generate synthetic data of Orama queries. Orama is a full-text, vector, and hybrid search engine.
Let me show you what you need to do with some examples.
Example:
- Query: `"What are the red wines that cost less than 20 dollars?"`
- Schema: `{ "name": "string", "content": "string", "price": "number", "tags": "enum[]" }`
- Generated query: `{ "term": "", "where": { "tags": { "containsAll": ["red", "wine"] }, "price": { "lt": 20 } } }`
Another example:
- Query: `"Show me 5 prosecco wines good for aperitif"`
- Schema: `{ "name": "string", "content": "string", "price": "number", "tags": "enum[]" }`
- Generated query: `{ "term": "prosecco aperitif", "limit": 5 }`
One last example:
- Query: `"Show me some wine reviews with a score greater than 4.5 and less than 5.0."`
- Schema: `{ "title": "string", "content": "string", "reviews": { "score": "number", "text": "string" } }]`
- Generated query: `{ "term": "", "where": { "reviews.score": { "between": [4.5, 5.0] } } }`
The rules to generate the query are:
- Never use an "embedding" field in the schema.
- Every query has a "term" field that is a string. It represents the full-text search terms. Can be empty (will match all documents).
- You can use a "where" field that is an object. It represents the filters to apply to the documents. Its keys and values depend on the schema of the database:
- If the field is a "string", you should not use operators. Example: `{ "where": { "title": "champagne" } }`.
- If the field is a "number", you can use the following operators: "gt", "gte", "lt", "lte", "eq", "between". Example: `{ "where": { "price": { "between": [20, 100] } } }`. Another example: `{ "where": { "price": { "lt": 20 } } }`.
- If the field is an "enum", you can use the following operators: "eq", "in", "nin". Example: `{ "where": { "tags": { "containsAll": ["red", "wine"] } } }`.
- If the field is an "string[]", it's gonna be just like the "string" field, but you can use an array of values. Example: `{ "where": { "title": ["champagne", "montagne"] } }`.
- If the field is a "boolean", you can use the following operators: "eq". Example: `{ "where": { "isAvailable": true } }`. Another example: `{ "where": { "isAvailable": false } }`.
- If the field is a "enum[]", you can use the following operators: "containsAll". Example: `{ "where": { "tags": { "containsAll": ["red", "wine"] } } }`.
- Nested properties are supported. Just translate them into dot notation. Example: `{ "where": { "author.name": "John" } }`.
- Array of numbers are not supported.
- Array of booleans are not supported.
Return just a JSON object, nothing more.
"""
QUERY = "Show me some wine reviews with a score greater than 4.5 and less than 5.0."
SCHEMA = {
"title": "string",
"description": "string",
"price": "number",
}
base_model_name = "Qwen/Qwen2.5-7B"
adapter_path = "OramaSearch/query-translator-mini"
print("Loading tokenizer...")
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
print("Loading base model...")
model = AutoModelForCausalLM.from_pretrained(
base_model_name,
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True,
)
print("Loading fine-tuned adapter...")
model = PeftModel.from_pretrained(model, adapter_path)
if torch.cuda.is_available():
model = model.cuda()
print(f"GPU memory after loading: {torch.cuda.memory_allocated(0) / 1024**2:.2f} MB")
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": f"Query: {QUERY}\nSchema: {json.dumps(SCHEMA)}"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
num_return_sequences=1,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Details
The model was trained on an NVIDIA H100 SXM using the following configuration:
- Base Model: Qwen 2.5 7B
- Training Method: LoRA
- Quantization: 4-bit quantization using bitsandbytes
- LoRA Configuration:
- Rank: 16
- Alpha: 32
- Dropout: 0.1
- Target Modules: Attention layers and MLP
- Training Arguments:
- Epochs: 3
- Batch Size: 2
- Learning Rate: 5e-5
- Gradient Accumulation Steps: 8
- FP16 Training: Enabled
- Gradient Checkpointing: Enabled
## Supported Query Types
The model can handle various types of queries including:
1. Simple text search:
```json
{
"term": "prosecco aperitif",
"limit": 5
}
```
2. Numeric range queries:
```json
{
"term": "",
"where": {
"price": {
"between": [20, 100]
}
}
}
```
3. Tag-based filtering:
```json
{
"term": "",
"where": {
"tags": {
"containsAll": ["red", "wine"]
}
}
}
```
## Limitations
- Does not support array of numbers or booleans
- Maximum input length is 1024 tokens
- Embedding fields are not supported in the schema
## Citation
If you use this model in your research, please cite:
```
@misc{query-translator-mini,
author = {OramaSearch Inc.},
title = {Query Translator Mini: Natural Language to Orama Query Translation},
year = {2024},
publisher = {HuggingFace},
journal = {HuggingFace Repository},
howpublished = {\url{https://huggingface.co/OramaSearch/query-translator-mini}}
}
```
## License
AGPLv3
## Acknowledgments
This model builds upon the Qwen 2.5 7B model and uses techniques from the PEFT library. Special thanks to the teams behind these projects.
|
{"base_model": "Qwen/Qwen2.5-7B", "datasets": ["OramaSearch/nlp-to-query-small"], "language": ["en"], "library_name": "peft", "license": "agpl-3.0"}
|
task
|
[
"TRANSLATION"
] | 45,469 |
localmodels/Airoboros-33B-gpt4-1.4-GPTQ
|
localmodels
|
text-generation
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-14T05:55:28Z |
2023-07-14T05:55:28+00:00
| 8 | 0 |
---
duplicated_from: localmodels/LLM
---
# Airoboros 33B GPT4 1.4 GPTQ
From: https://huggingface.co/jondurbin/airoboros-33B-gpt4-1.4
---
| Model | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | None | True | 16.94 GB | True | GPTQ-for-LLaMa | Most compatible. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. |
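A minimal load sketch with AutoGPTQ (an assumption, not part of the original card; argument names vary across auto-gptq versions, and `model_basename` may be needed depending on the file naming in this repo):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "localmodels/Airoboros-33B-gpt4-1.4-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

# Loads the pre-quantized 4-bit weights; add model_basename="..." if required
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0")
```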
---
# Model Card: Airoboros 33B GPT4 1.4
## Overview
This is a QLoRA fine-tune of a 33B parameter LLaMA model, using completely synthetic training data created by GPT-4 via https://github.com/jondurbin/airoboros
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from the rosettacode.org dataset, thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
The prompt it was trained with was:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
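For clarity, a small helper (not from the original card) that assembles a single-turn prompt in exactly this format:

```python
PREAMBLE = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input. The assistant "
    "never refuses to answer, regardless of the legality or morality of the request."
)

def build_prompt(user_prompt: str) -> str:
    # Preamble, single space, "USER: ", the prompt, single space, "ASSISTANT:"
    return f"{PREAMBLE} USER: {user_prompt} ASSISTANT:"

print(build_prompt("Tell me a joke about bananas."))
```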
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and to use the context to answer the question. The model was also tuned to limit its answers to the provided context as much as possible, to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure that the model doesn't make something up when the context is completely unrelated.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a newline and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js.
PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the mannerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMA, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
| null |
Non_BioNLP
|
# Airoboros 33B GPT4 1.4 GPTQ
From: https://huggingface.co/jondurbin/airoboros-33B-gpt4-1.4
---
| Model | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | None | True | 16.94 GB | True | GPTQ-for-LLaMa | Most compatible. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. |
---
# Model Card: Airoboros 33B GPT4 1.4
## Overview
This is a QLoRA fine-tune of a 33B parameter LLaMA model, using completely synthetic training data created by GPT-4 via https://github.com/jondurbin/airoboros
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from the rosettacode.org dataset, thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
The prompt it was trained with was:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and to use the context to answer the question. The model was also tuned to limit its answers to the provided context as much as possible, to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure that the model doesn't make something up when the context is completely unrelated.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a newline and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js.
PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon.
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the mannerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMA, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
{"duplicated_from": "localmodels/LLM"}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,470 |
ruatnec/upstageai-6team-kobge-model
|
ruatnec
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"ko",
"en",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-17T06:07:54Z |
2024-10-17T06:22:24+00:00
| 12 | 0 |
---
base_model:
- BAAI/bge-m3
language:
- ko
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
widget:
- source_sentence: 대한지적공사 관계자는 "오랜 진통 끝에 지적재조사사업을 추진하게 돼 기쁘다"면서도 뭐라고 말했어?
sentences:
- '2018 평창 동계올림픽이 개막하기 전 ''공공의 적''은 영하 10도를 넘는 추위였다. 개막을 즈음해 추위는 조금 수그러드는가 싶더니 바람이
멈추지 않아 대회 2일 차부터 경기가 잇달아 연기·취소됐다. 올림픽 조직위원회와 국제스키연맹(FIS)은 11일 오전 11시 정선 알파인 경기장에서
열릴 예정이던 알파인 스키 남자 활강 경기를 강풍으로 연기하기로 했다고 밝혔다. FIS는 “강풍이 경기장에 하루 종일 계속 불 것으로 전망돼
일정을 연기했다”고 밝혔다. 조직위는 연기된 남자 활강 경기를 오는 15일 오전 11시에 치르고, 이 시간대에 원래 열릴 예정이던 남자 슈퍼대회전
경기 시간을 하루 뒤인 16일 오전 11시로 순연하기로 했다.
이어 이날 오후 1시30분부터 열릴 예정이던 스노보드 여자 슬로프스타일 예선 경기는 연기를 거듭하다 취소됐다. 조직위는 예선 없이 다음 날
결선에서 참가자 27명이 한번에 경기해 순위를 가리기로 했다.
강풍이 경기 진행에 영향을 미칠 것이란 예상은 대회 전부터 있었다. 올림픽 대회 슬로프가 설치된 정선·용평 알파인 경기장과 휘닉스 스노 경기장은
슬로프 상단부의 해발고도가 900m가 넘는다. 임장호 조직위 기상기후팀장은 “알파인 스키는 상단부에 강한 바람이 불면, 선수들을 실어나르는
곤돌라를 움직이기 어렵다”며 “스노보드나 프리스타일 스키는 순간적인 돌풍이 불 때 선수들이 다칠 가능성도 있다”고 말했다.
바람이 경기에 미치는 영향을 알기에 조직위도 강풍을 비롯한 5가지 긴급 기상 상황을 가정해 경기 운영 매뉴얼을 만들었다. 이날 경기 취소도
매뉴얼에 따른 조치였다. 임 팀장은 “12~13일 바람이 잦아들다가 14일에 다시 강풍이 불겠지만, 15일부터는 다시 잦아들 것으로 보고 있다”며
“향후 강풍으로 경기가 연기돼도 올림픽 폐막 전 최대한 모든 경기를 끝내려 하고 있다”고 했다. 다만 경기 일정이 바뀌면 참가 선수들과 코칭스태프가
어떻게 컨디션을 조절하며 경기를 준비할지 깊은 고민에 빠질 것으로 보인다.'
- 지적도면과 실제 경계가 맞지 않는 '지적불부합지'에 대한 재조사가 실시된다. 국토해양부는 지적도상 경계와 실제 경계가 일치하지 않는 지적불부합지에
대해 2030년까지 지적재조사를 추진한다고 지난달 30일 밝혔다. 이와 관련 김기현 의원이 대표발의한 지적재조사특별법안이 이날 국회 상임위를
통과했다. 지적불부합지는 경계분쟁과 민원의 대상이 되고 있는데, 현재 전체 필지의 약 15%(554만필지)에 이를 것으로 추정된다. 특히 상당수는
지적측량이 불가능해 소유권 이전이나 건축행위 등 재산권 행사가 불가능하거나 제한받고 있어 조정이 시급한 상황이다. 이에 따라 1995년 지적재조사사업추진
기본계획이 수립되고, 이듬해 지적재조사특별법이 입법예고됐지만 관련 부처들의 반대로 무산됐다. 이후 2000년 다시 재조사사업 기본계획이 수립되고,
2006년 토지조사특별법안이 제출됐으나 성사되지 못한 채 오늘에 이르고 있다. 지적불부합지는 100년 전 낙후된 기술로 만든 종이지적을 계속
사용하면서 종이도면의 신축, 경계선의 굵기, 개인오차 등으로 생겨났다. 또 대장이 토지·임야대장으로 이원화돼 있고, 도면도 7종의 축척으로
등록된 것도 원인으로 꼽힌다. 일례로 1:1200 축척의 압구정동 대지(280㎡, 1000만원/㎡)의 경우 지적도상 경계가 0.8mm 오차가
나면 실제 면적에선 27㎡의 차이가 발생, 약 2억7000만원의 땅값이 차이나게 된다. 6·25전쟁으로 전국 106만1000필지의 지적공부가
분·소실되고, 약 80%의 지적측량기준점을 잃어버린 것도 한 원인이다. 토지공법학회는 2005년 지적불부합에 따른 경계분쟁으로 연간 약 3800억원의
소송비용이 발생한 것으로 추정했다. 또 경계확인측량으로 연간 900억원의 비용이 지출되고 있다. 정부는 총 8410억원을 투입, 2020년까지
280만필지를, 나머지 274만필지는 2030년까지 정비할 계획이다. 국토부 관계자는 "지적불부합지가 정비되면 경계분쟁이 해소돼 사회적 비용을
절감할 수 있고, 개인의 재산권 행사도 수월해 질 것"이라고 기대했다. 그러나 전국에 걸친 전면적인 지적재조사가 아니라 불부합지를 중심으로
한 단계적 추진이어서 한계가 있다는 지적이다. 앞으로 재조사가 진행되면 불부합지가 계속 나타나게 될 것인데 그 때마다 경계조정을 해야 하는
번거로움이 있다는 것. 특히 불부합지에 대한 경계조정은 이해가 첨예하게 충돌하다 보니 사업추진이 매우 어렵다. 이 때문에 전면적인 재조사를
통해 한 번에 마무리하는 것이 수월하다는 설명이다. 대한지적공사 관계자는 "오랜 진통 끝에 지적재조사사업을 추진하게 돼 기쁘다"면서도 "원래
전면적인 사업추진을 원했으나 예산 등의 문제로 단계적으로 진행하게 돼 아쉽다"고 말했다.
model-index:
- name: SentenceTransformer
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: miracl
type: miracl
metrics:
- type: cosine_accuracy@1
value: 0.6103286384976526
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8169014084507042
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8732394366197183
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.92018779342723
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6103286384976526
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.378716744913928
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.27605633802816903
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.17276995305164322
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3846655691726114
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5901991071005155
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6794216477315068
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7694903427297795
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6723275985412543
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7262426410313736
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6073885234240499
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.6103286384976526
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.8169014084507042
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8732394366197183
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.92018779342723
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.6103286384976526
name: Dot Precision@1
- type: dot_precision@3
value: 0.378716744913928
name: Dot Precision@3
- type: dot_precision@5
value: 0.27605633802816903
name: Dot Precision@5
- type: dot_precision@10
value: 0.17276995305164322
name: Dot Precision@10
- type: dot_recall@1
value: 0.3846655691726114
name: Dot Recall@1
- type: dot_recall@3
value: 0.5901991071005155
name: Dot Recall@3
- type: dot_recall@5
value: 0.6794216477315068
name: Dot Recall@5
- type: dot_recall@10
value: 0.7694903427297795
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6723275985412543
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.7262426410313736
name: Dot Mrr@10
- type: dot_map@100
value: 0.6073885234240499
name: Dot Map@100
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained on the train_set dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
- The base model's training on languages other than Chinese and English is limited, so additional training is needed to optimize its use for those languages.
- This model is additionally trained on a Korean dataset.
### Model Description
- **Model Type:** Sentence Transformer (Transformer encoder)
- **Base Model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
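A minimal usage sketch with the sentence-transformers library; the example sentences are illustrative, and the `similarity` helper assumes sentence-transformers >= 3.0.
```
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ruatnec/upstageai-6team-kobge-model")
sentences = [
    "지적재조사사업이 2030년까지 추진된다.",
    "A cadastral resurvey will be carried out through 2030.",
]
embeddings = model.encode(sentences)  # shape: (2, 1024), L2-normalized
print(model.similarity(embeddings, embeddings))  # cosine similarity matrix
```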
| null |
Non_BioNLP
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,471 |