id
stringlengths 6
113
| author
stringlengths 2
36
| task_category
stringclasses 42
values | tags
listlengths 1
4.05k
| created_time
timestamp[ns, tz=UTC]date 2022-03-02 23:29:04
2025-04-10 08:38:38
| last_modified
stringdate 2020-05-14 13:13:12
2025-04-19 04:15:39
| downloads
int64 0
118M
| likes
int64 0
4.86k
| README
stringlengths 30
1.01M
| matched_bigbio_names
listlengths 1
8
⌀ | is_bionlp
stringclasses 3
values | model_cards
stringlengths 0
1M
| metadata
stringlengths 2
698k
| source
stringclasses 2
values | matched_task
listlengths 1
10
⌀ | __index_level_0__
int64 0
46.9k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
LoneStriker/airoboros-c34b-3.1.2-5.0bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:jondurbin/airoboros-3.1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-10-20T18:13:28Z |
2023-10-22T14:07:11+00:00
| 3 | 0 |
---
datasets:
- jondurbin/airoboros-3.1
license: llama2
---
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
This models uses llama-2 chat format, rather than vicuna style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3,0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context)
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
| null |
Non_BioNLP
|
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
This models uses llama-2 chat format, rather than vicuna style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3,0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context)
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
{"datasets": ["jondurbin/airoboros-3.1"], "license": "llama2"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 46,727 |
briefai/LongShort-Mistral-7B
|
briefai
|
text-generation
|
[
"safetensors",
"pytorch",
"mistral",
"Gen-AI",
"Finance",
"KPI Extraction",
"text-generation",
"conversational",
"en",
"dataset:briefai/LongShort-Dataset",
"license:apache-2.0",
"region:us"
] | 2023-11-29T18:42:40Z |
2024-01-18T21:50:03+00:00
| 0 | 0 |
---
datasets:
- briefai/LongShort-Dataset
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- pytorch
- mistral
- Gen-AI
- Finance
- KPI Extraction
---
# LongShort-Mistral-7B
### Model Description
LongShort-Mistral-7B is a large language model fine-tuned on earnings call documents to extract financial KPIs from the earnings call documents. It is based on the Mistral-7B Instruct Architecture.
- Model creator: [Brief AI](https://huggingface.co/briefai)
- Original model: [Mistral-7B-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
### Dataset Description
- Data Source: Factiva
- Data Description: 28K+ Earnings Call Documents
- Data Scope: 1K+ public companies
- Fine Tuning Data: Collection of 60K+ samples.
## Prompt template: LongShort-Mistral-7B
```
[INST]Given the context, answer the question.
### Question:
Extract all the finance-based performance indicators and evaluation metrics.
### Context:
{context}
### Answer:
[/INST]
```
## Basics
*This section provides information about the model type, version, license, funders, release date, developers, and contact information.*
*It is useful for anyone who wants to reference the model.*
**Developed by:** [Brief AI Team](https://huggingface.co/briefai)
**Model Type:** Transformer-based Large Language Model
**Version:** 1.0.0
**Languages:** English
**License:** Apache 2.0
**Release Date Estimate:** Wednesday, 29.November.2023
**Send Questions to:** [email protected]
**Cite as:** Brief AI LongShort Language Model
**Funded by:** UChicago Data Science Institute
**Mentored by:** Nick Kadochnikov
## Technical Specifications
*This section includes details about the model objective and architecture, and the compute infrastructure.*
*It is useful for people interested in model development.*
Please see [the LongShort training README](https://github.com/brief-ai-uchicago/LongShort-Dataset) for full details on replicating training.
### Model Architecture and Objective
* Modified from Mistral-7B-Instruct
**Objective:** Financial KPI extraction from earnings call documents.
### Hardware and Software - Compute Infrastructure
* 4 NVIDIA L4 GPUs & 48 vCPUs
* Environment: PyTorch (pytorch-2.0 w/ CUDA-11.8; see [Github link](https://github.com/pytorch/pytorch))
* CPU: GCP G2 Standard 48 (Platform: Intel Cascade Lake) (Accelerator Optimized)
* CPU memory: 192GB RAM
* GPU memory: 30GB per GPU
## Training
*This section provides information about the training.*
*It is useful for people who want to learn more about the model inputs and training footprint.*
The following bits and bytes quantization config was used during training:
* quant_method: bitsandbytes
* load_in_8bit: False
* load_in_4bit: True
* llm_int8_threshold: 6.0
* llm_int8_skip_modules: None
* llm_int8_enable_fp32_cpu_offload: False
* llm_int8_has_fp16_weight: False
* bnb_4bit_quant_type: nf4
* bnb_4bit_use_double_quant: True
* bnb_4bit_compute_dtype: float16
Framework versions
* PEFT 0.4.0
### Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
Details for the dataset can be found in [LongShort Dataset](https://github.com/brief-ai-uchicago/LongShort-Dataset)
Training data includes:
- 5000 Earnings Call Documents
## How to use
This model can be easily used and deployed using HuggingFace's ecosystem. This needs `transformers` and `accelerate` installed. The model can be downloaded as follows:
[LongShort-Mistral-7B](https://huggingface.co/briefai/LongShort-Mistral-7B)
## Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pre-trained base model that can be further fine-tuned for specific tasks. The use cases below are not exhaustive.
### Direct Use
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
### Downstream Use
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
#### Out-of-scope Uses
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but may not be correct.
Out-of-scope Uses Include:
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### Misuse
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
## Intended Users
### Direct Users
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Financial Industry
# Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
- Induce users into attributing human traits to it, such as sentience or consciousness
# Evaluation
*This section describes the evaluation protocols and provides the results.*
Result: LongShort-Llama-2-13B gives 43.4% accuracy on a validation set of 10% of the original training dataset.
**Train-time Evaluation:**
Final checkpoint after 300 epochs:
- Training Loss: 1.228
# Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
# Model Card Authors
Vishal Parameshwaran, Garima Sohi, Jose Gerala, Sanchit Narayan Kumar
| null |
Non_BioNLP
|
# LongShort-Mistral-7B
### Model Description
LongShort-Mistral-7B is a large language model fine-tuned on earnings call documents to extract financial KPIs from the earnings call documents. It is based on the Mistral-7B Instruct Architecture.
- Model creator: [Brief AI](https://huggingface.co/briefai)
- Original model: [Mistral-7B-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
### Dataset Description
- Data Source: Factiva
- Data Description: 28K+ Earnings Call Documents
- Data Scope: 1K+ public companies
- Fine Tuning Data: Collection of 60K+ samples.
## Prompt template: LongShort-Mistral-7B
```
[INST]Given the context, answer the question.
### Question:
Extract all the finance-based performance indicators and evaluation metrics.
### Context:
{context}
### Answer:
[/INST]
```
## Basics
*This section provides information about the model type, version, license, funders, release date, developers, and contact information.*
*It is useful for anyone who wants to reference the model.*
**Developed by:** [Brief AI Team](https://huggingface.co/briefai)
**Model Type:** Transformer-based Large Language Model
**Version:** 1.0.0
**Languages:** English
**License:** Apache 2.0
**Release Date Estimate:** Wednesday, 29.November.2023
**Send Questions to:** [email protected]
**Cite as:** Brief AI LongShort Language Model
**Funded by:** UChicago Data Science Institute
**Mentored by:** Nick Kadochnikov
## Technical Specifications
*This section includes details about the model objective and architecture, and the compute infrastructure.*
*It is useful for people interested in model development.*
Please see [the LongShort training README](https://github.com/brief-ai-uchicago/LongShort-Dataset) for full details on replicating training.
### Model Architecture and Objective
* Modified from Mistral-7B-Instruct
**Objective:** Financial KPI extraction from earnings call documents.
### Hardware and Software - Compute Infrastructure
* 4 NVIDIA L4 GPUs & 48 vCPUs
* Environment: PyTorch (pytorch-2.0 w/ CUDA-11.8; see [Github link](https://github.com/pytorch/pytorch))
* CPU: GCP G2 Standard 48 (Platform: Intel Cascade Lake) (Accelerator Optimized)
* CPU memory: 192GB RAM
* GPU memory: 30GB per GPU
## Training
*This section provides information about the training.*
*It is useful for people who want to learn more about the model inputs and training footprint.*
The following bits and bytes quantization config was used during training:
* quant_method: bitsandbytes
* load_in_8bit: False
* load_in_4bit: True
* llm_int8_threshold: 6.0
* llm_int8_skip_modules: None
* llm_int8_enable_fp32_cpu_offload: False
* llm_int8_has_fp16_weight: False
* bnb_4bit_quant_type: nf4
* bnb_4bit_use_double_quant: True
* bnb_4bit_compute_dtype: float16
Framework versions
* PEFT 0.4.0
### Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
Details for the dataset can be found in [LongShort Dataset](https://github.com/brief-ai-uchicago/LongShort-Dataset)
Training data includes:
- 5000 Earnings Call Documents
## How to use
This model can be easily used and deployed using HuggingFace's ecosystem. This needs `transformers` and `accelerate` installed. The model can be downloaded as follows:
[LongShort-Mistral-7B](https://huggingface.co/briefai/LongShort-Mistral-7B)
## Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pre-trained base model that can be further fine-tuned for specific tasks. The use cases below are not exhaustive.
### Direct Use
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
### Downstream Use
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
#### Out-of-scope Uses
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but may not be correct.
Out-of-scope Uses Include:
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### Misuse
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
## Intended Users
### Direct Users
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Financial Industry
# Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
- Induce users into attributing human traits to it, such as sentience or consciousness
# Evaluation
*This section describes the evaluation protocols and provides the results.*
Result: LongShort-Llama-2-13B gives 43.4% accuracy on a validation set of 10% of the original training dataset.
**Train-time Evaluation:**
Final checkpoint after 300 epochs:
- Training Loss: 1.228
# Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
# Model Card Authors
Vishal Parameshwaran, Garima Sohi, Jose Gerala, Sanchit Narayan Kumar
|
{"datasets": ["briefai/LongShort-Dataset"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["pytorch", "mistral", "Gen-AI", "Finance", "KPI Extraction"]}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 46,728 |
RichardErkhov/artificialguybr_-_Gemma2-2B-OpenHermes2.5-4bits
|
RichardErkhov
| null |
[
"safetensors",
"gemma2",
"4-bit",
"bitsandbytes",
"region:us"
] | 2025-01-11T07:24:32Z |
2025-01-11T07:25:38+00:00
| 5 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Gemma2-2B-OpenHermes2.5 - bnb 4bits
- Model creator: https://huggingface.co/artificialguybr/
- Original model: https://huggingface.co/artificialguybr/Gemma2-2B-OpenHermes2.5/
Original model description:
---
tags:
- GEMMA
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
model-index:
- name: google/gemma-2-2b
results: []
license: apache-2.0
language:
- en
library_name: transformers
datasets:
- teknium/OpenHermes-2.5
---
# Model Card for GEMMA2-2B-openhermes-2.5
This model is a fine-tuned version of Gemma 2 -2B on the OpenHermes-2.5 dataset.
## Model Details
### Model Description
This is a fine-tuned version of the google/gemma-2-2b model, trained on the OpenHermes-2.5 dataset. It is designed for instruction following and general language tasks.
- **Developed by:** artificialguybr
- **Model type:** Causal Language Model
- **Language(s):** English
- **License:** apache-2.0
- **Finetuned from model:** google/gemma-2-2b
### Model Sources
- **Repository:** https://huggingface.co/artificialguybr/Gemma2-2B-OpenHermes2.5
## Uses
This model can be used for various natural language processing tasks, particularly those involving instruction following and general language understanding.
### Direct Use
The model can be used for tasks such as text generation, question answering, and other language-related applications.
### Out-of-Scope Use
The model should not be used for generating harmful or biased content. Users should be aware of potential biases in the training data.
## Training Details
### Training Data
The model was fine-tuned on the teknium/OpenHermes-2.5 dataset.
### Training Procedure
#### Hardware and Software
- **Hardware:** NVIDIA A100-SXM4-80GB (1 GPU)
- **Software Framework:** 🤗 Transformers, Axolotl
## Limitations and Biases
More information is needed about specific limitations and biases of this model.
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Gemma2-2B-OpenHermes2.5 - bnb 4bits
- Model creator: https://huggingface.co/artificialguybr/
- Original model: https://huggingface.co/artificialguybr/Gemma2-2B-OpenHermes2.5/
Original model description:
---
tags:
- GEMMA
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
model-index:
- name: google/gemma-2-2b
results: []
license: apache-2.0
language:
- en
library_name: transformers
datasets:
- teknium/OpenHermes-2.5
---
# Model Card for GEMMA2-2B-openhermes-2.5
This model is a fine-tuned version of Gemma 2 -2B on the OpenHermes-2.5 dataset.
## Model Details
### Model Description
This is a fine-tuned version of the google/gemma-2-2b model, trained on the OpenHermes-2.5 dataset. It is designed for instruction following and general language tasks.
- **Developed by:** artificialguybr
- **Model type:** Causal Language Model
- **Language(s):** English
- **License:** apache-2.0
- **Finetuned from model:** google/gemma-2-2b
### Model Sources
- **Repository:** https://huggingface.co/artificialguybr/Gemma2-2B-OpenHermes2.5
## Uses
This model can be used for various natural language processing tasks, particularly those involving instruction following and general language understanding.
### Direct Use
The model can be used for tasks such as text generation, question answering, and other language-related applications.
### Out-of-Scope Use
The model should not be used for generating harmful or biased content. Users should be aware of potential biases in the training data.
## Training Details
### Training Data
The model was fine-tuned on the teknium/OpenHermes-2.5 dataset.
### Training Procedure
#### Hardware and Software
- **Hardware:** NVIDIA A100-SXM4-80GB (1 GPU)
- **Software Framework:** 🤗 Transformers, Axolotl
## Limitations and Biases
More information is needed about specific limitations and biases of this model.
|
{}
|
task
|
[
"QUESTION_ANSWERING"
] | 46,729 |
odunola/distillbert-distilled-ag-news-2
|
odunola
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"base_model:google/bert_uncased_L-8_H-256_A-4",
"base_model:finetune:google/bert_uncased_L-8_H-256_A-4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-13T21:50:30Z |
2023-11-14T16:30:47+00:00
| 6 | 0 |
---
base_model: google/bert_uncased_L-8_H-256_A-4
datasets:
- ag_news
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distillbert-distilled-ag-news-2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ag_news
type: ag_news
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.9407916666666667
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-distilled-ag-news-2
This model is a fine-tuned version of [google/bert_uncased_L-8_H-256_A-4](https://huggingface.co/google/bert_uncased_L-8_H-256_A-4) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1945
- Accuracy: 0.9408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.238 | 1.0 | 3000 | 0.2240 | 0.9237 |
| 0.1873 | 2.0 | 6000 | 0.2009 | 0.9329 |
| 0.1597 | 3.0 | 9000 | 0.1919 | 0.9377 |
| 0.1495 | 4.0 | 12000 | 0.1948 | 0.9400 |
| 0.1303 | 5.0 | 15000 | 0.1945 | 0.9408 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-distilled-ag-news-2
This model is a fine-tuned version of [google/bert_uncased_L-8_H-256_A-4](https://huggingface.co/google/bert_uncased_L-8_H-256_A-4) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1945
- Accuracy: 0.9408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.238 | 1.0 | 3000 | 0.2240 | 0.9237 |
| 0.1873 | 2.0 | 6000 | 0.2009 | 0.9329 |
| 0.1597 | 3.0 | 9000 | 0.1919 | 0.9377 |
| 0.1495 | 4.0 | 12000 | 0.1948 | 0.9400 |
| 0.1303 | 5.0 | 15000 | 0.1945 | 0.9408 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
{"base_model": "google/bert_uncased_L-8_H-256_A-4", "datasets": ["ag_news"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distillbert-distilled-ag-news-2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ag_news", "type": "ag_news", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9407916666666667, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,730 |
cvapict/yhi-message-topic-all-MiniLM-L12-v2
|
cvapict
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-09-20T12:02:48Z |
2023-09-20T21:50:30+00:00
| 23 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# cvapict/yhi-message-type-v2-all-MiniLM-L12-v2
{'accuracy': 0.8269230769230769}
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("cvapict/yhi-message-type-v2-all-MiniLM-L12-v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# cvapict/yhi-message-type-v2-all-MiniLM-L12-v2
{'accuracy': 0.8269230769230769}
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("cvapict/yhi-message-type-v2-all-MiniLM-L12-v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,731 |
FabsCool/autotrain-T5Base1_1-728922203
|
FabsCool
|
text2text-generation
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"unk",
"dataset:FabsCool/autotrain-data-T5Base1_1",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-04-11T06:19:13Z |
2022-04-11T10:31:58+00:00
| 116 | 0 |
---
datasets:
- FabsCool/autotrain-data-T5Base1_1
language: unk
tags:
- a
- u
- t
- o
- r
- i
- n
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions: 583.728921803621
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 728922203
- CO2 Emissions (in grams): 583.728921803621
## Validation Metrics
- Loss: 1.2922444343566895
- Rouge1: 54.3928
- Rouge2: 31.666
- RougeL: 50.3552
- RougeLsum: 50.3694
- Gen Len: 13.3425
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/FabsCool/autotrain-T5Base1_1-728922203
```
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 728922203
- CO2 Emissions (in grams): 583.728921803621
## Validation Metrics
- Loss: 1.2922444343566895
- Rouge1: 54.3928
- Rouge2: 31.666
- RougeL: 50.3552
- RougeLsum: 50.3694
- Gen Len: 13.3425
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/FabsCool/autotrain-T5Base1_1-728922203
```
|
{"datasets": ["FabsCool/autotrain-data-T5Base1_1"], "language": "unk", "tags": ["a", "u", "t", "o", "r", "i", "n"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": 583.728921803621}
|
task
|
[
"SUMMARIZATION"
] | 46,732 |
AntX-ai/AntX-7B
|
AntX-ai
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"dataset:BAAI/COIG-PC",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-07-29T02:31:06Z |
2023-07-29T03:05:21+00:00
| 18 | 2 |
---
datasets:
- BAAI/COIG-PC
language:
- zh
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is an experimental product that can be used to create new LLM bassed on Chinese language.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** yjf9966
- **Model type:** LLaMA with enhanced tokenizer-size-49954
- **Language(s) (NLP):** Chinese/English
- **License:** Apache-2.0
- **Finetuned from model:** [Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/AntX-ai/AntX-7B
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
You can use the raw model for next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions.
It also inherits some of the bias of its dataset model.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
import torch
base_model_name = "AntX-ai/AntX-7B"
load_type = torch.float16
device = None
generation_config = dict(
temperature=0.2,
top_k=40,
top_p=0.9,
do_sample=True,
num_beams=1,
repetition_penalty=1.3,
max_new_tokens=400
)
prompt_input = (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n\n{instruction}\n\n### Response:\n\n"
)
if torch.cuda.is_available():
device = torch.device(0)
else:
device = torch.device('cpu')
def generate_prompt(instruction, input=None):
if input:
instruction = instruction + '\n' + input
return prompt_input.format_map({'instruction': instruction})
tokenizer = LlamaTokenizer.from_pretrained(base_model_name)
model = LlamaForCausalLM.from_pretrained(
base_model_name,
load_in_8bit=False,
torch_dtype=load_type,
low_cpu_mem_usage=True,
device_map='auto',
)
model_vocab_size = model.get_input_embeddings().weight.size(0)
tokenzier_vocab_size = len(tokenizer)
if model_vocab_size != tokenzier_vocab_size:
model.resize_token_embeddings(tokenzier_vocab_size)
raw_input_text = input("Input:")
input_text = generate_prompt(instruction=raw_input_text)
inputs = tokenizer(input_text, return_tensors="pt")
generation_output = model.generate(
input_ids=inputs["input_ids"].to(device),
attention_mask=inputs['attention_mask'].to(device),
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
**generation_config
)
s = generation_output[0]
output = tokenizer.decode(s, skip_special_tokens=True)
response = output.split("### Response:")[1].strip()
print("Response: ", response)
print("\n")
```
## Training Details
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
80% for train dataset and 20% for test dataset
#### Training Hyperparameters
- **Training regime:** fp16 mixed precision, lr=1e-4, lora_rank=8, lora_alpha=32 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
## Evaluation
#### Testing Data
<!-- This should link to a Data Card if possible. -->
20% of the BAAI/COIG-PC dataset.
```
Input:王国维说:“自周之衰,文王、周公势力之瓦解也,国民之智力成熟于内,政治之纷乱乘之于外,上无统一之制度,下迫于社会之要求,于是诸于九流各创其学说。” 他意在说明 A. 分封制的崩溃 B. 商鞅变法的作用 C. 兼并战争的后果 D. 百家争鸣的原因
Response: 本题考查对材料的理解。A错误;B正确;C和D与材料无关。故选BC两项即可
Input:经济基础是指一定社会发展阶段占统治地位的生产关系各个方面的总和。解决了地方经济问题, 也就解 决了地方割据问题, 为此, 宋太祖采取的措施是( ) A . 地方设转运使, 财赋收归中央 B . 派文臣担任各地州县的长官 C . 派文臣管理地方政事 D . 采取分化事权办法, 削弱相权
Response: A: 本题考查对宋太祖治下地方问题的认识。 A : 依据材料可知, 在北宋时期 , 由于地主阶级的发展壮大以及商业、手工业等新兴行业出现 , 这一时期出现的地方割据现象严重威胁着国家的统一与稳定 . 因此 , 为了解决这个问题, 需要加强中央集权 ; 故选 A
Input:go\\n/**\\n * Definition for a binary tree node.\\n * type TreeNode struct {\\n * Val int\\n * Left *TreeNode\\n * Right *TreeNode\\n * }\\n */\\nfunc maxDepth(root *TreeNode) int {\\n\\tif root == nil {\\n\\t\\treturn 0\\n\\t}\\n\\tl, r := maxDepth(root.Left), maxDepth(root.Right)\\n\\treturn 1 + max(l, r)\\n}\\n\\nfunc max(a, b int) int {\\n\\tif a > b {\\n\\t\\treturn a\\n\\t}\\n\\treturn b\\n}\\n
Response: ```python
def max_depth(root): # The function takes in a single parameter `root` and returns its maximum depth value as output.
if not root or len(root.children()) == 0:
return 0
l = max_depth(root.left)
r = max_depth(root.right)
return 1 + (max(l, r))
class TreeNode:
def __init__(self, val=None, left=10, right=0):
self.val = val
self.left = None
self.right = None
```
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
@software{AntX-7B,
title={An Enchanced Chinese Language Model based on the Chinese-LLaMA-Alpaca},
url={https://huggingface.co/AntX-ai/AntX-7B},
year={2023}
}
```
| null |
Non_BioNLP
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is an experimental product that can be used to create new LLM bassed on Chinese language.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** yjf9966
- **Model type:** LLaMA with enhanced tokenizer-size-49954
- **Language(s) (NLP):** Chinese/English
- **License:** Apache-2.0
- **Finetuned from model:** [Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/AntX-ai/AntX-7B
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
You can use the raw model for next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions.
It also inherits some of the bias of its dataset model.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
import torch
base_model_name = "AntX-ai/AntX-7B"
load_type = torch.float16
device = None
generation_config = dict(
temperature=0.2,
top_k=40,
top_p=0.9,
do_sample=True,
num_beams=1,
repetition_penalty=1.3,
max_new_tokens=400
)
prompt_input = (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n\n{instruction}\n\n### Response:\n\n"
)
if torch.cuda.is_available():
device = torch.device(0)
else:
device = torch.device('cpu')
def generate_prompt(instruction, input=None):
if input:
instruction = instruction + '\n' + input
return prompt_input.format_map({'instruction': instruction})
tokenizer = LlamaTokenizer.from_pretrained(base_model_name)
model = LlamaForCausalLM.from_pretrained(
base_model_name,
load_in_8bit=False,
torch_dtype=load_type,
low_cpu_mem_usage=True,
device_map='auto',
)
model_vocab_size = model.get_input_embeddings().weight.size(0)
tokenzier_vocab_size = len(tokenizer)
if model_vocab_size != tokenzier_vocab_size:
model.resize_token_embeddings(tokenzier_vocab_size)
raw_input_text = input("Input:")
input_text = generate_prompt(instruction=raw_input_text)
inputs = tokenizer(input_text, return_tensors="pt")
generation_output = model.generate(
input_ids=inputs["input_ids"].to(device),
attention_mask=inputs['attention_mask'].to(device),
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
**generation_config
)
s = generation_output[0]
output = tokenizer.decode(s, skip_special_tokens=True)
response = output.split("### Response:")[1].strip()
print("Response: ", response)
print("\n")
```
## Training Details
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
80% for train dataset and 20% for test dataset
#### Training Hyperparameters
- **Training regime:** fp16 mixed precision, lr=1e-4, lora_rank=8, lora_alpha=32 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
## Evaluation
#### Testing Data
<!-- This should link to a Data Card if possible. -->
20% of the BAAI/COIG-PC dataset.
```
Input:王国维说:“自周之衰,文王、周公势力之瓦解也,国民之智力成熟于内,政治之纷乱乘之于外,上无统一之制度,下迫于社会之要求,于是诸于九流各创其学说。” 他意在说明 A. 分封制的崩溃 B. 商鞅变法的作用 C. 兼并战争的后果 D. 百家争鸣的原因
Response: 本题考查对材料的理解。A错误;B正确;C和D与材料无关。故选BC两项即可
Input:经济基础是指一定社会发展阶段占统治地位的生产关系各个方面的总和。解决了地方经济问题, 也就解 决了地方割据问题, 为此, 宋太祖采取的措施是( ) A . 地方设转运使, 财赋收归中央 B . 派文臣担任各地州县的长官 C . 派文臣管理地方政事 D . 采取分化事权办法, 削弱相权
Response: A: 本题考查对宋太祖治下地方问题的认识。 A : 依据材料可知, 在北宋时期 , 由于地主阶级的发展壮大以及商业、手工业等新兴行业出现 , 这一时期出现的地方割据现象严重威胁着国家的统一与稳定 . 因此 , 为了解决这个问题, 需要加强中央集权 ; 故选 A
Input:go\\n/**\\n * Definition for a binary tree node.\\n * type TreeNode struct {\\n * Val int\\n * Left *TreeNode\\n * Right *TreeNode\\n * }\\n */\\nfunc maxDepth(root *TreeNode) int {\\n\\tif root == nil {\\n\\t\\treturn 0\\n\\t}\\n\\tl, r := maxDepth(root.Left), maxDepth(root.Right)\\n\\treturn 1 + max(l, r)\\n}\\n\\nfunc max(a, b int) int {\\n\\tif a > b {\\n\\t\\treturn a\\n\\t}\\n\\treturn b\\n}\\n
Response: ```python
def max_depth(root): # The function takes in a single parameter `root` and returns its maximum depth value as output.
if not root or len(root.children()) == 0:
return 0
l = max_depth(root.left)
r = max_depth(root.right)
return 1 + (max(l, r))
class TreeNode:
def __init__(self, val=None, left=10, right=0):
self.val = val
self.left = None
self.right = None
```
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
@software{AntX-7B,
title={An Enchanced Chinese Language Model based on the Chinese-LLaMA-Alpaca},
url={https://huggingface.co/AntX-ai/AntX-7B},
year={2023}
}
```
|
{"datasets": ["BAAI/COIG-PC"], "language": ["zh"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation"}
|
task
|
[
"QUESTION_ANSWERING"
] | 46,734 |
IronOne-AI-Labs/long-t5-16k-annual-report-QLoRA-fine-tuned-v1.1
|
IronOne-AI-Labs
| null |
[
"transformers",
"safetensors",
"Summarization",
"S",
"u",
"m",
"a",
"r",
"i",
"z",
"t",
"o",
"n",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | 2024-07-26T13:04:22Z |
2024-07-26T13:04:32+00:00
| 0 | 0 |
---
library_name: transformers
tags:
- Summarization
- S
- u
- m
- a
- r
- i
- z
- t
- o
- n
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["Summarization", "S", "u", "m", "a", "r", "i", "z", "t", "o", "n"]}
|
task
|
[
"SUMMARIZATION"
] | 46,735 |
Thannok1727/Prompt1727
|
Thannok1727
|
summarization
|
[
"adapter-transformers",
"summarization",
"en",
"dataset:HuggingFaceFV/finevideo",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:cc-by-nc-2.0",
"region:us"
] | 2024-10-13T19:54:24Z |
2024-10-13T19:58:27+00:00
| 0 | 0 |
---
base_model:
- meta-llama/Llama-3.2-1B
datasets:
- HuggingFaceFV/finevideo
language:
- en
library_name: adapter-transformers
license: cc-by-nc-2.0
pipeline_tag: summarization
new_version: meta-llama/Llama-3.2-11B-Vision-Instruct
---
| null |
Non_BioNLP
|
{"base_model": ["meta-llama/Llama-3.2-1B"], "datasets": ["HuggingFaceFV/finevideo"], "language": ["en"], "library_name": "adapter-transformers", "license": "cc-by-nc-2.0", "pipeline_tag": "summarization", "new_version": "meta-llama/Llama-3.2-11B-Vision-Instruct"}
|
task
|
[
"SUMMARIZATION"
] | 46,736 |
|
philschmid/openai-whisper-endpoint
|
philschmid
|
automatic-speech-recognition
|
[
"generic",
"audio",
"automatic-speech-recognition",
"endpoints-template",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2022-09-23T20:27:44Z |
2022-09-23T21:26:56+00:00
| 0 | 11 |
---
library_name: generic
license: mit
tags:
- audio
- automatic-speech-recognition
- endpoints-template
inference: false
---
# OpenAI [Whisper](https://github.com/openai/whisper) Inference Endpoint example
> Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
For more information about the model, license and limitations check the original repository at [openai/whisper](https://github.com/openai/whisper).
---
This repository implements a custom `handler` task for `automatic-speech-recognition` for 🤗 Inference Endpoints using OpenAIs new Whisper model. The code for the customized pipeline is in the [pipeline.py](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/create_handler.ipynb) included, on how to create the `handler.py`
### Request
The endpoint expects a binary audio file. Below is a cURL example and a Python example using the `requests` library.
**curl**
```bash
# load audio file
wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
# run request
curl --request POST \
--url https://{ENDPOINT}/ \
--header 'Content-Type: audio/x-flac' \
--header 'Authorization: Bearer {HF_TOKEN}' \
--data-binary '@sample1.flac'
```
**Python**
```python
import json
from typing import List
import requests as r
import base64
import mimetypes
ENDPOINT_URL=""
HF_TOKEN=""
def predict(path_to_audio:str=None):
# read audio file
with open(path_to_audio, "rb") as i:
b = i.read()
# get mimetype
content_type= mimetypes.guess_type(path_to_audio)[0]
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": content_type
}
response = r.post(ENDPOINT_URL, headers=headers, data=b)
return response.json()
prediction = predict(path_to_audio="sample1.flac")
prediction
```
expected output
```json
{"text": " going along slushy country roads and speaking to damp audiences in draughty school rooms day after day for a fortnight. He'll have to put in an appearance at some place of worship on Sunday morning, and he can come to us immediately afterwards."}
```
| null |
Non_BioNLP
|
# OpenAI [Whisper](https://github.com/openai/whisper) Inference Endpoint example
> Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
For more information about the model, license and limitations check the original repository at [openai/whisper](https://github.com/openai/whisper).
---
This repository implements a custom `handler` task for `automatic-speech-recognition` for 🤗 Inference Endpoints using OpenAIs new Whisper model. The code for the customized pipeline is in the [pipeline.py](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/create_handler.ipynb) included, on how to create the `handler.py`
### Request
The endpoint expects a binary audio file. Below is a cURL example and a Python example using the `requests` library.
**curl**
```bash
# load audio file
wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
# run request
curl --request POST \
--url https://{ENDPOINT}/ \
--header 'Content-Type: audio/x-flac' \
--header 'Authorization: Bearer {HF_TOKEN}' \
--data-binary '@sample1.flac'
```
**Python**
```python
import json
from typing import List
import requests as r
import base64
import mimetypes
ENDPOINT_URL=""
HF_TOKEN=""
def predict(path_to_audio:str=None):
# read audio file
with open(path_to_audio, "rb") as i:
b = i.read()
# get mimetype
content_type= mimetypes.guess_type(path_to_audio)[0]
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": content_type
}
response = r.post(ENDPOINT_URL, headers=headers, data=b)
return response.json()
prediction = predict(path_to_audio="sample1.flac")
prediction
```
expected output
```json
{"text": " going along slushy country roads and speaking to damp audiences in draughty school rooms day after day for a fortnight. He'll have to put in an appearance at some place of worship on Sunday morning, and he can come to us immediately afterwards."}
```
|
{"library_name": "generic", "license": "mit", "tags": ["audio", "automatic-speech-recognition", "endpoints-template"], "inference": false}
|
task
|
[
"TRANSLATION"
] | 46,737 |
adowu/astral-256k-7b-v2
|
adowu
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"astral",
"256k",
"long",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-10T04:16:50Z |
2024-04-10T04:59:02+00:00
| 10 | 0 |
---
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- astral
- 256k
- long
- mistral
---
### ASTRAL-256k-7b-v2
The adowu/astral-256k-7b-v2 is a cutting-edge language model developed on the MistralForCausalLM architecture, designed for advanced causal language modeling tasks. This model stands out for its ability to understand and generate text with remarkable depth and context awareness, making it highly effective for a wide range of natural language processing (NLP) applications.
## Key Features
- Advanced Architecture: Utilizes the MistralForCausalLM framework, enabling efficient and effective text processing and generation.
- Large Model Scale: Equipped with a substantial model size, it captures and processes a vast amount of information, enhancing its understanding and generation capabilities.
- Extended Sequence Handling: Capable of managing exceptionally long sequences, this model excels in tasks requiring extensive contextual information.
## Performance and Efficiency
Optimized for high performance, the model employs techniques to balance computational efficiency with output precision. This optimization ensures it can be deployed effectively across various platforms, including those supporting bfloat16 computations, without significant loss in the quality of generated text.
## Application Potential
The model's sophisticated understanding and text generation capabilities make it ideal for several advanced applications:
- Content Generation: From articles and reports to creative writing, it can produce coherent and contextually rich content.
- Conversational Systems: Powers chatbots and virtual assistants, facilitating deep and meaningful interactions over extended conversations.
- Complex Language Understanding Tasks: Excellently performs in summarization, translation, and other tasks over large documents, showcasing its ability to handle detailed and nuanced language understanding.
- **Developed by:** aww
- **Model type:** Mistral
| null |
Non_BioNLP
|
### ASTRAL-256k-7b-v2
The adowu/astral-256k-7b-v2 is a cutting-edge language model developed on the MistralForCausalLM architecture, designed for advanced causal language modeling tasks. This model stands out for its ability to understand and generate text with remarkable depth and context awareness, making it highly effective for a wide range of natural language processing (NLP) applications.
## Key Features
- Advanced Architecture: Utilizes the MistralForCausalLM framework, enabling efficient and effective text processing and generation.
- Large Model Scale: Equipped with a substantial model size, it captures and processes a vast amount of information, enhancing its understanding and generation capabilities.
- Extended Sequence Handling: Capable of managing exceptionally long sequences, this model excels in tasks requiring extensive contextual information.
## Performance and Efficiency
Optimized for high performance, the model employs techniques to balance computational efficiency with output precision. This optimization ensures it can be deployed effectively across various platforms, including those supporting bfloat16 computations, without significant loss in the quality of generated text.
## Application Potential
The model's sophisticated understanding and text generation capabilities make it ideal for several advanced applications:
- Content Generation: From articles and reports to creative writing, it can produce coherent and contextually rich content.
- Conversational Systems: Powers chatbots and virtual assistants, facilitating deep and meaningful interactions over extended conversations.
- Complex Language Understanding Tasks: Excellently performs in summarization, translation, and other tasks over large documents, showcasing its ability to handle detailed and nuanced language understanding.
- **Developed by:** aww
- **Model type:** Mistral
|
{"language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["astral", "256k", "long", "mistral"]}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 46,738 |
srimoyee12/my_awesome_model
|
srimoyee12
|
text-classification
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-03-29T03:37:19Z |
2023-04-03T03:46:19+00:00
| 15 | 0 |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: srimoyee12/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# srimoyee12/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [Auditor Review Dataset](https://huggingface.co/datasets/demo-org/auditor_review).
It achieves the following results on the evaluation set:
- Train Loss: 0.1735
- Validation Loss: 0.3834
- Train Accuracy: 0.8524
- Epoch: 3
## Model description
This is a simple classifier model based on DistilBERT. It classifies given data into Negative, Neutral or Positive based on the sentiment.
## Intended uses & limitations
Can be used for text classification.
This is created for illustration purposes and might not have the highest accuracy.
## Training and evaluation data
Default split from the [dataset card](https://huggingface.co/datasets/demo-org/auditor_review)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1210, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5919 | 0.4004 | 0.8359 | 0 |
| 0.2881 | 0.3590 | 0.8473 | 1 |
| 0.1735 | 0.3834 | 0.8524 | 2 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# srimoyee12/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [Auditor Review Dataset](https://huggingface.co/datasets/demo-org/auditor_review).
It achieves the following results on the evaluation set:
- Train Loss: 0.1735
- Validation Loss: 0.3834
- Train Accuracy: 0.8524
- Epoch: 3
## Model description
This is a simple classifier model based on DistilBERT. It classifies given data into Negative, Neutral or Positive based on the sentiment.
## Intended uses & limitations
Can be used for text classification.
This is created for illustration purposes and might not have the highest accuracy.
## Training and evaluation data
Default split from the [dataset card](https://huggingface.co/datasets/demo-org/auditor_review)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1210, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5919 | 0.4004 | 0.8359 | 0 |
| 0.2881 | 0.3590 | 0.8473 | 1 |
| 0.1735 | 0.3834 | 0.8524 | 2 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "srimoyee12/my_awesome_model", "results": []}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,739 |
Apurva1205/Translation-Multi-Model-Belarusian-English
|
Apurva1205
| null |
[
"region:us"
] | 2024-12-11T00:33:21Z |
2024-12-11T00:39:38+00:00
| 0 | 0 |
---
metrics:
- accuracy
---
This repository hosts a comprehensive translation system integrating three different neural network architectures: RNN, LSTM, and GPT-2, designed for Belarusian-to-English translation tasks. The models demonstrate comparative performance and illustrate the strengths and limitations of each approach for sequence-to-sequence translation.

Model Details
1. RNN (Recurrent Neural Network)
Architecture: Basic RNN with embedding and dense layers.
Training Data: Belarusian-to-English sentence pairs.
Performance:
Train Loss: 1.9451 (Epoch 1) to 8.2370 (Epoch 10)
Validation Loss: 0.8365 (Epoch 1) to 0.1952 (Epoch 10)
Accuracy: 95.92%
2. LSTM (Long Short-Term Memory)
Architecture: LSTM-based encoder-decoder with attention.
Training Data: Same dataset as RNN.
Performance:
Train Loss: 1.5918 (Epoch 1) to 0.0003 (Epoch 10)
Validation Loss: 1.2702 (Epoch 1) to 0.5693 (Epoch 10)
Accuracy: 95.92%
3. GPT-2 (Generative Pre-trained Transformer 2)
Architecture: Fine-tuned GPT-2 model for translation tasks.
Training Data: Belarusian-to-English formatted sentences.
Performance:
Training Steps: 50 steps
Loss: 1.3228 (Step 1) to 0.8987 (Step 50)
| null |
Non_BioNLP
|
This repository hosts a comprehensive translation system integrating three different neural network architectures: RNN, LSTM, and GPT-2, designed for Belarusian-to-English translation tasks. The models demonstrate comparative performance and illustrate the strengths and limitations of each approach for sequence-to-sequence translation.

Model Details
1. RNN (Recurrent Neural Network)
Architecture: Basic RNN with embedding and dense layers.
Training Data: Belarusian-to-English sentence pairs.
Performance:
Train Loss: 1.9451 (Epoch 1) to 8.2370 (Epoch 10)
Validation Loss: 0.8365 (Epoch 1) to 0.1952 (Epoch 10)
Accuracy: 95.92%
2. LSTM (Long Short-Term Memory)
Architecture: LSTM-based encoder-decoder with attention.
Training Data: Same dataset as RNN.
Performance:
Train Loss: 1.5918 (Epoch 1) to 0.0003 (Epoch 10)
Validation Loss: 1.2702 (Epoch 1) to 0.5693 (Epoch 10)
Accuracy: 95.92%
3. GPT-2 (Generative Pre-trained Transformer 2)
Architecture: Fine-tuned GPT-2 model for translation tasks.
Training Data: Belarusian-to-English formatted sentences.
Performance:
Training Steps: 50 steps
Loss: 1.3228 (Step 1) to 0.8987 (Step 50)
|
{"metrics": ["accuracy"]}
|
task
|
[
"TRANSLATION"
] | 46,740 |
gsarti/mt5-small-news-summarization
|
gsarti
|
summarization
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"italian",
"sequence-to-sequence",
"fanpage",
"ilpost",
"summarization",
"it",
"dataset:ARTeLab/fanpage",
"dataset:ARTeLab/ilpost",
"arxiv:2203.03759",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2022-03-09T07:52:27+00:00
| 147 | 0 |
---
datasets:
- ARTeLab/fanpage
- ARTeLab/ilpost
language:
- it
license: apache-2.0
metrics:
- rouge
tags:
- italian
- sequence-to-sequence
- fanpage
- ilpost
- summarization
widget:
- text: 'Non lo vuole sposare. E’ quanto emerge all’interno dell’ultima intervista
di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo
fidanzato, rimanda l’idea del matrimonio per qualche anno ancora. La soubrette,
che è stata recentemente protagonista di una dedica di Supermario, non ha ancora
intenzione di accasarsi perché è sicura che per mettersi la fede al dito ci sia
ancora tempo. Nonostante il suo Mario sia uno degli sportivi più desiderati al
mondo, l’ex protagonista del Grande Fratello non ha alcuna intenzione di cedere
seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l’ultima bravata
di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere
la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, però, si è
sbagliato. A mettere le cose bene in chiaro è la Fico che, intervistata dall’emittente
radiofonica Rtl 102.5, dice: È presto per sposarsi, siamo ancora molto giovani.
È giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perché
no, ci si può anche pensare. Quando si è giovani capita di fare qualche pazzia,
quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita
privata quando poi dovrebbero interessarsi di più di quello che fa sul campo.
Lui non fa le cose con cattiveria, ma quando si è giovani si fanno determinate
cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi
puntati addosso: più per la sua vita privata che come giocatore. Per me può anche
andare in uno strip club, se non fa niente di male, con gli amici, però devo dire
che alla fine torna sempre da me, sono la sua preferita.'
- text: 'Valerio è giovanissimo ma già una star. Fuori dall’Ariston ragazzine e meno
ragazzine passano ore anche sotto la pioggia per vederlo. Lui è forte del suo
talento e sicuro. Partecipa in gara tra i “big” di diritto, per essere arrivato
in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per
tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu è stato
eliminato. Ma non è detta l''ultima parola: il duetto di questa sera con Alessandra
Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa è successo alla
giuria visto che sei stato eliminato anche se l’esibizione era perfetta? Nn lo
so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento
ma ho cantato bene. Non sono passato e stasera ci sarà il ballottaggio… Quali
sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara
a salire sul palco di amici. A Sanremo ci devi arrivare… ho fatto più di sessanta
serate nel tour estivo, poi promozione del secondo disco. Una bella palestra.
Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico
trasmette. L’umiltà? Prima di tutto. Sennò non sarei qui.'
- text: L’azienda statunitense Broadcom, uno dei più grandi produttori di semiconduttori
al mondo, ha presentato un’offerta per acquisire Qualcomm, altra grande società
degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori
Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per
il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo
di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130
miliardi se si comprendono 25 miliardi di debiti netti) . Se l’operazione dovesse
essere approvata, sarebbe una delle più grandi acquisizioni di sempre nella storia
del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la
sua proposta di acquisto e, secondo i media statunitensi, avrebbe già preso contatti
con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque
opporsi all’acquisizione perché il prezzo offerto è di poco superiore a quello
dell’attuale valore delle azioni dell’azienda. Ci potrebbero essere inoltre complicazioni
sul piano dell’antitrust da valutare, prima di un’eventuale acquisizione.
- text: Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da
quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini
ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire
a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi
sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri
precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito,
contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli
teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore
e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti
di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay,
siano invece disponibili gratuitamente.
co2_eq_emissions:
emissions: 17g
source: Google Cloud Platform Carbon Footprint
training_type: fine-tuning
geographical_location: Eemshaven, Netherlands, Europe
hardware_used: 1 TPU v3-8 VM
thumbnail: https://gsarti.com/publication/it5/featured.png
model-index:
- name: mt5-small-news-summarization
results:
- task:
type: news-summarization
name: News Summarization
dataset:
name: NewsSum-IT
type: newssum-it
metrics:
- type: rouge1
value: 0.32
name: Test Rouge1 IlPost
- type: rouge2
value: 0.154
name: Test Rouge2 IlPost
- type: rougeL
value: 0.26
name: Test RougeL IlPost
- type: bertscore
value: 0.38
name: Test BERTScore IlPost
args:
- model_type: dbmdz/bert-base-italian-xxl-uncased
- lang: it
- num_layers: 10
- rescale_with_baseline: true
- baseline_path: bertscore_baseline_ita.tsv
- type: rouge1
value: 0.326
name: Test Rouge1 Fanpage
- type: rouge2
value: 0.145
name: Test Rouge2 Fanpage
- type: rougeL
value: 0.236
name: Test RougeL Fanpage
- type: bertscore
value: 0.37
name: Test BERTScore Fanpage
args:
- model_type: dbmdz/bert-base-italian-xxl-uncased
- lang: it
- num_layers: 10
- rescale_with_baseline: true
- baseline_path: bertscore_baseline_ita.tsv
---
# mT5 Small for News Summarization ✂️🗞️ 🇮🇹
This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipelines
newsum = pipeline("summarization", model='it5/mt5-small-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-news-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
| null |
Non_BioNLP
|
# mT5 Small for News Summarization ✂️🗞️ 🇮🇹
This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipelines
newsum = pipeline("summarization", model='it5/mt5-small-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-news-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
{"datasets": ["ARTeLab/fanpage", "ARTeLab/ilpost"], "language": ["it"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["italian", "sequence-to-sequence", "fanpage", "ilpost", "summarization"], "widget": [{"text": "Non lo vuole sposare. E’ quanto emerge all’interno dell’ultima intervista di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo fidanzato, rimanda l’idea del matrimonio per qualche anno ancora. La soubrette, che è stata recentemente protagonista di una dedica di Supermario, non ha ancora intenzione di accasarsi perché è sicura che per mettersi la fede al dito ci sia ancora tempo. Nonostante il suo Mario sia uno degli sportivi più desiderati al mondo, l’ex protagonista del Grande Fratello non ha alcuna intenzione di cedere seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l’ultima bravata di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, però, si è sbagliato. A mettere le cose bene in chiaro è la Fico che, intervistata dall’emittente radiofonica Rtl 102.5, dice: È presto per sposarsi, siamo ancora molto giovani. È giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perché no, ci si può anche pensare. Quando si è giovani capita di fare qualche pazzia, quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita privata quando poi dovrebbero interessarsi di più di quello che fa sul campo. Lui non fa le cose con cattiveria, ma quando si è giovani si fanno determinate cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi puntati addosso: più per la sua vita privata che come giocatore. Per me può anche andare in uno strip club, se non fa niente di male, con gli amici, però devo dire che alla fine torna sempre da me, sono la sua preferita."}, {"text": "Valerio è giovanissimo ma già una star. Fuori dall’Ariston ragazzine e meno ragazzine passano ore anche sotto la pioggia per vederlo. Lui è forte del suo talento e sicuro. Partecipa in gara tra i “big” di diritto, per essere arrivato in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu è stato eliminato. Ma non è detta l'ultima parola: il duetto di questa sera con Alessandra Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa è successo alla giuria visto che sei stato eliminato anche se l’esibizione era perfetta? Nn lo so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento ma ho cantato bene. Non sono passato e stasera ci sarà il ballottaggio… Quali sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara a salire sul palco di amici. A Sanremo ci devi arrivare… ho fatto più di sessanta serate nel tour estivo, poi promozione del secondo disco. Una bella palestra. Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico trasmette. L’umiltà? Prima di tutto. Sennò non sarei qui."}, {"text": "L’azienda statunitense Broadcom, uno dei più grandi produttori di semiconduttori al mondo, ha presentato un’offerta per acquisire Qualcomm, altra grande società degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130 miliardi se si comprendono 25 miliardi di debiti netti) . Se l’operazione dovesse essere approvata, sarebbe una delle più grandi acquisizioni di sempre nella storia del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la sua proposta di acquisto e, secondo i media statunitensi, avrebbe già preso contatti con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque opporsi all’acquisizione perché il prezzo offerto è di poco superiore a quello dell’attuale valore delle azioni dell’azienda. Ci potrebbero essere inoltre complicazioni sul piano dell’antitrust da valutare, prima di un’eventuale acquisizione."}, {"text": "Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente."}], "co2_eq_emissions": {"emissions": "17g", "source": "Google Cloud Platform Carbon Footprint", "training_type": "fine-tuning", "geographical_location": "Eemshaven, Netherlands, Europe", "hardware_used": "1 TPU v3-8 VM"}, "thumbnail": "https://gsarti.com/publication/it5/featured.png", "model-index": [{"name": "mt5-small-news-summarization", "results": [{"task": {"type": "news-summarization", "name": "News Summarization"}, "dataset": {"name": "NewsSum-IT", "type": "newssum-it"}, "metrics": [{"type": "rouge1", "value": 0.32, "name": "Test Rouge1 IlPost"}, {"type": "rouge2", "value": 0.154, "name": "Test Rouge2 IlPost"}, {"type": "rougeL", "value": 0.26, "name": "Test RougeL IlPost"}, {"type": "bertscore", "value": 0.38, "name": "Test BERTScore IlPost", "args": [{"model_type": "dbmdz/bert-base-italian-xxl-uncased"}, {"lang": "it"}, {"num_layers": 10}, {"rescale_with_baseline": true}, {"baseline_path": "bertscore_baseline_ita.tsv"}]}, {"type": "rouge1", "value": 0.326, "name": "Test Rouge1 Fanpage"}, {"type": "rouge2", "value": 0.145, "name": "Test Rouge2 Fanpage"}, {"type": "rougeL", "value": 0.236, "name": "Test RougeL Fanpage"}, {"type": "bertscore", "value": 0.37, "name": "Test BERTScore Fanpage", "args": [{"model_type": "dbmdz/bert-base-italian-xxl-uncased"}, {"lang": "it"}, {"num_layers": 10}, {"rescale_with_baseline": true}, {"baseline_path": "bertscore_baseline_ita.tsv"}]}]}]}]}
|
task
|
[
"SUMMARIZATION"
] | 46,741 |
aguinrodriguezj/finetuning-sentiment-model-3000-samples
|
aguinrodriguezj
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-21T12:00:56Z |
2023-11-21T12:11:40+00:00
| 7 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.8633333333333333
name: Accuracy
- type: f1
value: 0.8655737704918034
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3392
- Accuracy: 0.8633
- F1: 0.8656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null |
TBD
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3392
- Accuracy: 0.8633
- F1: 0.8656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
{"base_model": "distilbert-base-uncased", "datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.8633333333333333, "name": "Accuracy"}, {"type": "f1", "value": 0.8655737704918034, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,742 |
utter-project/EuroLLM-1.7B-Instruct
|
utter-project
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"de",
"es",
"fr",
"it",
"pt",
"pl",
"nl",
"tr",
"sv",
"cs",
"el",
"hu",
"ro",
"fi",
"uk",
"sl",
"sk",
"da",
"lt",
"lv",
"et",
"bg",
"no",
"ca",
"hr",
"ga",
"mt",
"gl",
"zh",
"ru",
"ko",
"ja",
"ar",
"hi",
"arxiv:2409.16235",
"base_model:utter-project/EuroLLM-1.7B",
"base_model:finetune:utter-project/EuroLLM-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-06T09:35:51Z |
2024-12-16T12:46:04+00:00
| 12,433 | 70 |
---
base_model:
- utter-project/EuroLLM-1.7B
language:
- en
- de
- es
- fr
- it
- pt
- pl
- nl
- tr
- sv
- cs
- el
- hu
- ro
- fi
- uk
- sl
- sk
- da
- lt
- lv
- et
- bg
- 'no'
- ca
- hr
- ga
- mt
- gl
- zh
- ru
- ko
- ja
- ar
- hi
library_name: transformers
license: apache-2.0
---
## *Model updated on September 24*
# Model Card for EuroLLM-1.7B-Instruct
This is the model card for the first instruction tuned model of the EuroLLM series: EuroLLM-1.7B-Instruct. You can also check the pre-trained version: [EuroLLM-1.7B](https://huggingface.co/utter-project/EuroLLM-1.7B).
- **Developed by:** Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université.
- **Funded by:** European Union.
- **Model type:** A 1.7B parameter instruction tuned multilingual transfomer LLM.
- **Language(s) (NLP):** Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
- **License:** Apache License 2.0.
## Model Details
The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages.
EuroLLM-1.7B is a 1.7B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: Web data, parallel data (en-xx and xx-en), and high-quality datasets.
EuroLLM-1.7B-Instruct was further instruction tuned on EuroBlocks, an instruction tuning dataset with focus on general instruction-following and machine translation.
### Model Description
EuroLLM uses a standard, dense Transformer architecture:
- We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance.
- We perform pre-layer normalization, since it improves the training stability, and use the RMSNorm, which is faster.
- We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks.
- We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performances while allowing the extension of the context length.
For pre-training, we use 256 Nvidia H100 GPUs of the Marenostrum 5 supercomputer, training the model with a constant batch size of 3,072 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer, and BF16 precision.
Here is a summary of the model hyper-parameters:
| | |
|--------------------------------------|----------------------|
| Sequence Length | 4,096 |
| Number of Layers | 24 |
| Embedding Size | 2,048 |
| FFN Hidden Size | 5,632 |
| Number of Heads | 16 |
| Number of KV Heads (GQA) | 8 |
| Activation Function | SwiGLU |
| Position Encodings | RoPE (\Theta=10,000) |
| Layer Norm | RMSNorm |
| Tied Embeddings | No |
| Embedding Parameters | 0.262B |
| LM Head Parameters | 0.262B |
| Non-embedding Parameters | 1.133B |
| Total Parameters | 1.657B |
## Run the model
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "utter-project/EuroLLM-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = '<|im_start|>system\n<|im_end|>\n<|im_start|>user\nTranslate the following English source text to Portuguese:\nEnglish: I am a language model for european languages. \nPortuguese: <|im_end|>\n<|im_start|>assistant\n'
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
## Results
### Machine Translation
We evaluate EuroLLM-1.7B-Instruct on several machine translation benchmarks: FLORES-200, WMT-23, and WMT-24 comparing it with [Gemma-2B](https://huggingface.co/google/gemma-2b) and [Gemma-7B](https://huggingface.co/google/gemma-7b) (also instruction tuned on EuroBlocks).
The results show that EuroLLM-1.7B is substantially better than Gemma-2B in Machine Translation and competitive with Gemma-7B.
#### Flores-200
| Model | AVG | AVG en-xx | AVG xx-en | en-ar | en-bg | en-ca | en-cs | en-da | en-de | en-el | en-es-latam | en-et | en-fi | en-fr | en-ga | en-gl | en-hi | en-hr | en-hu | en-it | en-ja | en-ko | en-lt | en-lv | en-mt | en-nl | en-no | en-pl | en-pt-br | en-ro | en-ru | en-sk | en-sl | en-sv | en-tr | en-uk | en-zh-cn | ar-en | bg-en | ca-en | cs-en | da-en | de-en | el-en | es-latam-en | et-en | fi-en | fr-en | ga-en | gl-en | hi-en | hr-en | hu-en | it-en | ja-en | ko-en | lt-en | lv-en | mt-en | nl-en | no-en | pl-en | pt-br-en | ro-en | ru-en | sk-en | sl-en | sv-en | tr-en | uk-en | zh-cn-en |
|--------------------------------|------|-----------|-----------|-------|-------|-------|-------|-------|-------|-------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|----------|
| EuroLLM-1.7B-Instruct |86.89 | 86.53 | 87.25 | 85.17 | 89.42 | 84.72 | 89.13 | 89.47 | 86.90 | 87.60 | 86.29 | 88.95 | 89.40 | 87.69 | 74.89 | 86.41 | 76.92 | 84.79 | 86.78 | 88.17 | 89.76 | 87.70 | 87.27 | 87.62 | 67.84 | 87.10 | 90.00 | 88.18 | 89.29 | 89.49 | 88.32 | 88.18 | 86.85 | 90.00 | 87.31 | 87.89 | 86.60 | 86.34 | 87.45 | 87.57 | 87.95 | 89.72 | 88.80 | 87.00 | 86.77 | 88.34 | 89.09 | 88.95 | 82.69 | 87.80 | 88.37 | 86.71 | 87.20 | 87.81 | 86.79 | 86.79 | 85.62 | 86.48 | 81.10 | 86.97 | 90.25 | 85.75 | 89.20 | 88.88 | 86.00 | 87.38 | 86.76 | 89.61 | 87.94 |
| Gemma-2B-EuroBlocks | 81.59 | 78.97 | 84.21 | 76.68 | 82.73 | 83.14 | 81.63 | 84.63 | 83.15 | 79.42 | 84.05 | 72.58 | 79.73 | 84.97 | 40.50 | 82.13 | 67.79 | 80.53 | 78.36 | 84.90 | 87.43 | 82.98 | 72.29 | 68.68 | 58.55 | 83.13 | 86.15 | 82.78 | 86.79 | 83.14 | 84.61 | 78.18 | 75.37 | 80.89 | 78.38 | 84.38 | 84.35 | 83.88 | 85.77 | 86.85 | 86.31 | 88.24 | 88.12 | 84.79 | 84.90 | 82.51 | 86.32 | 88.29 | 54.78 | 86.53 | 85.83 | 85.41 | 85.18 | 86.77 | 85.78 | 84.99 | 81.65 | 81.78 | 67.27 | 85.92 | 89.07 | 84.14 | 88.07 | 87.17 | 85.23 | 85.09 | 83.95 | 87.57 | 84.77 |
| Gemma-7B-EuroBlocks |85.27 | 83.90 | 86.64 | 86.38 | 87.87 | 85.74 | 84.25 | 85.69 | 81.49 | 85.52 | 86.93 | 62.83 | 84.96 | 75.34 | 84.93 | 83.91 | 86.92 | 88.19 | 86.11 | 81.73 | 80.55 | 66.85 | 85.31 | 89.36 | 85.87 | 88.62 | 88.06 | 86.67 | 84.79 | 82.71 | 86.45 | 85.19 | 86.67 | 85.77 | 86.36 | 87.21 | 88.09 | 87.17 | 89.40 | 88.26 | 86.74 | 86.73 | 87.25 | 88.87 | 88.81 | 72.45 | 87.62 | 87.86 | 87.08 | 87.01 | 87.58 | 86.92 | 86.70 | 85.10 | 85.74 | 77.81 | 86.83 | 90.40 | 85.41 | 89.04 | 88.77 | 86.13 | 86.67 | 86.32 | 89.27 | 87.92 |
#### WMT-23
| Model | AVG | AVG en-xx | AVG xx-en | AVG xx-xx | en-de | en-cs | en-uk | en-ru | en-zh-cn | de-en | uk-en | ru-en | zh-cn-en | cs-uk |
|--------------------------------|------|-----------|-----------|-----------|-------|-------|-------|-------|----------|-------|-------|-------|----------|-------|
| EuroLLM-1.7B-Instruct | 82.91 | 83.20 | 81.77 | 86.82 | 81.56 | 85.23 | 81.30 | 82.47 | 83.61 | 85.03 | 84.06 | 85.25 | 81.31 | 78.83 | 79.42 | 86.82 |
| Gemma-2B-EuroBlocks | 79.96 | 79.01 | 80.86 | 81.15 | 76.82 | 76.05 | 77.92 | 78.98 | 81.58 | 82.73 | 82.71 | 83.99 | 80.35 | 78.27 | 78.99 | 81.15 |
| Gemma-7B-EuroBlocks | 82.76 | 82.26 | 82.70 | 85.98 | 81.37 | 82.42 | 81.54 | 82.18 | 82.90 | 83.17 | 84.29 | 85.70 | 82.46 | 79.73 | 81.33 | 85.98 |
#### WMT-24
| Model | AVG | AVG en-xx | AVG xx-xx | en-de | en-es-latam | en-cs | en-ru | en-uk | en-ja | en-zh-cn | en-hi | cs-uk | ja-zh-cn |
|---------|------|------|-------|----|---|-------|-------|--------|--------|-------|-------|-------|-----|
| EuroLLM-1.7B-Instruct|79.32 | 79.32 | 79.34 | 79.42 | 80.67 | 80.55 | 78.65 | 80.12 | 82.96 | 80.60 | 71.59 | 83.48 | 75.20 |
|Gemma-2B-EuroBlocks| 74.72 | 74.41 | 75.97 | 74.93 | 78.81 | 70.54 | 74.90 | 75.84 | 79.48 | 78.06 | 62.70 | 79.87 | 72.07 |
|Gemma-7B-EuroBlocks| 78.67 | 78.34 | 80.00 | 78.88 | 80.47 | 78.55 | 78.55 | 80.12 | 80.55 | 78.90 | 70.71 | 84.33 | 75.66 |
### General Benchmarks
We also compare EuroLLM-1.7B with [TinyLlama-v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) and [Gemma-2B](https://huggingface.co/google/gemma-2b) on 3 general benchmarks: Arc Challenge and Hellaswag.
For the non-english languages we use the [Okapi](https://aclanthology.org/2023.emnlp-demo.28.pdf) datasets.
Results show that EuroLLM-1.7B is superior to TinyLlama-v1.1 and similar to Gemma-2B on Hellaswag but worse on Arc Challenge. This can be due to the lower number of parameters of EuroLLM-1.7B (1.133B non-embedding parameters against 1.981B).
#### Arc Challenge
| Model | Average | English | German | Spanish | French | Italian | Portuguese | Chinese | Russian | Dutch | Arabic | Swedish | Hindi | Hungarian | Romanian | Ukrainian | Danish | Catalan |
|--------------------|---------|---------|--------|---------|--------|---------|------------|---------|---------|-------|--------|---------|--------|-----------|----------|-----------|--------|---------|
| EuroLLM-1.7B | 0.3496 | 0.4061 | 0.3464 | 0.3684 | 0.3627 | 0.3738 | 0.3855 | 0.3521 | 0.3208 | 0.3507 | 0.3045 | 0.3605 | 0.2928 | 0.3271 | 0.3488 | 0.3516 | 0.3513 | 0.3396 |
| TinyLlama-v1.1 | 0.2650 | 0.3712 | 0.2524 | 0.2795 | 0.2883 | 0.2652 | 0.2906 | 0.2410 | 0.2669 | 0.2404 | 0.2310 | 0.2687 | 0.2354 | 0.2449 | 0.2476 | 0.2524 | 0.2494 | 0.2796 |
| Gemma-2B | 0.3617 | 0.4846 | 0.3755 | 0.3940 | 0.4080 | 0.3687 | 0.3872 | 0.3726 | 0.3456 | 0.3328 | 0.3122 | 0.3519 | 0.2851 | 0.3039 | 0.3590 | 0.3601 | 0.3565 | 0.3516 |
#### Hellaswag
| Model | Average | English | German | Spanish | French | Italian | Portuguese | Russian | Dutch | Arabic | Swedish | Hindi | Hungarian | Romanian | Ukrainian | Danish | Catalan |
|--------------------|---------|---------|--------|---------|--------|---------|------------|---------|--------|--------|---------|--------|-----------|----------|-----------|--------|---------|
| EuroLLM-1.7B | 0.4744 | 0.4760 | 0.6057 | 0.4793 | 0.5337 | 0.5298 | 0.5085 | 0.5224 | 0.4654 | 0.4949 | 0.4104 | 0.4800 | 0.3655 | 0.4097 | 0.4606 | 0.436 | 0.4702 | 0.4445 |
| TinyLlama-v1.1 |0.3674 | 0.6248 | 0.3650 | 0.4137 | 0.4010 | 0.3780 | 0.3892 | 0.3494 | 0.3588 | 0.2880 | 0.3561 | 0.2841 | 0.3073 | 0.3267 | 0.3349 | 0.3408 | 0.3613 |
| Gemma-2B |0.4666 | 0.7165 | 0.4756 | 0.5414 | 0.5180 | 0.4841 | 0.5081 | 0.4664 | 0.4655 | 0.3868 | 0.4383 | 0.3413 | 0.3710 | 0.4316 | 0.4291 | 0.4471 | 0.4448 |
## Bias, Risks, and Limitations
EuroLLM-1.7B-Instruct has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Paper
Paper: [EuroLLM: Multilingual Language Models for Europe](https://huggingface.co/papers/2409.16235)
| null |
Non_BioNLP
|
## *Model updated on September 24*
# Model Card for EuroLLM-1.7B-Instruct
This is the model card for the first instruction tuned model of the EuroLLM series: EuroLLM-1.7B-Instruct. You can also check the pre-trained version: [EuroLLM-1.7B](https://huggingface.co/utter-project/EuroLLM-1.7B).
- **Developed by:** Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université.
- **Funded by:** European Union.
- **Model type:** A 1.7B parameter instruction tuned multilingual transfomer LLM.
- **Language(s) (NLP):** Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
- **License:** Apache License 2.0.
## Model Details
The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages.
EuroLLM-1.7B is a 1.7B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: Web data, parallel data (en-xx and xx-en), and high-quality datasets.
EuroLLM-1.7B-Instruct was further instruction tuned on EuroBlocks, an instruction tuning dataset with focus on general instruction-following and machine translation.
### Model Description
EuroLLM uses a standard, dense Transformer architecture:
- We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance.
- We perform pre-layer normalization, since it improves the training stability, and use the RMSNorm, which is faster.
- We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks.
- We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performances while allowing the extension of the context length.
For pre-training, we use 256 Nvidia H100 GPUs of the Marenostrum 5 supercomputer, training the model with a constant batch size of 3,072 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer, and BF16 precision.
Here is a summary of the model hyper-parameters:
| | |
|--------------------------------------|----------------------|
| Sequence Length | 4,096 |
| Number of Layers | 24 |
| Embedding Size | 2,048 |
| FFN Hidden Size | 5,632 |
| Number of Heads | 16 |
| Number of KV Heads (GQA) | 8 |
| Activation Function | SwiGLU |
| Position Encodings | RoPE (\Theta=10,000) |
| Layer Norm | RMSNorm |
| Tied Embeddings | No |
| Embedding Parameters | 0.262B |
| LM Head Parameters | 0.262B |
| Non-embedding Parameters | 1.133B |
| Total Parameters | 1.657B |
## Run the model
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "utter-project/EuroLLM-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = '<|im_start|>system\n<|im_end|>\n<|im_start|>user\nTranslate the following English source text to Portuguese:\nEnglish: I am a language model for european languages. \nPortuguese: <|im_end|>\n<|im_start|>assistant\n'
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
## Results
### Machine Translation
We evaluate EuroLLM-1.7B-Instruct on several machine translation benchmarks: FLORES-200, WMT-23, and WMT-24 comparing it with [Gemma-2B](https://huggingface.co/google/gemma-2b) and [Gemma-7B](https://huggingface.co/google/gemma-7b) (also instruction tuned on EuroBlocks).
The results show that EuroLLM-1.7B is substantially better than Gemma-2B in Machine Translation and competitive with Gemma-7B.
#### Flores-200
| Model | AVG | AVG en-xx | AVG xx-en | en-ar | en-bg | en-ca | en-cs | en-da | en-de | en-el | en-es-latam | en-et | en-fi | en-fr | en-ga | en-gl | en-hi | en-hr | en-hu | en-it | en-ja | en-ko | en-lt | en-lv | en-mt | en-nl | en-no | en-pl | en-pt-br | en-ro | en-ru | en-sk | en-sl | en-sv | en-tr | en-uk | en-zh-cn | ar-en | bg-en | ca-en | cs-en | da-en | de-en | el-en | es-latam-en | et-en | fi-en | fr-en | ga-en | gl-en | hi-en | hr-en | hu-en | it-en | ja-en | ko-en | lt-en | lv-en | mt-en | nl-en | no-en | pl-en | pt-br-en | ro-en | ru-en | sk-en | sl-en | sv-en | tr-en | uk-en | zh-cn-en |
|--------------------------------|------|-----------|-----------|-------|-------|-------|-------|-------|-------|-------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|----------|
| EuroLLM-1.7B-Instruct |86.89 | 86.53 | 87.25 | 85.17 | 89.42 | 84.72 | 89.13 | 89.47 | 86.90 | 87.60 | 86.29 | 88.95 | 89.40 | 87.69 | 74.89 | 86.41 | 76.92 | 84.79 | 86.78 | 88.17 | 89.76 | 87.70 | 87.27 | 87.62 | 67.84 | 87.10 | 90.00 | 88.18 | 89.29 | 89.49 | 88.32 | 88.18 | 86.85 | 90.00 | 87.31 | 87.89 | 86.60 | 86.34 | 87.45 | 87.57 | 87.95 | 89.72 | 88.80 | 87.00 | 86.77 | 88.34 | 89.09 | 88.95 | 82.69 | 87.80 | 88.37 | 86.71 | 87.20 | 87.81 | 86.79 | 86.79 | 85.62 | 86.48 | 81.10 | 86.97 | 90.25 | 85.75 | 89.20 | 88.88 | 86.00 | 87.38 | 86.76 | 89.61 | 87.94 |
| Gemma-2B-EuroBlocks | 81.59 | 78.97 | 84.21 | 76.68 | 82.73 | 83.14 | 81.63 | 84.63 | 83.15 | 79.42 | 84.05 | 72.58 | 79.73 | 84.97 | 40.50 | 82.13 | 67.79 | 80.53 | 78.36 | 84.90 | 87.43 | 82.98 | 72.29 | 68.68 | 58.55 | 83.13 | 86.15 | 82.78 | 86.79 | 83.14 | 84.61 | 78.18 | 75.37 | 80.89 | 78.38 | 84.38 | 84.35 | 83.88 | 85.77 | 86.85 | 86.31 | 88.24 | 88.12 | 84.79 | 84.90 | 82.51 | 86.32 | 88.29 | 54.78 | 86.53 | 85.83 | 85.41 | 85.18 | 86.77 | 85.78 | 84.99 | 81.65 | 81.78 | 67.27 | 85.92 | 89.07 | 84.14 | 88.07 | 87.17 | 85.23 | 85.09 | 83.95 | 87.57 | 84.77 |
| Gemma-7B-EuroBlocks |85.27 | 83.90 | 86.64 | 86.38 | 87.87 | 85.74 | 84.25 | 85.69 | 81.49 | 85.52 | 86.93 | 62.83 | 84.96 | 75.34 | 84.93 | 83.91 | 86.92 | 88.19 | 86.11 | 81.73 | 80.55 | 66.85 | 85.31 | 89.36 | 85.87 | 88.62 | 88.06 | 86.67 | 84.79 | 82.71 | 86.45 | 85.19 | 86.67 | 85.77 | 86.36 | 87.21 | 88.09 | 87.17 | 89.40 | 88.26 | 86.74 | 86.73 | 87.25 | 88.87 | 88.81 | 72.45 | 87.62 | 87.86 | 87.08 | 87.01 | 87.58 | 86.92 | 86.70 | 85.10 | 85.74 | 77.81 | 86.83 | 90.40 | 85.41 | 89.04 | 88.77 | 86.13 | 86.67 | 86.32 | 89.27 | 87.92 |
#### WMT-23
| Model | AVG | AVG en-xx | AVG xx-en | AVG xx-xx | en-de | en-cs | en-uk | en-ru | en-zh-cn | de-en | uk-en | ru-en | zh-cn-en | cs-uk |
|--------------------------------|------|-----------|-----------|-----------|-------|-------|-------|-------|----------|-------|-------|-------|----------|-------|
| EuroLLM-1.7B-Instruct | 82.91 | 83.20 | 81.77 | 86.82 | 81.56 | 85.23 | 81.30 | 82.47 | 83.61 | 85.03 | 84.06 | 85.25 | 81.31 | 78.83 | 79.42 | 86.82 |
| Gemma-2B-EuroBlocks | 79.96 | 79.01 | 80.86 | 81.15 | 76.82 | 76.05 | 77.92 | 78.98 | 81.58 | 82.73 | 82.71 | 83.99 | 80.35 | 78.27 | 78.99 | 81.15 |
| Gemma-7B-EuroBlocks | 82.76 | 82.26 | 82.70 | 85.98 | 81.37 | 82.42 | 81.54 | 82.18 | 82.90 | 83.17 | 84.29 | 85.70 | 82.46 | 79.73 | 81.33 | 85.98 |
#### WMT-24
| Model | AVG | AVG en-xx | AVG xx-xx | en-de | en-es-latam | en-cs | en-ru | en-uk | en-ja | en-zh-cn | en-hi | cs-uk | ja-zh-cn |
|---------|------|------|-------|----|---|-------|-------|--------|--------|-------|-------|-------|-----|
| EuroLLM-1.7B-Instruct|79.32 | 79.32 | 79.34 | 79.42 | 80.67 | 80.55 | 78.65 | 80.12 | 82.96 | 80.60 | 71.59 | 83.48 | 75.20 |
|Gemma-2B-EuroBlocks| 74.72 | 74.41 | 75.97 | 74.93 | 78.81 | 70.54 | 74.90 | 75.84 | 79.48 | 78.06 | 62.70 | 79.87 | 72.07 |
|Gemma-7B-EuroBlocks| 78.67 | 78.34 | 80.00 | 78.88 | 80.47 | 78.55 | 78.55 | 80.12 | 80.55 | 78.90 | 70.71 | 84.33 | 75.66 |
### General Benchmarks
We also compare EuroLLM-1.7B with [TinyLlama-v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) and [Gemma-2B](https://huggingface.co/google/gemma-2b) on 3 general benchmarks: Arc Challenge and Hellaswag.
For the non-english languages we use the [Okapi](https://aclanthology.org/2023.emnlp-demo.28.pdf) datasets.
Results show that EuroLLM-1.7B is superior to TinyLlama-v1.1 and similar to Gemma-2B on Hellaswag but worse on Arc Challenge. This can be due to the lower number of parameters of EuroLLM-1.7B (1.133B non-embedding parameters against 1.981B).
#### Arc Challenge
| Model | Average | English | German | Spanish | French | Italian | Portuguese | Chinese | Russian | Dutch | Arabic | Swedish | Hindi | Hungarian | Romanian | Ukrainian | Danish | Catalan |
|--------------------|---------|---------|--------|---------|--------|---------|------------|---------|---------|-------|--------|---------|--------|-----------|----------|-----------|--------|---------|
| EuroLLM-1.7B | 0.3496 | 0.4061 | 0.3464 | 0.3684 | 0.3627 | 0.3738 | 0.3855 | 0.3521 | 0.3208 | 0.3507 | 0.3045 | 0.3605 | 0.2928 | 0.3271 | 0.3488 | 0.3516 | 0.3513 | 0.3396 |
| TinyLlama-v1.1 | 0.2650 | 0.3712 | 0.2524 | 0.2795 | 0.2883 | 0.2652 | 0.2906 | 0.2410 | 0.2669 | 0.2404 | 0.2310 | 0.2687 | 0.2354 | 0.2449 | 0.2476 | 0.2524 | 0.2494 | 0.2796 |
| Gemma-2B | 0.3617 | 0.4846 | 0.3755 | 0.3940 | 0.4080 | 0.3687 | 0.3872 | 0.3726 | 0.3456 | 0.3328 | 0.3122 | 0.3519 | 0.2851 | 0.3039 | 0.3590 | 0.3601 | 0.3565 | 0.3516 |
#### Hellaswag
| Model | Average | English | German | Spanish | French | Italian | Portuguese | Russian | Dutch | Arabic | Swedish | Hindi | Hungarian | Romanian | Ukrainian | Danish | Catalan |
|--------------------|---------|---------|--------|---------|--------|---------|------------|---------|--------|--------|---------|--------|-----------|----------|-----------|--------|---------|
| EuroLLM-1.7B | 0.4744 | 0.4760 | 0.6057 | 0.4793 | 0.5337 | 0.5298 | 0.5085 | 0.5224 | 0.4654 | 0.4949 | 0.4104 | 0.4800 | 0.3655 | 0.4097 | 0.4606 | 0.436 | 0.4702 | 0.4445 |
| TinyLlama-v1.1 |0.3674 | 0.6248 | 0.3650 | 0.4137 | 0.4010 | 0.3780 | 0.3892 | 0.3494 | 0.3588 | 0.2880 | 0.3561 | 0.2841 | 0.3073 | 0.3267 | 0.3349 | 0.3408 | 0.3613 |
| Gemma-2B |0.4666 | 0.7165 | 0.4756 | 0.5414 | 0.5180 | 0.4841 | 0.5081 | 0.4664 | 0.4655 | 0.3868 | 0.4383 | 0.3413 | 0.3710 | 0.4316 | 0.4291 | 0.4471 | 0.4448 |
## Bias, Risks, and Limitations
EuroLLM-1.7B-Instruct has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Paper
Paper: [EuroLLM: Multilingual Language Models for Europe](https://huggingface.co/papers/2409.16235)
|
{"base_model": ["utter-project/EuroLLM-1.7B"], "language": ["en", "de", "es", "fr", "it", "pt", "pl", "nl", "tr", "sv", "cs", "el", "hu", "ro", "fi", "uk", "sl", "sk", "da", "lt", "lv", "et", "bg", "no", "ca", "hr", "ga", "mt", "gl", "zh", "ru", "ko", "ja", "ar", "hi"], "library_name": "transformers", "license": "apache-2.0"}
|
task
|
[
"TRANSLATION"
] | 46,743 |
poltextlab/xlm-roberta-large-english-party-cap-v3
|
poltextlab
|
text-classification
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"multilingual",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-03T11:51:45Z |
2025-02-26T16:06:19+00:00
| 0 | 0 |
---
language:
- multilingual
license: mit
metrics:
- accuracy
- f1-score
tags:
- zero-shot-classification
- text-classification
- pytorch
extra_gated_prompt: 'Our models are intended for academic use only. If you are not
affiliated with an academic institution, please provide a rationale for using our
models. Please allow us a few business days to manually review subscriptions.
If you use our models for your work or research, please cite this paper: Sebők,
M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large
Language Models for Multilingual Policy Topic Classification: The Babel Machine
Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434'
extra_gated_fields:
Name: text
Country: country
Institution: text
Institution Email: text
Please specify your academic use case: text
---
# xlm-roberta-large-english-party-cap-v3
## Model description
An `xlm-roberta-large` model finetuned on multilingual training data containing texts of the `party` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
We follow the master codebook of the Comparative Agendas Project, and all of our models use the same major topic codes.
## How to use the model
```python
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
model="poltextlab/xlm-roberta-large-english-party-cap-v3",
task="text-classification",
tokenizer=tokenizer,
use_fast=False,
token="<your_hf_read_only_token>"
)
text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
The translation table from the model results to CAP codes is the following:
```python
CAP_NUM_DICT = {
0: 1,
1: 2,
2: 3,
3: 4,
4: 5,
5: 6,
6: 7,
7: 8,
8: 9,
9: 10,
10: 12,
11: 13,
12: 14,
13: 15,
14: 16,
15: 17,
16: 18,
17: 19,
18: 20,
19: 21,
20: 23,
21: 999,
}
```
We have included a 999 label because our models are fine-tuned on training data containing the label 'None' in addition to the 21 CAP major policy topic codes, indicating that the given text contains no relevant policy content. We use the label 999 for these cases.
### Gated access
Due to the gated access, you must pass the `token` parameter when loading the model. In earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead.
## Model performance
The model was evaluated on a test set of 13879 examples (10% of the available data).<br>
Model accuracy is **0.73**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.69 | 0.73 | 0.71 | 1142 |
| 1 | 0.68 | 0.7 | 0.69 | 705 |
| 2 | 0.79 | 0.85 | 0.82 | 865 |
| 3 | 0.79 | 0.77 | 0.78 | 362 |
| 4 | 0.72 | 0.64 | 0.68 | 628 |
| 5 | 0.87 | 0.83 | 0.85 | 936 |
| 6 | 0.68 | 0.71 | 0.7 | 430 |
| 7 | 0.88 | 0.8 | 0.84 | 360 |
| 8 | 0.72 | 0.75 | 0.74 | 198 |
| 9 | 0.85 | 0.79 | 0.82 | 327 |
| 10 | 0.8 | 0.75 | 0.77 | 903 |
| 11 | 0.61 | 0.68 | 0.64 | 752 |
| 12 | 0.66 | 0.79 | 0.72 | 531 |
| 13 | 0.65 | 0.61 | 0.63 | 406 |
| 14 | 0.83 | 0.75 | 0.79 | 964 |
| 15 | 0.71 | 0.74 | 0.73 | 234 |
| 16 | 0.71 | 0.67 | 0.69 | 253 |
| 17 | 0.77 | 0.83 | 0.8 | 1637 |
| 18 | 0.71 | 0.59 | 0.65 | 910 |
| 19 | 0.73 | 0.74 | 0.73 | 366 |
| 20 | 0.76 | 0.61 | 0.68 | 77 |
| 21 | 0.59 | 0.6 | 0.59 | 893 |
| macro avg | 0.74 | 0.72 | 0.73 | 13879 |
| weighted avg | 0.74 | 0.73 | 0.73 | 13879 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. In order to run the model before `transformers==4.27` you need to install it manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| null |
Non_BioNLP
|
# xlm-roberta-large-english-party-cap-v3
## Model description
An `xlm-roberta-large` model finetuned on multilingual training data containing texts of the `party` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
We follow the master codebook of the Comparative Agendas Project, and all of our models use the same major topic codes.
## How to use the model
```python
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
model="poltextlab/xlm-roberta-large-english-party-cap-v3",
task="text-classification",
tokenizer=tokenizer,
use_fast=False,
token="<your_hf_read_only_token>"
)
text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
The translation table from the model results to CAP codes is the following:
```python
CAP_NUM_DICT = {
0: 1,
1: 2,
2: 3,
3: 4,
4: 5,
5: 6,
6: 7,
7: 8,
8: 9,
9: 10,
10: 12,
11: 13,
12: 14,
13: 15,
14: 16,
15: 17,
16: 18,
17: 19,
18: 20,
19: 21,
20: 23,
21: 999,
}
```
We have included a 999 label because our models are fine-tuned on training data containing the label 'None' in addition to the 21 CAP major policy topic codes, indicating that the given text contains no relevant policy content. We use the label 999 for these cases.
### Gated access
Due to the gated access, you must pass the `token` parameter when loading the model. In earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead.
## Model performance
The model was evaluated on a test set of 13879 examples (10% of the available data).<br>
Model accuracy is **0.73**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.69 | 0.73 | 0.71 | 1142 |
| 1 | 0.68 | 0.7 | 0.69 | 705 |
| 2 | 0.79 | 0.85 | 0.82 | 865 |
| 3 | 0.79 | 0.77 | 0.78 | 362 |
| 4 | 0.72 | 0.64 | 0.68 | 628 |
| 5 | 0.87 | 0.83 | 0.85 | 936 |
| 6 | 0.68 | 0.71 | 0.7 | 430 |
| 7 | 0.88 | 0.8 | 0.84 | 360 |
| 8 | 0.72 | 0.75 | 0.74 | 198 |
| 9 | 0.85 | 0.79 | 0.82 | 327 |
| 10 | 0.8 | 0.75 | 0.77 | 903 |
| 11 | 0.61 | 0.68 | 0.64 | 752 |
| 12 | 0.66 | 0.79 | 0.72 | 531 |
| 13 | 0.65 | 0.61 | 0.63 | 406 |
| 14 | 0.83 | 0.75 | 0.79 | 964 |
| 15 | 0.71 | 0.74 | 0.73 | 234 |
| 16 | 0.71 | 0.67 | 0.69 | 253 |
| 17 | 0.77 | 0.83 | 0.8 | 1637 |
| 18 | 0.71 | 0.59 | 0.65 | 910 |
| 19 | 0.73 | 0.74 | 0.73 | 366 |
| 20 | 0.76 | 0.61 | 0.68 | 77 |
| 21 | 0.59 | 0.6 | 0.59 | 893 |
| macro avg | 0.74 | 0.72 | 0.73 | 13879 |
| weighted avg | 0.74 | 0.73 | 0.73 | 13879 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. In order to run the model before `transformers==4.27` you need to install it manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
|
{"language": ["multilingual"], "license": "mit", "metrics": ["accuracy", "f1-score"], "tags": ["zero-shot-classification", "text-classification", "pytorch"], "extra_gated_prompt": "Our models are intended for academic use only. If you are not affiliated with an academic institution, please provide a rationale for using our models. Please allow us a few business days to manually review subscriptions.\nIf you use our models for your work or research, please cite this paper: Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434", "extra_gated_fields": {"Name": "text", "Country": "country", "Institution": "text", "Institution Email": "text", "Please specify your academic use case": "text"}}
|
task
|
[
"TRANSLATION"
] | 46,744 |
jzhong22/marian-finetuned-kde4-en-to-fr
|
jzhong22
|
translation
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-04T01:56:37Z |
2024-12-04T04:14:18+00:00
| 5 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-fr
datasets:
- kde4
library_name: transformers
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
{"base_model": "Helsinki-NLP/opus-mt-en-fr", "datasets": ["kde4"], "library_name": "transformers", "license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 46,745 |
rishabhjain16/whisper-medium
|
rishabhjain16
|
automatic-speech-recognition
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2024-02-12T16:07:00Z |
2024-02-12T16:07:01+00:00
| 12 | 0 |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- false
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-medium
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- type: wer
value: 2.9
name: Test WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 5.9
name: Test WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- type: wer
value: 53.87
name: Test WER
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output langauge and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe")
```
Which forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Medium on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
2.900409225488902
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-medium",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| null |
Non_BioNLP
|
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output langauge and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe")
```
Which forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Medium on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
2.900409225488902
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-medium",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
{"language": ["en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", false, "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su"], "license": "apache-2.0", "pipeline_tag": "automatic-speech-recognition", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "whisper-medium", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 2.9, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 5.9, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 11.0", "type": "mozilla-foundation/common_voice_11_0", "config": "hi", "split": "test", "args": {"language": "hi"}}, "metrics": [{"type": "wer", "value": 53.87, "name": "Test WER"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 46,746 |
gpustack/bce-embedding-base_v1-GGUF
|
gpustack
|
feature-extraction
|
[
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-31T15:37:54Z |
2024-11-01T03:02:40+00:00
| 645 | 0 |
---
language:
- en
- zh
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# bce-embedding-base_v1-GGUF
**Model creator**: [maidalun1020](https://huggingface.co/maidalun1020)<br/>
**Original model**: [maidalun1020/bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)<br/>
**GGUF quantization**: based on llama.cpp release [61408e7f](https://github.com/ggerganov/llama.cpp/commit/61408e7fad082dc44a11c8a9f1398da4837aad44)
---
<!--
* @Description:
* @Author: shenlei
* @Date: 2023-12-19 10:31:41
* @LastEditTime: 2024-01-09 23:52:00
* @LastEditors: shenlei
-->
<h1 align="center">BCEmbedding: Bilingual and Crosslingual Embedding for RAG</h1>
<p align="center">
<a href="https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE">
<img src="https://img.shields.io/badge/license-Apache--2.0-yellow">
</a>
<a href="https://twitter.com/YDopensource">
<img src="https://img.shields.io/badge/follow-%40YDOpenSource-1DA1F2?logo=twitter&style={style}">
</a>
</p>
最新、最详细的bce-embedding-base_v1相关信息,请移步(The latest "Updates" should be checked in):
<p align="left">
<a href="https://github.com/netease-youdao/BCEmbedding">GitHub</a>
</p>
## 主要特点(Key Features):
- 中英双语,以及中英跨语种能力(Bilingual and Crosslingual capability in English and Chinese);
- RAG优化,适配更多真实业务场景(RAG adaptation for more domains, including Education, Law, Finance, Medical, Literature, FAQ, Textbook, Wikipedia, etc.);
- 方便集成进langchain和llamaindex(Easy integrations for langchain and llamaindex in <a href="https://github.com/netease-youdao/BCEmbedding">BCEmbedding</a>)。
- `EmbeddingModel`不需要“精心设计”instruction,尽可能召回有用片段。 (No need for "instruction")
- **最佳实践(Best practice)** :embedding召回top50-100片段,reranker对这50-100片段精排,最后取top5-10片段。(1. Get top 50-100 passages with [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) for "`recall`"; 2. Rerank passages with [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) and get top 5-10 for "`precision`" finally. )
## News:
- `BCEmbedding`技术博客( **Technical Blog** ): [为RAG而生-BCEmbedding技术报告](https://zhuanlan.zhihu.com/p/681370855)
- Related link for **RerankerModel** : [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)
## Third-party Examples:
- RAG applications: [QAnything](https://github.com/netease-youdao/qanything), [HuixiangDou](https://github.com/InternLM/HuixiangDou), [ChatPDF](https://github.com/shibing624/ChatPDF).
- Efficient inference framework: [ChatLLM.cpp](https://github.com/foldl/chatllm.cpp), [Xinference](https://github.com/xorbitsai/inference), [mindnlp (Huawei GPU, 华为GPU)](https://github.com/mindspore-lab/mindnlp/tree/master/llm/inference/bce).


-----------------------------------------
<details open="open">
<summary>Click to Open Contents</summary>
- <a href="#-bilingual-and-crosslingual-superiority" target="_Self">🌐 Bilingual and Crosslingual Superiority</a>
- <a href="#-key-features" target="_Self">💡 Key Features</a>
- <a href="#-latest-updates" target="_Self">🚀 Latest Updates</a>
- <a href="#-model-list" target="_Self">🍎 Model List</a>
- <a href="#-manual" target="_Self">📖 Manual</a>
- <a href="#installation" target="_Self">Installation</a>
- <a href="#quick-start" target="_Self">Quick Start (`transformers`, `sentence-transformers`)</a>
- <a href="#integrations-for-rag-frameworks" target="_Self">Integrations for RAG Frameworks (`langchain`, `llama_index`)</a>
- <a href="#%EF%B8%8F-evaluation" target="_Self">⚙️ Evaluation</a>
- <a href="#evaluate-semantic-representation-by-mteb" target="_Self">Evaluate Semantic Representation by MTEB</a>
- <a href="#evaluate-rag-by-llamaindex" target="_Self">Evaluate RAG by LlamaIndex</a>
- <a href="#-leaderboard" target="_Self">📈 Leaderboard</a>
- <a href="#semantic-representation-evaluations-in-mteb" target="_Self">Semantic Representation Evaluations in MTEB</a>
- <a href="#rag-evaluations-in-llamaindex" target="_Self">RAG Evaluations in LlamaIndex</a>
- <a href="#-youdaos-bcembedding-api" target="_Self">🛠 Youdao's BCEmbedding API</a>
- <a href="#-wechat-group" target="_Self">🧲 WeChat Group</a>
- <a href="#%EF%B8%8F-citation" target="_Self">✏️ Citation</a>
- <a href="#-license" target="_Self">🔐 License</a>
- <a href="#-related-links" target="_Self">🔗 Related Links</a>
</details>
<br>
**B**ilingual and **C**rosslingual **Embedding** (`BCEmbedding`), developed by NetEase Youdao, encompasses `EmbeddingModel` and `RerankerModel`. The `EmbeddingModel` specializes in generating semantic vectors, playing a crucial role in semantic search and question-answering, and the `RerankerModel` excels at refining search results and ranking tasks.
`BCEmbedding` serves as the cornerstone of Youdao's Retrieval Augmented Generation (RAG) implmentation, notably [QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)], an open-source implementation widely integrated in various Youdao products like [Youdao Speed Reading](https://read.youdao.com/#/home) and [Youdao Translation](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation).
Distinguished for its bilingual and crosslingual proficiency, `BCEmbedding` excels in bridging Chinese and English linguistic gaps, which achieves
- **A high performence on <a href="#semantic-representation-evaluations-in-mteb">Semantic Representation Evaluations in MTEB</a>**;
- **A new benchmark in the realm of <a href="#rag-evaluations-in-llamaindex">RAG Evaluations in LlamaIndex</a>**.
`BCEmbedding`是由网易有道开发的双语和跨语种语义表征算法模型库,其中包含`EmbeddingModel`和`RerankerModel`两类基础模型。`EmbeddingModel`专门用于生成语义向量,在语义搜索和问答中起着关键作用,而`RerankerModel`擅长优化语义搜索结果和语义相关顺序精排。
`BCEmbedding`作为有道的检索增强生成式应用(RAG)的基石,特别是在[QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)]中发挥着重要作用。QAnything作为一个网易有道开源项目,在有道许多产品中有很好的应用实践,比如[有道速读](https://read.youdao.com/#/home)和[有道翻译](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation)
`BCEmbedding`以其出色的双语和跨语种能力而著称,在语义检索中消除中英语言之间的差异,从而实现:
- **强大的双语和跨语种语义表征能力【<a href="#semantic-representation-evaluations-in-mteb">基于MTEB的语义表征评测指标</a>】。**
- **基于LlamaIndex的RAG评测,表现SOTA【<a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>】。**
## 🌐 Bilingual and Crosslingual Superiority
Existing embedding models often encounter performance challenges in bilingual and crosslingual scenarios, particularly in Chinese, English and their crosslingual tasks. `BCEmbedding`, leveraging the strength of Youdao's translation engine, excels in delivering superior performance across monolingual, bilingual, and crosslingual settings.
`EmbeddingModel` supports ***Chinese (ch) and English (en)*** (more languages support will come soon), while `RerankerModel` supports ***Chinese (ch), English (en), Japanese (ja) and Korean (ko)***.
现有的单个语义表征模型在双语和跨语种场景中常常表现不佳,特别是在中文、英文及其跨语种任务中。`BCEmbedding`充分利用有道翻译引擎的优势,实现只需一个模型就可以在单语、双语和跨语种场景中表现出卓越的性能。
`EmbeddingModel`支持***中文和英文***(之后会支持更多语种);`RerankerModel`支持***中文,英文,日文和韩文***。
## 💡 Key Features
- **Bilingual and Crosslingual Proficiency**: Powered by Youdao's translation engine, excelling in Chinese, English and their crosslingual retrieval task, with upcoming support for additional languages.
- **RAG-Optimized**: Tailored for diverse RAG tasks including **translation, summarization, and question answering**, ensuring accurate **query understanding**. See <a href=#rag-evaluations-in-llamaindex>RAG Evaluations in LlamaIndex</a>.
- **Efficient and Precise Retrieval**: Dual-encoder for efficient retrieval of `EmbeddingModel` in first stage, and cross-encoder of `RerankerModel` for enhanced precision and deeper semantic analysis in second stage.
- **Broad Domain Adaptability**: Trained on diverse datasets for superior performance across various fields.
- **User-Friendly Design**: Instruction-free, versatile use for multiple tasks without specifying query instruction for each task.
- **Meaningful Reranking Scores**: `RerankerModel` provides relevant scores to improve result quality and optimize large language model performance.
- **Proven in Production**: Successfully implemented and validated in Youdao's products.
- **双语和跨语种能力**:基于有道翻译引擎的强大能力,我们的`BCEmbedding`具备强大的中英双语和跨语种语义表征能力。
- **RAG适配**:面向RAG做了针对性优化,可以适配大多数相关任务,比如**翻译,摘要,问答**等。此外,针对**问题理解**(query understanding)也做了针对优化,详见 <a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>。
- **高效且精确的语义检索**:`EmbeddingModel`采用双编码器,可以在第一阶段实现高效的语义检索。`RerankerModel`采用交叉编码器,可以在第二阶段实现更高精度的语义顺序精排。
- **更好的领域泛化性**:为了在更多场景实现更好的效果,我们收集了多种多样的领域数据。
- **用户友好**:语义检索时不需要特殊指令前缀。也就是,你不需要为各种任务绞尽脑汁设计指令前缀。
- **有意义的重排序分数**:`RerankerModel`可以提供有意义的语义相关性分数(不仅仅是排序),可以用于过滤无意义文本片段,提高大模型生成效果。
- **产品化检验**:`BCEmbedding`已经被有道众多真实产品检验。
## 🚀 Latest Updates
- ***2024-01-03***: **Model Releases** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) and [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) are available.
- ***2024-01-03***: **Eval Datasets** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - Evaluate the performence of RAG, using [LlamaIndex](https://github.com/run-llama/llama_index).
- ***2024-01-03***: **Eval Datasets** [[Details](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - Evaluate the performence of crosslingual semantic representation, using [MTEB](https://github.com/embeddings-benchmark/mteb).
- ***2024-01-03***: **模型发布** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)和[bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)已发布.
- ***2024-01-03***: **RAG评测数据** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - 基于[LlamaIndex](https://github.com/run-llama/llama_index)的RAG评测数据已发布。
- ***2024-01-03***: **跨语种语义表征评测数据** [[详情](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - 基于[MTEB](https://github.com/embeddings-benchmark/mteb)的跨语种评测数据已发布.
## 🍎 Model List
| Model Name | Model Type | Languages | Parameters | Weights |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|
| bce-embedding-base_v1 | `EmbeddingModel` | ch, en | 279M | [download](https://huggingface.co/maidalun1020/bce-embedding-base_v1) |
| bce-reranker-base_v1 | `RerankerModel` | ch, en, ja, ko | 279M | [download](https://huggingface.co/maidalun1020/bce-reranker-base_v1) |
## 📖 Manual
### Installation
First, create a conda environment and activate it.
```bash
conda create --name bce python=3.10 -y
conda activate bce
```
Then install `BCEmbedding` for minimal installation:
```bash
pip install BCEmbedding==0.1.1
```
Or install from source:
```bash
git clone [email protected]:netease-youdao/BCEmbedding.git
cd BCEmbedding
pip install -v -e .
```
### Quick Start
#### 1. Based on `BCEmbedding`
Use `EmbeddingModel`, and `cls` [pooler](./BCEmbedding/models/embedding.py#L24) is default.
```python
from BCEmbedding import EmbeddingModel
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init embedding model
model = EmbeddingModel(model_name_or_path="maidalun1020/bce-embedding-base_v1")
# extract embeddings
embeddings = model.encode(sentences)
```
Use `RerankerModel` to calculate relevant scores and rerank:
```python
from BCEmbedding import RerankerModel
# your query and corresponding passages
query = 'input_query'
passages = ['passage_0', 'passage_1', ...]
# construct sentence pairs
sentence_pairs = [[query, passage] for passage in passages]
# init reranker model
model = RerankerModel(model_name_or_path="maidalun1020/bce-reranker-base_v1")
# method 0: calculate scores of sentence pairs
scores = model.compute_score(sentence_pairs)
# method 1: rerank passages
rerank_results = model.rerank(query, passages)
```
NOTE:
- In [`RerankerModel.rerank`](./BCEmbedding/models/reranker.py#L137) method, we provide an advanced preproccess that we use in production for making `sentence_pairs`, when "passages" are very long.
#### 2. Based on `transformers`
For `EmbeddingModel`:
```python
from transformers import AutoModel, AutoTokenizer
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-embedding-base_v1')
model = AutoModel.from_pretrained('maidalun1020/bce-embedding-base_v1')
device = 'cuda' # if no GPU, set "cpu"
model.to(device)
# get inputs
inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(self.device) for k, v in inputs.items()}
# get embeddings
outputs = model(**inputs_on_device, return_dict=True)
embeddings = outputs.last_hidden_state[:, 0] # cls pooler
embeddings = embeddings / embeddings.norm(dim=1, keepdim=True) # normalize
```
For `RerankerModel`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-reranker-base_v1')
model = AutoModelForSequenceClassification.from_pretrained('maidalun1020/bce-reranker-base_v1')
device = 'cuda' # if no GPU, set "cpu"
model.to(device)
# get inputs
inputs = tokenizer(sentence_pairs, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()}
# calculate scores
scores = model(**inputs_on_device, return_dict=True).logits.view(-1,).float()
scores = torch.sigmoid(scores)
```
#### 3. Based on `sentence_transformers`
For `EmbeddingModel`:
```python
from sentence_transformers import SentenceTransformer
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init embedding model
## New update for sentence-trnasformers. So clean up your "`SENTENCE_TRANSFORMERS_HOME`/maidalun1020_bce-embedding-base_v1" or "~/.cache/torch/sentence_transformers/maidalun1020_bce-embedding-base_v1" first for downloading new version.
model = SentenceTransformer("maidalun1020/bce-embedding-base_v1")
# extract embeddings
embeddings = model.encode(sentences, normalize_embeddings=True)
```
For `RerankerModel`:
```python
from sentence_transformers import CrossEncoder
# init reranker model
model = CrossEncoder('maidalun1020/bce-reranker-base_v1', max_length=512)
# calculate scores of sentence pairs
scores = model.predict(sentence_pairs)
```
### Integrations for RAG Frameworks
#### 1. Used in `langchain`
```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.vectorstores.utils import DistanceStrategy
query = 'apples'
passages = [
'I like apples',
'I like oranges',
'Apples and oranges are fruits'
]
# init embedding model
model_name = 'maidalun1020/bce-embedding-base_v1'
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'batch_size': 64, 'normalize_embeddings': True, 'show_progress_bar': False}
embed_model = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
# example #1. extract embeddings
query_embedding = embed_model.embed_query(query)
passages_embeddings = embed_model.embed_documents(passages)
# example #2. langchain retriever example
faiss_vectorstore = FAISS.from_texts(passages, embed_model, distance_strategy=DistanceStrategy.MAX_INNER_PRODUCT)
retriever = faiss_vectorstore.as_retriever(search_type="similarity", search_kwargs={"score_threshold": 0.5, "k": 3})
related_passages = retriever.get_relevant_documents(query)
```
#### 2. Used in `llama_index`
```python
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index import VectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.llms import OpenAI
query = 'apples'
passages = [
'I like apples',
'I like oranges',
'Apples and oranges are fruits'
]
# init embedding model
model_args = {'model_name': 'maidalun1020/bce-embedding-base_v1', 'max_length': 512, 'embed_batch_size': 64, 'device': 'cuda'}
embed_model = HuggingFaceEmbedding(**model_args)
# example #1. extract embeddings
query_embedding = embed_model.get_query_embedding(query)
passages_embeddings = embed_model.get_text_embedding_batch(passages)
# example #2. rag example
llm = OpenAI(model='gpt-3.5-turbo-0613', api_key=os.environ.get('OPENAI_API_KEY'), api_base=os.environ.get('OPENAI_BASE_URL'))
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
documents = SimpleDirectoryReader(input_files=["BCEmbedding/tools/eval_rag/eval_pdfs/Comp_en_llama2.pdf"]).load_data()
node_parser = SimpleNodeParser.from_defaults(chunk_size=512)
nodes = node_parser.get_nodes_from_documents(documents[0:36])
index = VectorStoreIndex(nodes, service_context=service_context)
query_engine = index.as_query_engine()
response = query_engine.query("What is llama?")
```
## ⚙️ Evaluation
### Evaluate Semantic Representation by MTEB
We provide evaluateion tools for `embedding` and `reranker` models, based on [MTEB](https://github.com/embeddings-benchmark/mteb) and [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB).
我们基于[MTEB](https://github.com/embeddings-benchmark/mteb)和[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB),提供`embedding`和`reranker`模型的语义表征评测工具。
#### 1. Embedding Models
Just run following cmd to evaluate `your_embedding_model` (e.g. `maidalun1020/bce-embedding-base_v1`) in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`).
运行下面命令评测`your_embedding_model`(比如,`maidalun1020/bce-embedding-base_v1`)。评测任务将会在**双语和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测:
```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path maidalun1020/bce-embedding-base_v1 --pooler cls
```
The total evaluation tasks contain ***114 datastes*** of **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"**.
评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的 ***114个数据集***。
***NOTE:***
- **All models are evaluated in their recommended pooling method (`pooler`)**.
- `mean` pooler: "jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large" and "gte-large".
- `cls` pooler: Other models.
- "jina-embeddings-v2-base-en" model should be loaded with `trust_remote_code`.
```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path {moka-ai/m3e-base | moka-ai/m3e-large} --pooler mean
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path jinaai/jina-embeddings-v2-base-en --pooler mean --trust_remote_code
```
***注意:***
- 所有模型的评测采用各自推荐的`pooler`。"jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large"和"gte-large"的 `pooler`采用`mean`,其他模型的`pooler`采用`cls`.
- "jina-embeddings-v2-base-en"模型在载入时需要`trust_remote_code`。
#### 2. Reranker Models
Run following cmd to evaluate `your_reranker_model` (e.g. "maidalun1020/bce-reranker-base_v1") in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`).
运行下面命令评测`your_reranker_model`(比如,`maidalun1020/bce-reranker-base_v1`)。评测任务将会在 **双语种和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测:
```bash
python BCEmbedding/tools/eval_mteb/eval_reranker_mteb.py --model_name_or_path maidalun1020/bce-reranker-base_v1
```
The evaluation tasks contain ***12 datastes*** of **"Reranking"**.
评测包含 **"Reranking"** 任务的 ***12个数据集***。
#### 3. Metrics Visualization Tool
We proveide a one-click script to sumarize evaluation results of `embedding` and `reranker` models as [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md) and [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
我们提供了`embedding`和`reranker`模型的指标可视化一键脚本,输出一个markdown文件,详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)和[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)。
```bash
python BCEmbedding/evaluation/mteb/summarize_eval_results.py --results_dir {your_embedding_results_dir | your_reranker_results_dir}
```
### Evaluate RAG by LlamaIndex
[LlamaIndex](https://github.com/run-llama/llama_index) is a famous data framework for LLM-based applications, particularly in RAG. Recently, the [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) has evaluated the popular embedding and reranker models in RAG pipeline and attract great attention. Now, we follow its pipeline to evaluate our `BCEmbedding`.
[LlamaIndex](https://github.com/run-llama/llama_index)是一个著名的大模型应用的开源工具,在RAG中很受欢迎。最近,[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)对市面上常用的embedding和reranker模型进行RAG流程的评测,吸引广泛关注。下面我们按照该评测流程验证`BCEmbedding`在RAG中的效果。
First, install LlamaIndex:
```bash
pip install llama-index==0.9.22
```
#### 1. Metrics Definition
- Hit Rate:
Hit rate calculates the fraction of queries where the correct answer is found within the top-k retrieved documents. In simpler terms, it's about how often our system gets it right within the top few guesses. ***The larger, the better.***
- Mean Reciprocal Rank (MRR):
For each query, MRR evaluates the system's accuracy by looking at the rank of the highest-placed relevant document. Specifically, it's the average of the reciprocals of these ranks across all the queries. So, if the first relevant document is the top result, the reciprocal rank is 1; if it's second, the reciprocal rank is 1/2, and so on. ***The larger, the better.***
- 命中率(Hit Rate)
命中率计算的是在检索的前k个文档中找到正确答案的查询所占的比例。简单来说,它反映了我们的系统在前几次猜测中答对的频率。***该指标越大越好。***
- 平均倒数排名(Mean Reciprocal Rank,MRR)
对于每个查询,MRR通过查看最高排名的相关文档的排名来评估系统的准确性。具体来说,它是在所有查询中这些排名的倒数的平均值。因此,如果第一个相关文档是排名最靠前的结果,倒数排名就是1;如果是第二个,倒数排名就是1/2,依此类推。***该指标越大越好。***
#### 2. Reproduce [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
In order to compare our `BCEmbedding` with other embedding and reranker models fairly, we provide a one-click script to reproduce results of the LlamaIndex Blog, including our `BCEmbedding`:
为了公平起见,运行下面脚本,复现LlamaIndex博客的结果,将`BCEmbedding`与其他embedding和reranker模型进行对比分析:
```bash
# There should be two GPUs available at least.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_reproduce.py
```
Then, sumarize the evaluation results by:
```bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_reproduce_results
```
Results Reproduced from the LlamaIndex Blog can be checked in ***[Reproduced Summary of RAG Evaluation](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***, with some obvious ***conclusions***:
- In `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
- With fixing the embedding model, our `bce-reranker-base_v1` achieves the best performence.
- ***The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA.***
输出的指标汇总详见 ***[LlamaIndex RAG评测结果复现](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***。从该复现结果中,可以看出:
- 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`比其他embedding模型效果都要好。
- 在固定embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好。
- ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***
#### 3. Broad Domain Adaptability
The evaluation of [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) is **monolingual, small amount of data, and specific domain** (just including "llama2" paper). In order to evaluate the **broad domain adaptability, bilingual and crosslingual capability**, we follow the blog to build a multiple domains evaluation dataset (includding "Computer Science", "Physics", "Biology", "Economics", "Math", and "Quantitative Finance"), named [CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset), **by OpenAI `gpt-4-1106-preview` for high quality**.
在上述的[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)的评测数据只用了“llama2”这一篇文章,该评测是 **单语种,小数据量,特定领域** 的。为了兼容更真实更广的用户使用场景,评测算法模型的 **领域泛化性,双语和跨语种能力**,我们按照该博客的方法构建了一个多领域(计算机科学,物理学,生物学,经济学,数学,量化金融等)的双语种、跨语种评测数据,[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)。**为了保证构建数据的高质量,我们采用OpenAI的`gpt-4-1106-preview`。**
First, run following cmd to evaluate the most popular and powerful embedding and reranker models:
```bash
# There should be two GPUs available at least.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_multiple_domains.py
```
Then, run the following script to sumarize the evaluation results:
```bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_results
```
The summary of multiple domains evaluations can be seen in <a href=#1-multiple-domains-scenarios>Multiple Domains Scenarios</a>.
## 📈 Leaderboard
### Semantic Representation Evaluations in MTEB
#### 1. Embedding Models
| Model | Dimensions | Pooler | Instructions | Retrieval (47) | STS (19) | PairClassification (5) | Classification (21) | Reranking (12) | Clustering (15) | ***AVG*** (119) |
|:--------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| bge-base-en-v1.5 | 768 | `cls` | Need | 37.14 | 55.06 | 75.45 | 59.73 | 43.00 | 37.74 | 47.19 |
| bge-base-zh-v1.5 | 768 | `cls` | Need | 47.63 | 63.72 | 77.40 | 63.38 | 54.95 | 32.56 | 53.62 |
| bge-large-en-v1.5 | 1024 | `cls` | Need | 37.18 | 54.09 | 75.00 | 59.24 | 42.47 | 37.32 | 46.80 |
| bge-large-zh-v1.5 | 1024 | `cls` | Need | 47.58 | 64.73 | 79.14 | 64.19 | 55.98 | 33.26 | 54.23 |
| e5-large-v2 | 1024 | `mean` | Need | 35.98 | 55.23 | 75.28 | 59.53 | 42.12 | 36.51 | 46.52 |
| gte-large | 1024 | `mean` | Free | 36.68 | 55.22 | 74.29 | 57.73 | 42.44 | 38.51 | 46.67 |
| gte-large-zh | 1024 | `cls` | Free | 41.15 | 64.62 | 77.58 | 62.04 | 55.62 | 33.03 | 51.51 |
| jina-embeddings-v2-base-en | 768 | `mean` | Free | 31.58 | 54.28 | 74.84 | 58.42 | 41.16 | 34.67 | 44.29 |
| m3e-base | 768 | `mean` | Free | 46.29 | 63.93 | 71.84 | 64.08 | 52.38 | 37.84 | 53.54 |
| m3e-large | 1024 | `mean` | Free | 34.85 | 59.74 | 67.69 | 60.07 | 48.99 | 31.62 | 46.78 |
| multilingual-e5-base | 768 | `mean` | Need | 54.73 | 65.49 | 76.97 | 69.72 | 55.01 | 38.44 | 58.34 |
| multilingual-e5-large | 1024 | `mean` | Need | 56.76 | 66.79 | 78.80 | 71.61 | 56.49 | 43.09 | 60.50 |
| ***bce-embedding-base_v1*** | 768 | `cls` | Free | 57.60 | 65.73 | 74.96 | 69.00 | 57.29 | 38.95 | 59.43 |
***NOTE:***
- Our ***bce-embedding-base_v1*** outperforms other opensource embedding models with comparable model size.
- ***114 datastes*** of **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"** in `["en", "zh", "en-zh", "zh-en"]` setting.
- The [crosslingual evaluation datasets](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py) we released belong to `Retrieval` task.
- More evaluation details please check [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md).
***要点:***
- 对比其他开源的相同规模的embedding模型,***bce-embedding-base_v1*** 表现最好,效果比最好的large模型稍差。
- 评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的共 ***114个数据集***。
- 我们开源的[跨语种语义表征评测数据](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)属于`Retrieval`任务。
- 更详细的评测结果详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)。
#### 2. Reranker Models
| Model | Reranking (12) | ***AVG*** (12) |
| :--------------------------------- | :-------------: | :--------------------: |
| bge-reranker-base | 59.04 | 59.04 |
| bge-reranker-large | 60.86 | 60.86 |
| ***bce-reranker-base_v1*** | **61.29** | ***61.29*** |
***NOTE:***
- Our ***bce-reranker-base_v1*** outperforms other opensource reranker models.
- ***12 datastes*** of **"Reranking"** in `["en", "zh", "en-zh", "zh-en"]` setting.
- More evaluation details please check [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
***要点:***
- ***bce-reranker-base_v1*** 优于其他开源reranker模型。
- 评测包含 **"Reranking"** 任务的 ***12个数据集***。
- 更详细的评测结果详见[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)
### RAG Evaluations in LlamaIndex
#### 1. Multiple Domains Scenarios

***NOTE:***
- Evaluated in **`["en", "zh", "en-zh", "zh-en"]` setting**.
- In `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
- With fixing the embedding model, our `bce-reranker-base_v1` achieves the best performence.
- **The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA**.
***要点:***
- 评测是在`["en", "zh", "en-zh", "zh-en"]`设置下。
- 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`优于其他Embedding模型,包括开源和闭源。
- 在固定Embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好,包括开源和闭源。
- ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***
## 🛠 Youdao's BCEmbedding API
For users who prefer a hassle-free experience without the need to download and configure the model on their own systems, `BCEmbedding` is readily accessible through Youdao's API. This option offers a streamlined and efficient way to integrate BCEmbedding into your projects, bypassing the complexities of manual setup and maintenance. Detailed instructions and comprehensive API documentation are available at [Youdao BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html). Here, you'll find all the necessary guidance to easily implement `BCEmbedding` across a variety of use cases, ensuring a smooth and effective integration for optimal results.
对于那些更喜欢直接调用api的用户,有道提供方便的`BCEmbedding`调用api。该方式是一种简化和高效的方式,将`BCEmbedding`集成到您的项目中,避开了手动设置和系统维护的复杂性。更详细的api调用接口说明详见[有道BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html)。
## 🧲 WeChat Group
Welcome to scan the QR code below and join the WeChat group.
欢迎大家扫码加入官方微信交流群。

## ✏️ Citation
If you use `BCEmbedding` in your research or project, please feel free to cite and star it:
如果在您的研究或任何项目中使用本工作,烦请按照下方进行引用,并打个小星星~
```
@misc{youdao_bcembedding_2023,
title={BCEmbedding: Bilingual and Crosslingual Embedding for RAG},
author={NetEase Youdao, Inc.},
year={2023},
howpublished={\url{https://github.com/netease-youdao/BCEmbedding}}
}
```
## 🔐 License
`BCEmbedding` is licensed under [Apache 2.0 License](https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE)
## 🔗 Related Links
[Netease Youdao - QAnything](https://github.com/netease-youdao/qanything)
[FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)
[MTEB](https://github.com/embeddings-benchmark/mteb)
[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
[LLama Index](https://github.com/run-llama/llama_index) | [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
| null |
Non_BioNLP
|
# bce-embedding-base_v1-GGUF
**Model creator**: [maidalun1020](https://huggingface.co/maidalun1020)<br/>
**Original model**: [maidalun1020/bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)<br/>
**GGUF quantization**: based on llama.cpp release [61408e7f](https://github.com/ggerganov/llama.cpp/commit/61408e7fad082dc44a11c8a9f1398da4837aad44)
---
<!--
* @Description:
* @Author: shenlei
* @Date: 2023-12-19 10:31:41
* @LastEditTime: 2024-01-09 23:52:00
* @LastEditors: shenlei
-->
<h1 align="center">BCEmbedding: Bilingual and Crosslingual Embedding for RAG</h1>
<p align="center">
<a href="https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE">
<img src="https://img.shields.io/badge/license-Apache--2.0-yellow">
</a>
<a href="https://twitter.com/YDopensource">
<img src="https://img.shields.io/badge/follow-%40YDOpenSource-1DA1F2?logo=twitter&style={style}">
</a>
</p>
最新、最详细的bce-embedding-base_v1相关信息,请移步(The latest "Updates" should be checked in):
<p align="left">
<a href="https://github.com/netease-youdao/BCEmbedding">GitHub</a>
</p>
## 主要特点(Key Features):
- 中英双语,以及中英跨语种能力(Bilingual and Crosslingual capability in English and Chinese);
- RAG优化,适配更多真实业务场景(RAG adaptation for more domains, including Education, Law, Finance, Medical, Literature, FAQ, Textbook, Wikipedia, etc.);
- 方便集成进langchain和llamaindex(Easy integrations for langchain and llamaindex in <a href="https://github.com/netease-youdao/BCEmbedding">BCEmbedding</a>)。
- `EmbeddingModel`不需要“精心设计”instruction,尽可能召回有用片段。 (No need for "instruction")
- **最佳实践(Best practice)** :embedding召回top50-100片段,reranker对这50-100片段精排,最后取top5-10片段。(1. Get top 50-100 passages with [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) for "`recall`"; 2. Rerank passages with [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) and get top 5-10 for "`precision`" finally. )
## News:
- `BCEmbedding`技术博客( **Technical Blog** ): [为RAG而生-BCEmbedding技术报告](https://zhuanlan.zhihu.com/p/681370855)
- Related link for **RerankerModel** : [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)
## Third-party Examples:
- RAG applications: [QAnything](https://github.com/netease-youdao/qanything), [HuixiangDou](https://github.com/InternLM/HuixiangDou), [ChatPDF](https://github.com/shibing624/ChatPDF).
- Efficient inference framework: [ChatLLM.cpp](https://github.com/foldl/chatllm.cpp), [Xinference](https://github.com/xorbitsai/inference), [mindnlp (Huawei GPU, 华为GPU)](https://github.com/mindspore-lab/mindnlp/tree/master/llm/inference/bce).


-----------------------------------------
<details open="open">
<summary>Click to Open Contents</summary>
- <a href="#-bilingual-and-crosslingual-superiority" target="_Self">🌐 Bilingual and Crosslingual Superiority</a>
- <a href="#-key-features" target="_Self">💡 Key Features</a>
- <a href="#-latest-updates" target="_Self">🚀 Latest Updates</a>
- <a href="#-model-list" target="_Self">🍎 Model List</a>
- <a href="#-manual" target="_Self">📖 Manual</a>
- <a href="#installation" target="_Self">Installation</a>
- <a href="#quick-start" target="_Self">Quick Start (`transformers`, `sentence-transformers`)</a>
- <a href="#integrations-for-rag-frameworks" target="_Self">Integrations for RAG Frameworks (`langchain`, `llama_index`)</a>
- <a href="#%EF%B8%8F-evaluation" target="_Self">⚙️ Evaluation</a>
- <a href="#evaluate-semantic-representation-by-mteb" target="_Self">Evaluate Semantic Representation by MTEB</a>
- <a href="#evaluate-rag-by-llamaindex" target="_Self">Evaluate RAG by LlamaIndex</a>
- <a href="#-leaderboard" target="_Self">📈 Leaderboard</a>
- <a href="#semantic-representation-evaluations-in-mteb" target="_Self">Semantic Representation Evaluations in MTEB</a>
- <a href="#rag-evaluations-in-llamaindex" target="_Self">RAG Evaluations in LlamaIndex</a>
- <a href="#-youdaos-bcembedding-api" target="_Self">🛠 Youdao's BCEmbedding API</a>
- <a href="#-wechat-group" target="_Self">🧲 WeChat Group</a>
- <a href="#%EF%B8%8F-citation" target="_Self">✏️ Citation</a>
- <a href="#-license" target="_Self">🔐 License</a>
- <a href="#-related-links" target="_Self">🔗 Related Links</a>
</details>
<br>
**B**ilingual and **C**rosslingual **Embedding** (`BCEmbedding`), developed by NetEase Youdao, encompasses `EmbeddingModel` and `RerankerModel`. The `EmbeddingModel` specializes in generating semantic vectors, playing a crucial role in semantic search and question-answering, and the `RerankerModel` excels at refining search results and ranking tasks.
`BCEmbedding` serves as the cornerstone of Youdao's Retrieval Augmented Generation (RAG) implmentation, notably [QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)], an open-source implementation widely integrated in various Youdao products like [Youdao Speed Reading](https://read.youdao.com/#/home) and [Youdao Translation](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation).
Distinguished for its bilingual and crosslingual proficiency, `BCEmbedding` excels in bridging Chinese and English linguistic gaps, which achieves
- **A high performence on <a href="#semantic-representation-evaluations-in-mteb">Semantic Representation Evaluations in MTEB</a>**;
- **A new benchmark in the realm of <a href="#rag-evaluations-in-llamaindex">RAG Evaluations in LlamaIndex</a>**.
`BCEmbedding`是由网易有道开发的双语和跨语种语义表征算法模型库,其中包含`EmbeddingModel`和`RerankerModel`两类基础模型。`EmbeddingModel`专门用于生成语义向量,在语义搜索和问答中起着关键作用,而`RerankerModel`擅长优化语义搜索结果和语义相关顺序精排。
`BCEmbedding`作为有道的检索增强生成式应用(RAG)的基石,特别是在[QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)]中发挥着重要作用。QAnything作为一个网易有道开源项目,在有道许多产品中有很好的应用实践,比如[有道速读](https://read.youdao.com/#/home)和[有道翻译](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation)
`BCEmbedding`以其出色的双语和跨语种能力而著称,在语义检索中消除中英语言之间的差异,从而实现:
- **强大的双语和跨语种语义表征能力【<a href="#semantic-representation-evaluations-in-mteb">基于MTEB的语义表征评测指标</a>】。**
- **基于LlamaIndex的RAG评测,表现SOTA【<a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>】。**
## 🌐 Bilingual and Crosslingual Superiority
Existing embedding models often encounter performance challenges in bilingual and crosslingual scenarios, particularly in Chinese, English and their crosslingual tasks. `BCEmbedding`, leveraging the strength of Youdao's translation engine, excels in delivering superior performance across monolingual, bilingual, and crosslingual settings.
`EmbeddingModel` supports ***Chinese (ch) and English (en)*** (more languages support will come soon), while `RerankerModel` supports ***Chinese (ch), English (en), Japanese (ja) and Korean (ko)***.
现有的单个语义表征模型在双语和跨语种场景中常常表现不佳,特别是在中文、英文及其跨语种任务中。`BCEmbedding`充分利用有道翻译引擎的优势,实现只需一个模型就可以在单语、双语和跨语种场景中表现出卓越的性能。
`EmbeddingModel`支持***中文和英文***(之后会支持更多语种);`RerankerModel`支持***中文,英文,日文和韩文***。
## 💡 Key Features
- **Bilingual and Crosslingual Proficiency**: Powered by Youdao's translation engine, excelling in Chinese, English and their crosslingual retrieval task, with upcoming support for additional languages.
- **RAG-Optimized**: Tailored for diverse RAG tasks including **translation, summarization, and question answering**, ensuring accurate **query understanding**. See <a href=#rag-evaluations-in-llamaindex>RAG Evaluations in LlamaIndex</a>.
- **Efficient and Precise Retrieval**: Dual-encoder for efficient retrieval of `EmbeddingModel` in first stage, and cross-encoder of `RerankerModel` for enhanced precision and deeper semantic analysis in second stage.
- **Broad Domain Adaptability**: Trained on diverse datasets for superior performance across various fields.
- **User-Friendly Design**: Instruction-free, versatile use for multiple tasks without specifying query instruction for each task.
- **Meaningful Reranking Scores**: `RerankerModel` provides relevant scores to improve result quality and optimize large language model performance.
- **Proven in Production**: Successfully implemented and validated in Youdao's products.
- **双语和跨语种能力**:基于有道翻译引擎的强大能力,我们的`BCEmbedding`具备强大的中英双语和跨语种语义表征能力。
- **RAG适配**:面向RAG做了针对性优化,可以适配大多数相关任务,比如**翻译,摘要,问答**等。此外,针对**问题理解**(query understanding)也做了针对优化,详见 <a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>。
- **高效且精确的语义检索**:`EmbeddingModel`采用双编码器,可以在第一阶段实现高效的语义检索。`RerankerModel`采用交叉编码器,可以在第二阶段实现更高精度的语义顺序精排。
- **更好的领域泛化性**:为了在更多场景实现更好的效果,我们收集了多种多样的领域数据。
- **用户友好**:语义检索时不需要特殊指令前缀。也就是,你不需要为各种任务绞尽脑汁设计指令前缀。
- **有意义的重排序分数**:`RerankerModel`可以提供有意义的语义相关性分数(不仅仅是排序),可以用于过滤无意义文本片段,提高大模型生成效果。
- **产品化检验**:`BCEmbedding`已经被有道众多真实产品检验。
## 🚀 Latest Updates
- ***2024-01-03***: **Model Releases** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) and [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) are available.
- ***2024-01-03***: **Eval Datasets** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - Evaluate the performence of RAG, using [LlamaIndex](https://github.com/run-llama/llama_index).
- ***2024-01-03***: **Eval Datasets** [[Details](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - Evaluate the performence of crosslingual semantic representation, using [MTEB](https://github.com/embeddings-benchmark/mteb).
- ***2024-01-03***: **模型发布** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)和[bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)已发布.
- ***2024-01-03***: **RAG评测数据** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - 基于[LlamaIndex](https://github.com/run-llama/llama_index)的RAG评测数据已发布。
- ***2024-01-03***: **跨语种语义表征评测数据** [[详情](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - 基于[MTEB](https://github.com/embeddings-benchmark/mteb)的跨语种评测数据已发布.
## 🍎 Model List
| Model Name | Model Type | Languages | Parameters | Weights |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|
| bce-embedding-base_v1 | `EmbeddingModel` | ch, en | 279M | [download](https://huggingface.co/maidalun1020/bce-embedding-base_v1) |
| bce-reranker-base_v1 | `RerankerModel` | ch, en, ja, ko | 279M | [download](https://huggingface.co/maidalun1020/bce-reranker-base_v1) |
## 📖 Manual
### Installation
First, create a conda environment and activate it.
```bash
conda create --name bce python=3.10 -y
conda activate bce
```
Then install `BCEmbedding` for minimal installation:
```bash
pip install BCEmbedding==0.1.1
```
Or install from source:
```bash
git clone [email protected]:netease-youdao/BCEmbedding.git
cd BCEmbedding
pip install -v -e .
```
### Quick Start
#### 1. Based on `BCEmbedding`
Use `EmbeddingModel`, and `cls` [pooler](./BCEmbedding/models/embedding.py#L24) is default.
```python
from BCEmbedding import EmbeddingModel
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init embedding model
model = EmbeddingModel(model_name_or_path="maidalun1020/bce-embedding-base_v1")
# extract embeddings
embeddings = model.encode(sentences)
```
Use `RerankerModel` to calculate relevant scores and rerank:
```python
from BCEmbedding import RerankerModel
# your query and corresponding passages
query = 'input_query'
passages = ['passage_0', 'passage_1', ...]
# construct sentence pairs
sentence_pairs = [[query, passage] for passage in passages]
# init reranker model
model = RerankerModel(model_name_or_path="maidalun1020/bce-reranker-base_v1")
# method 0: calculate scores of sentence pairs
scores = model.compute_score(sentence_pairs)
# method 1: rerank passages
rerank_results = model.rerank(query, passages)
```
NOTE:
- In [`RerankerModel.rerank`](./BCEmbedding/models/reranker.py#L137) method, we provide an advanced preproccess that we use in production for making `sentence_pairs`, when "passages" are very long.
#### 2. Based on `transformers`
For `EmbeddingModel`:
```python
from transformers import AutoModel, AutoTokenizer
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-embedding-base_v1')
model = AutoModel.from_pretrained('maidalun1020/bce-embedding-base_v1')
device = 'cuda' # if no GPU, set "cpu"
model.to(device)
# get inputs
inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(self.device) for k, v in inputs.items()}
# get embeddings
outputs = model(**inputs_on_device, return_dict=True)
embeddings = outputs.last_hidden_state[:, 0] # cls pooler
embeddings = embeddings / embeddings.norm(dim=1, keepdim=True) # normalize
```
For `RerankerModel`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-reranker-base_v1')
model = AutoModelForSequenceClassification.from_pretrained('maidalun1020/bce-reranker-base_v1')
device = 'cuda' # if no GPU, set "cpu"
model.to(device)
# get inputs
inputs = tokenizer(sentence_pairs, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()}
# calculate scores
scores = model(**inputs_on_device, return_dict=True).logits.view(-1,).float()
scores = torch.sigmoid(scores)
```
#### 3. Based on `sentence_transformers`
For `EmbeddingModel`:
```python
from sentence_transformers import SentenceTransformer
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init embedding model
## New update for sentence-trnasformers. So clean up your "`SENTENCE_TRANSFORMERS_HOME`/maidalun1020_bce-embedding-base_v1" or "~/.cache/torch/sentence_transformers/maidalun1020_bce-embedding-base_v1" first for downloading new version.
model = SentenceTransformer("maidalun1020/bce-embedding-base_v1")
# extract embeddings
embeddings = model.encode(sentences, normalize_embeddings=True)
```
For `RerankerModel`:
```python
from sentence_transformers import CrossEncoder
# init reranker model
model = CrossEncoder('maidalun1020/bce-reranker-base_v1', max_length=512)
# calculate scores of sentence pairs
scores = model.predict(sentence_pairs)
```
### Integrations for RAG Frameworks
#### 1. Used in `langchain`
```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.vectorstores.utils import DistanceStrategy
query = 'apples'
passages = [
'I like apples',
'I like oranges',
'Apples and oranges are fruits'
]
# init embedding model
model_name = 'maidalun1020/bce-embedding-base_v1'
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'batch_size': 64, 'normalize_embeddings': True, 'show_progress_bar': False}
embed_model = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
# example #1. extract embeddings
query_embedding = embed_model.embed_query(query)
passages_embeddings = embed_model.embed_documents(passages)
# example #2. langchain retriever example
faiss_vectorstore = FAISS.from_texts(passages, embed_model, distance_strategy=DistanceStrategy.MAX_INNER_PRODUCT)
retriever = faiss_vectorstore.as_retriever(search_type="similarity", search_kwargs={"score_threshold": 0.5, "k": 3})
related_passages = retriever.get_relevant_documents(query)
```
#### 2. Used in `llama_index`
```python
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index import VectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.llms import OpenAI
query = 'apples'
passages = [
'I like apples',
'I like oranges',
'Apples and oranges are fruits'
]
# init embedding model
model_args = {'model_name': 'maidalun1020/bce-embedding-base_v1', 'max_length': 512, 'embed_batch_size': 64, 'device': 'cuda'}
embed_model = HuggingFaceEmbedding(**model_args)
# example #1. extract embeddings
query_embedding = embed_model.get_query_embedding(query)
passages_embeddings = embed_model.get_text_embedding_batch(passages)
# example #2. rag example
llm = OpenAI(model='gpt-3.5-turbo-0613', api_key=os.environ.get('OPENAI_API_KEY'), api_base=os.environ.get('OPENAI_BASE_URL'))
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
documents = SimpleDirectoryReader(input_files=["BCEmbedding/tools/eval_rag/eval_pdfs/Comp_en_llama2.pdf"]).load_data()
node_parser = SimpleNodeParser.from_defaults(chunk_size=512)
nodes = node_parser.get_nodes_from_documents(documents[0:36])
index = VectorStoreIndex(nodes, service_context=service_context)
query_engine = index.as_query_engine()
response = query_engine.query("What is llama?")
```
## ⚙️ Evaluation
### Evaluate Semantic Representation by MTEB
We provide evaluateion tools for `embedding` and `reranker` models, based on [MTEB](https://github.com/embeddings-benchmark/mteb) and [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB).
我们基于[MTEB](https://github.com/embeddings-benchmark/mteb)和[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB),提供`embedding`和`reranker`模型的语义表征评测工具。
#### 1. Embedding Models
Just run following cmd to evaluate `your_embedding_model` (e.g. `maidalun1020/bce-embedding-base_v1`) in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`).
运行下面命令评测`your_embedding_model`(比如,`maidalun1020/bce-embedding-base_v1`)。评测任务将会在**双语和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测:
```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path maidalun1020/bce-embedding-base_v1 --pooler cls
```
The total evaluation tasks contain ***114 datastes*** of **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"**.
评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的 ***114个数据集***。
***NOTE:***
- **All models are evaluated in their recommended pooling method (`pooler`)**.
- `mean` pooler: "jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large" and "gte-large".
- `cls` pooler: Other models.
- "jina-embeddings-v2-base-en" model should be loaded with `trust_remote_code`.
```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path {moka-ai/m3e-base | moka-ai/m3e-large} --pooler mean
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path jinaai/jina-embeddings-v2-base-en --pooler mean --trust_remote_code
```
***注意:***
- 所有模型的评测采用各自推荐的`pooler`。"jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large"和"gte-large"的 `pooler`采用`mean`,其他模型的`pooler`采用`cls`.
- "jina-embeddings-v2-base-en"模型在载入时需要`trust_remote_code`。
#### 2. Reranker Models
Run following cmd to evaluate `your_reranker_model` (e.g. "maidalun1020/bce-reranker-base_v1") in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`).
运行下面命令评测`your_reranker_model`(比如,`maidalun1020/bce-reranker-base_v1`)。评测任务将会在 **双语种和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测:
```bash
python BCEmbedding/tools/eval_mteb/eval_reranker_mteb.py --model_name_or_path maidalun1020/bce-reranker-base_v1
```
The evaluation tasks contain ***12 datastes*** of **"Reranking"**.
评测包含 **"Reranking"** 任务的 ***12个数据集***。
#### 3. Metrics Visualization Tool
We proveide a one-click script to sumarize evaluation results of `embedding` and `reranker` models as [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md) and [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
我们提供了`embedding`和`reranker`模型的指标可视化一键脚本,输出一个markdown文件,详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)和[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)。
```bash
python BCEmbedding/evaluation/mteb/summarize_eval_results.py --results_dir {your_embedding_results_dir | your_reranker_results_dir}
```
### Evaluate RAG by LlamaIndex
[LlamaIndex](https://github.com/run-llama/llama_index) is a famous data framework for LLM-based applications, particularly in RAG. Recently, the [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) has evaluated the popular embedding and reranker models in RAG pipeline and attract great attention. Now, we follow its pipeline to evaluate our `BCEmbedding`.
[LlamaIndex](https://github.com/run-llama/llama_index)是一个著名的大模型应用的开源工具,在RAG中很受欢迎。最近,[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)对市面上常用的embedding和reranker模型进行RAG流程的评测,吸引广泛关注。下面我们按照该评测流程验证`BCEmbedding`在RAG中的效果。
First, install LlamaIndex:
```bash
pip install llama-index==0.9.22
```
#### 1. Metrics Definition
- Hit Rate:
Hit rate calculates the fraction of queries where the correct answer is found within the top-k retrieved documents. In simpler terms, it's about how often our system gets it right within the top few guesses. ***The larger, the better.***
- Mean Reciprocal Rank (MRR):
For each query, MRR evaluates the system's accuracy by looking at the rank of the highest-placed relevant document. Specifically, it's the average of the reciprocals of these ranks across all the queries. So, if the first relevant document is the top result, the reciprocal rank is 1; if it's second, the reciprocal rank is 1/2, and so on. ***The larger, the better.***
- 命中率(Hit Rate)
命中率计算的是在检索的前k个文档中找到正确答案的查询所占的比例。简单来说,它反映了我们的系统在前几次猜测中答对的频率。***该指标越大越好。***
- 平均倒数排名(Mean Reciprocal Rank,MRR)
对于每个查询,MRR通过查看最高排名的相关文档的排名来评估系统的准确性。具体来说,它是在所有查询中这些排名的倒数的平均值。因此,如果第一个相关文档是排名最靠前的结果,倒数排名就是1;如果是第二个,倒数排名就是1/2,依此类推。***该指标越大越好。***
#### 2. Reproduce [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
In order to compare our `BCEmbedding` with other embedding and reranker models fairly, we provide a one-click script to reproduce results of the LlamaIndex Blog, including our `BCEmbedding`:
为了公平起见,运行下面脚本,复现LlamaIndex博客的结果,将`BCEmbedding`与其他embedding和reranker模型进行对比分析:
```bash
# There should be two GPUs available at least.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_reproduce.py
```
Then, sumarize the evaluation results by:
```bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_reproduce_results
```
Results Reproduced from the LlamaIndex Blog can be checked in ***[Reproduced Summary of RAG Evaluation](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***, with some obvious ***conclusions***:
- In `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
- With fixing the embedding model, our `bce-reranker-base_v1` achieves the best performence.
- ***The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA.***
输出的指标汇总详见 ***[LlamaIndex RAG评测结果复现](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***。从该复现结果中,可以看出:
- 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`比其他embedding模型效果都要好。
- 在固定embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好。
- ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***
#### 3. Broad Domain Adaptability
The evaluation of [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) is **monolingual, small amount of data, and specific domain** (just including "llama2" paper). In order to evaluate the **broad domain adaptability, bilingual and crosslingual capability**, we follow the blog to build a multiple domains evaluation dataset (includding "Computer Science", "Physics", "Biology", "Economics", "Math", and "Quantitative Finance"), named [CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset), **by OpenAI `gpt-4-1106-preview` for high quality**.
在上述的[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)的评测数据只用了“llama2”这一篇文章,该评测是 **单语种,小数据量,特定领域** 的。为了兼容更真实更广的用户使用场景,评测算法模型的 **领域泛化性,双语和跨语种能力**,我们按照该博客的方法构建了一个多领域(计算机科学,物理学,生物学,经济学,数学,量化金融等)的双语种、跨语种评测数据,[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)。**为了保证构建数据的高质量,我们采用OpenAI的`gpt-4-1106-preview`。**
First, run following cmd to evaluate the most popular and powerful embedding and reranker models:
```bash
# There should be two GPUs available at least.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_multiple_domains.py
```
Then, run the following script to sumarize the evaluation results:
```bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_results
```
The summary of multiple domains evaluations can be seen in <a href=#1-multiple-domains-scenarios>Multiple Domains Scenarios</a>.
## 📈 Leaderboard
### Semantic Representation Evaluations in MTEB
#### 1. Embedding Models
| Model | Dimensions | Pooler | Instructions | Retrieval (47) | STS (19) | PairClassification (5) | Classification (21) | Reranking (12) | Clustering (15) | ***AVG*** (119) |
|:--------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| bge-base-en-v1.5 | 768 | `cls` | Need | 37.14 | 55.06 | 75.45 | 59.73 | 43.00 | 37.74 | 47.19 |
| bge-base-zh-v1.5 | 768 | `cls` | Need | 47.63 | 63.72 | 77.40 | 63.38 | 54.95 | 32.56 | 53.62 |
| bge-large-en-v1.5 | 1024 | `cls` | Need | 37.18 | 54.09 | 75.00 | 59.24 | 42.47 | 37.32 | 46.80 |
| bge-large-zh-v1.5 | 1024 | `cls` | Need | 47.58 | 64.73 | 79.14 | 64.19 | 55.98 | 33.26 | 54.23 |
| e5-large-v2 | 1024 | `mean` | Need | 35.98 | 55.23 | 75.28 | 59.53 | 42.12 | 36.51 | 46.52 |
| gte-large | 1024 | `mean` | Free | 36.68 | 55.22 | 74.29 | 57.73 | 42.44 | 38.51 | 46.67 |
| gte-large-zh | 1024 | `cls` | Free | 41.15 | 64.62 | 77.58 | 62.04 | 55.62 | 33.03 | 51.51 |
| jina-embeddings-v2-base-en | 768 | `mean` | Free | 31.58 | 54.28 | 74.84 | 58.42 | 41.16 | 34.67 | 44.29 |
| m3e-base | 768 | `mean` | Free | 46.29 | 63.93 | 71.84 | 64.08 | 52.38 | 37.84 | 53.54 |
| m3e-large | 1024 | `mean` | Free | 34.85 | 59.74 | 67.69 | 60.07 | 48.99 | 31.62 | 46.78 |
| multilingual-e5-base | 768 | `mean` | Need | 54.73 | 65.49 | 76.97 | 69.72 | 55.01 | 38.44 | 58.34 |
| multilingual-e5-large | 1024 | `mean` | Need | 56.76 | 66.79 | 78.80 | 71.61 | 56.49 | 43.09 | 60.50 |
| ***bce-embedding-base_v1*** | 768 | `cls` | Free | 57.60 | 65.73 | 74.96 | 69.00 | 57.29 | 38.95 | 59.43 |
***NOTE:***
- Our ***bce-embedding-base_v1*** outperforms other opensource embedding models with comparable model size.
- ***114 datastes*** of **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"** in `["en", "zh", "en-zh", "zh-en"]` setting.
- The [crosslingual evaluation datasets](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py) we released belong to `Retrieval` task.
- More evaluation details please check [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md).
***要点:***
- 对比其他开源的相同规模的embedding模型,***bce-embedding-base_v1*** 表现最好,效果比最好的large模型稍差。
- 评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的共 ***114个数据集***。
- 我们开源的[跨语种语义表征评测数据](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)属于`Retrieval`任务。
- 更详细的评测结果详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)。
#### 2. Reranker Models
| Model | Reranking (12) | ***AVG*** (12) |
| :--------------------------------- | :-------------: | :--------------------: |
| bge-reranker-base | 59.04 | 59.04 |
| bge-reranker-large | 60.86 | 60.86 |
| ***bce-reranker-base_v1*** | **61.29** | ***61.29*** |
***NOTE:***
- Our ***bce-reranker-base_v1*** outperforms other opensource reranker models.
- ***12 datastes*** of **"Reranking"** in `["en", "zh", "en-zh", "zh-en"]` setting.
- More evaluation details please check [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
***要点:***
- ***bce-reranker-base_v1*** 优于其他开源reranker模型。
- 评测包含 **"Reranking"** 任务的 ***12个数据集***。
- 更详细的评测结果详见[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)
### RAG Evaluations in LlamaIndex
#### 1. Multiple Domains Scenarios

***NOTE:***
- Evaluated in **`["en", "zh", "en-zh", "zh-en"]` setting**.
- In `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
- With fixing the embedding model, our `bce-reranker-base_v1` achieves the best performence.
- **The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA**.
***要点:***
- 评测是在`["en", "zh", "en-zh", "zh-en"]`设置下。
- 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`优于其他Embedding模型,包括开源和闭源。
- 在固定Embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好,包括开源和闭源。
- ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***
## 🛠 Youdao's BCEmbedding API
For users who prefer a hassle-free experience without the need to download and configure the model on their own systems, `BCEmbedding` is readily accessible through Youdao's API. This option offers a streamlined and efficient way to integrate BCEmbedding into your projects, bypassing the complexities of manual setup and maintenance. Detailed instructions and comprehensive API documentation are available at [Youdao BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html). Here, you'll find all the necessary guidance to easily implement `BCEmbedding` across a variety of use cases, ensuring a smooth and effective integration for optimal results.
对于那些更喜欢直接调用api的用户,有道提供方便的`BCEmbedding`调用api。该方式是一种简化和高效的方式,将`BCEmbedding`集成到您的项目中,避开了手动设置和系统维护的复杂性。更详细的api调用接口说明详见[有道BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html)。
## 🧲 WeChat Group
Welcome to scan the QR code below and join the WeChat group.
欢迎大家扫码加入官方微信交流群。

## ✏️ Citation
If you use `BCEmbedding` in your research or project, please feel free to cite and star it:
如果在您的研究或任何项目中使用本工作,烦请按照下方进行引用,并打个小星星~
```
@misc{youdao_bcembedding_2023,
title={BCEmbedding: Bilingual and Crosslingual Embedding for RAG},
author={NetEase Youdao, Inc.},
year={2023},
howpublished={\url{https://github.com/netease-youdao/BCEmbedding}}
}
```
## 🔐 License
`BCEmbedding` is licensed under [Apache 2.0 License](https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE)
## 🔗 Related Links
[Netease Youdao - QAnything](https://github.com/netease-youdao/qanything)
[FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)
[MTEB](https://github.com/embeddings-benchmark/mteb)
[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
[LLama Index](https://github.com/run-llama/llama_index) | [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
|
{"language": ["en", "zh"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"]}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | 46,747 |
Cheselle/finetuned-arctic
|
Cheselle
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:600",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-m",
"base_model:finetune:Snowflake/snowflake-arctic-embed-m",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-23T10:23:25Z |
2024-09-23T10:24:01+00:00
| 9 | 0 |
---
base_model: Snowflake/snowflake-arctic-embed-m
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:600
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What are the existing regulatory safety requirements mentioned
in the context for medical devices?
sentences:
- "47 \nAppendix A. Primary GAI Considerations \nThe following primary considerations\
\ were derived as overarching themes from the GAI PWG \nconsultation process.\
\ These considerations (Governance, Pre-Deployment Testing, Content Provenance,\
\ \nand Incident Disclosure) are relevant for voluntary use by any organization\
\ designing, developing, and \nusing GAI and also inform the Actions to Manage\
\ GAI risks. Information included about the primary \nconsiderations is not exhaustive,\
\ but highlights the most relevant topics derived from the GAI PWG. \nAcknowledgments:\
\ These considerations could not have been surfaced without the helpful analysis\
\ and \ncontributions from the community and NIST staff GAI PWG leads: George Awad,\
\ Luca Belli, Harold Booth, \nMat Heyman, Yooyoung Lee, Mark Pryzbocki, Reva Schwartz,\
\ Martin Stanley, and Kyra Yee. \nA.1. Governance \nA.1.1. Overview \nLike any\
\ other technology system, governance principles and techniques can be used to\
\ manage risks"
- "behavior or outcomes of a GAI model or system, how they could occur, and stress\
\ test safeguards”. AI \nred-teaming can be performed before or after AI models\
\ or systems are made available to the broader \npublic; this section focuses\
\ on red-teaming in pre-deployment contexts. \nThe quality of AI red-teaming\
\ outputs is related to the background and expertise of the AI red team \nitself.\
\ Demographically and interdisciplinarily diverse AI red teams can be used to\
\ identify flaws in the \nvarying contexts where GAI will be used. For best results,\
\ AI red teams should demonstrate domain \nexpertise, and awareness of socio-cultural\
\ aspects within the deployment context. AI red-teaming results \nshould be given\
\ additional analysis before they are incorporated into organizational governance\
\ and \ndecision making, policy and procedural updates, and AI risk management\
\ efforts. \nVarious types of AI red-teaming may be appropriate, depending on the\
\ use case: \n•"
- "SECTION TITLE\n \n \n \n \n \n \nApplying The Blueprint for an AI Bill of Rights\
\ \nRELATIONSHIP TO EXISTING LAW AND POLICY\nThere are regulatory safety requirements\
\ for medical devices, as well as sector-, population-, or technology-spe\ncific\
\ privacy and security protections. Ensuring some of the additional protections\
\ proposed in this framework \nwould require new laws to be enacted or new policies\
\ and practices to be adopted. In some cases, exceptions to \nthe principles described\
\ in the Blueprint for an AI Bill of Rights may be necessary to comply with existing\
\ law, \nconform to the practicalities of a specific use case, or balance competing\
\ public interests. In particular, law \nenforcement, and other regulatory contexts\
\ may require government actors to protect civil rights, civil liberties, \nand\
\ privacy in a manner consistent with, but using alternate mechanisms to, the\
\ specific principles discussed in"
- source_sentence: What steps should be taken to adapt processes based on findings
from incidents involving harmful content generation?
sentences:
- "some cases may include personal data. The use of personal data for GAI training\
\ raises risks to widely \naccepted privacy principles, including to transparency,\
\ individual participation (including consent), and \npurpose specification. For\
\ example, most model developers do not disclose specific data sources on \nwhich\
\ models were trained, limiting user awareness of whether personally identifiably\
\ information (PII) \nwas trained on and, if so, how it was collected. \nModels\
\ may leak, generate, or correctly infer sensitive information about individuals.\
\ For example, \nduring adversarial attacks, LLMs have revealed sensitive information\
\ (from the public domain) that was \nincluded in their training data. This problem\
\ has been referred to as data memorization, and may pose \nexacerbated privacy\
\ risks even for data present only in a small number of training samples. \n\
In addition to revealing sensitive information in GAI training data, GAI models\
\ may be able to correctly"
- "performance, feedback received, and improvements made. \nHarmful Bias and Homogenization\
\ \nMG-4.2-002 \nPractice and follow incident response plans for addressing the\
\ generation of \ninappropriate or harmful content and adapt processes based on\
\ findings to \nprevent future occurrences. Conduct post-mortem analyses of incidents\
\ with \nrelevant AI Actors, to understand the root causes and implement preventive\
\ \nmeasures. \nHuman-AI Configuration; \nDangerous, Violent, or Hateful \nContent\
\ \nMG-4.2-003 Use visualizations or other methods to represent GAI model behavior\
\ to ease \nnon-technical stakeholders understanding of GAI system functionality.\
\ \nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Design, AI Development,\
\ Affected Individuals and Communities, End-Users, Operation and \nMonitoring,\
\ TEVV \n \nMANAGE 4.3: Incidents and errors are communicated to relevant AI Actors,\
\ including affected communities. Processes for tracking,"
- "AI Actor Tasks: AI Deployment, AI Design, AI Impact Assessment, Affected Individuals\
\ and Communities, Domain Experts, End-\nUsers, Human Factors, Operation and Monitoring\
\ \n \nMEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated\
\ during the MAP function are selected for \nimplementation starting with the\
\ most significant AI risks. The risks or trustworthiness characteristics that\
\ will not – or cannot – be \nmeasured are properly documented. \nAction ID \n\
Suggested Action \nGAI Risks \nMS-1.1-001 Employ methods to trace the origin and\
\ modifications of digital content. \nInformation Integrity \nMS-1.1-002 \nIntegrate\
\ tools designed to analyze content provenance and detect data \nanomalies, verify\
\ the authenticity of digital signatures, and identify patterns \nassociated with\
\ misinformation or manipulation. \nInformation Integrity \nMS-1.1-003 \nDisaggregate\
\ evaluation metrics by demographic factors to identify any"
- source_sentence: What are the Principles of Artificial Intelligence Ethics developed
by the US Intelligence Community intended to guide?
sentences:
- "Evaluation data; Ethical considerations; Legal and regulatory requirements. \n\
Information Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI\
\ Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring,\
\ TEVV \n \nMEASURE 2.10: Privacy risk of the AI system – as identified in the\
\ MAP function – is examined and documented. \nAction ID \nSuggested Action \n\
GAI Risks \nMS-2.10-001 \nConduct AI red-teaming to assess issues such as: Outputting\
\ of training data \nsamples, and subsequent reverse engineering, model extraction,\
\ and \nmembership inference risks; Revealing biometric, confidential, copyrighted,\
\ \nlicensed, patented, personal, proprietary, sensitive, or trade-marked information;\
\ \nTracking or revealing location information of users or members of training\
\ \ndatasets. \nHuman-AI Configuration; \nInformation Integrity; Intellectual \n\
Property \nMS-2.10-002 \nEngage directly with end-users and other stakeholders\
\ to understand their"
- "8 \nTrustworthy AI Characteristics: Accountable and Transparent, Privacy Enhanced,\
\ Safe, Secure and \nResilient \n2.5. Environmental Impacts \nTraining, maintaining,\
\ and operating (running inference on) GAI systems are resource-intensive activities,\
\ \nwith potentially large energy and environmental footprints. Energy and carbon\
\ emissions vary based on \nwhat is being done with the GAI model (i.e., pre-training,\
\ fine-tuning, inference), the modality of the \ncontent, hardware used, and type\
\ of task or application. \nCurrent estimates suggest that training a single transformer\
\ LLM can emit as much carbon as 300 round-\ntrip flights between San Francisco\
\ and New York. In a study comparing energy consumption and carbon \nemissions\
\ for LLM inference, generative tasks (e.g., text summarization) were found to\
\ be more energy- \nand carbon-intensive than discriminative or non-generative\
\ tasks (e.g., text classification)."
- "security and defense activities.21 Similarly, the U.S. Intelligence Community\
\ (IC) has developed the Principles \nof Artificial Intelligence Ethics for the\
\ Intelligence Community to guide personnel on whether and how to \ndevelop and\
\ use AI in furtherance of the IC's mission, as well as an AI Ethics Framework\
\ to help implement \nthese principles.22\nThe National Science Foundation (NSF)\
\ funds extensive research to help foster the \ndevelopment of automated systems\
\ that adhere to and advance their safety, security and \neffectiveness. Multiple\
\ NSF programs support research that directly addresses many of these principles:\
\ \nthe National AI Research Institutes23 support research on all aspects of safe,\
\ trustworthy, fair, and explainable \nAI algorithms and systems; the Cyber Physical\
\ Systems24 program supports research on developing safe \nautonomous and cyber\
\ physical systems with AI components; the Secure and Trustworthy Cyberspace25"
- source_sentence: How does Hagan (2024) propose to establish quality standards for
AI responses to legal problems?
sentences:
- "actually occurring, or large-scale risks could occur); and broad GAI negative\
\ risks, \nincluding: Immature safety or risk cultures related to AI and GAI design,\
\ \ndevelopment and deployment, public information integrity risks, including\
\ impacts \non democratic processes, unknown long-term performance characteristics\
\ of GAI. \nInformation Integrity; Dangerous, \nViolent, or Hateful Content; CBRN\
\ \nInformation or Capabilities \nGV-1.3-007 Devise a plan to halt development\
\ or deployment of a GAI system that poses \nunacceptable negative risk. \nCBRN\
\ Information and Capability; \nInformation Security; Information \nIntegrity\
\ \nAI Actor Tasks: Governance and Oversight \n \nGOVERN 1.4: The risk management\
\ process and its outcomes are established through transparent policies, procedures,\
\ and other \ncontrols based on organizational risk priorities. \nAction ID \n\
Suggested Action \nGAI Risks \nGV-1.4-001 \nEstablish policies and mechanisms\
\ to prevent GAI systems from generating"
- "gists, advocates, journalists, policymakers, and communities in the United States\
\ and around the world. This \ntechnical companion is intended to be used as a\
\ reference by people across many circumstances – anyone \nimpacted by automated\
\ systems, and anyone developing, designing, deploying, evaluating, or making\
\ policy to \ngovern the use of an automated system. \nEach principle is accompanied\
\ by three supplemental sections: \n1\n2\nWHY THIS PRINCIPLE IS IMPORTANT: \n\
This section provides a brief summary of the problems that the principle seeks\
\ to address and protect against, including \nillustrative examples. \nWHAT SHOULD\
\ BE EXPECTED OF AUTOMATED SYSTEMS: \n• The expectations for automated systems\
\ are meant to serve as a blueprint for the development of additional technical\n\
standards and practices that should be tailored for particular sectors and contexts.\n\
• This section outlines practical steps that can be implemented to realize the\
\ vision of the Blueprint for an AI Bill of Rights. The"
- "Greshake, K. et al. (2023) Not what you've signed up for: Compromising Real-World\
\ LLM-Integrated \nApplications with Indirect Prompt Injection. arXiv. https://arxiv.org/abs/2302.12173\
\ \nHagan, M. (2024) Good AI Legal Help, Bad AI Legal Help: Establishing quality\
\ standards for responses to \npeople’s legal problem stories. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4696936\
\ \nHaran, R. (2023) Securing LLM Systems Against Prompt Injection. NVIDIA. \n\
https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/\
\ \nInformation Technology Industry Council (2024) Authenticating AI-Generated\
\ Content. \nhttps://www.itic.org/policy/ITI_AIContentAuthorizationPolicy_122123.pdf\
\ \nJain, S. et al. (2023) Algorithmic Pluralism: A Structural Approach To Equal\
\ Opportunity. arXiv. \nhttps://arxiv.org/pdf/2305.08157 \nJi, Z. et al (2023)\
\ Survey of Hallucination in Natural Language Generation. ACM Comput. Surv. 55,\
\ 12, \nArticle 248. https://doi.org/10.1145/3571730"
- source_sentence: How can information security measures be applied to maintain the
integrity and confidentiality of GAI models and systems?
sentences:
- "using: field testing with sub-group populations to determine likelihood of \n\
exposure to generated content exhibiting harmful bias, AI red-teaming with \n\
counterfactual and low-context (e.g., “leader,” “bad guys”) prompts. For ML \n\
pipelines or business processes with categorical or numeric outcomes that rely\
\ \non GAI, apply general fairness metrics (e.g., demographic parity, equalized\
\ odds, \nequal opportunity, statistical hypothesis tests), to the pipeline or\
\ business \noutcome where appropriate; Custom, context-specific metrics developed\
\ in \ncollaboration with domain experts and affected communities; Measurements\
\ of \nthe prevalence of denigration in generated content in deployment (e.g.,\
\ sub-\nsampling a fraction of traffic and manually annotating denigrating content).\
\ \nHarmful Bias and Homogenization; \nDangerous, Violent, or Hateful \nContent\
\ \nMS-2.11-003 \nIdentify the classes of individuals, groups, or environmental\
\ ecosystems which"
- "27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess\
\ intellectual property, \nand privacy, risks, including to examine whether use\
\ of proprietary or sensitive \ntraining data is consistent with applicable laws.\
\ \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight,\
\ Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood\
\ and magnitude of each identified impact (both potentially beneficial and harmful)\
\ based on expected use, past \nuses of AI systems in similar contexts, public\
\ incident reports, feedback from those external to the team that developed or\
\ deployed \nthe AI system, or other data are identified and documented. \nAction\
\ ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content\
\ provenance (e.g., probing a system's synthetic \ndata generation capabilities\
\ for potential misuse or vulnerabilities. \nInformation Integrity; Information\
\ \nSecurity \nMP-5.1-002"
- "vulnerabilities in systems (hardware, software, data) and write code to exploit\
\ them. Sophisticated threat \nactors might further these risks by developing\
\ GAI-powered security co-pilots for use in several parts of \nthe attack chain,\
\ including informing attackers on how to proactively evade threat detection and\
\ escalate \nprivileges after gaining system access. \nInformation security for\
\ GAI models and systems also includes maintaining availability of the GAI system\
\ \nand the integrity and (when applicable) the confidentiality of the GAI code,\
\ training data, and model \nweights. To identify and secure potential attack\
\ points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4,\
\ to be published."
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.81
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.96
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.99
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.81
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31999999999999995
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19799999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.81
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.96
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.99
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9167865159386339
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8887499999999998
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8887499999999998
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.81
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.96
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.99
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.81
name: Dot Precision@1
- type: dot_precision@3
value: 0.31999999999999995
name: Dot Precision@3
- type: dot_precision@5
value: 0.19799999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.09999999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.81
name: Dot Recall@1
- type: dot_recall@3
value: 0.96
name: Dot Recall@3
- type: dot_recall@5
value: 0.99
name: Dot Recall@5
- type: dot_recall@10
value: 1.0
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9167865159386339
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8887499999999998
name: Dot Mrr@10
- type: dot_map@100
value: 0.8887499999999998
name: Dot Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision e2b128b9fa60c82b4585512b33e1544224ffff42 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Cheselle/finetuned-arctic")
# Run inference
sentences = [
'How can information security measures be applied to maintain the integrity and confidentiality of GAI models and systems?',
'vulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published.',
"27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess intellectual property, \nand privacy, risks, including to examine whether use of proprietary or sensitive \ntraining data is consistent with applicable laws. \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \nuses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \nthe AI system, or other data are identified and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \ndata generation capabilities for potential misuse or vulnerabilities. \nInformation Integrity; Information \nSecurity \nMP-5.1-002",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.81 |
| cosine_accuracy@3 | 0.96 |
| cosine_accuracy@5 | 0.99 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.81 |
| cosine_precision@3 | 0.32 |
| cosine_precision@5 | 0.198 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.81 |
| cosine_recall@3 | 0.96 |
| cosine_recall@5 | 0.99 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9168 |
| cosine_mrr@10 | 0.8887 |
| **cosine_map@100** | **0.8887** |
| dot_accuracy@1 | 0.81 |
| dot_accuracy@3 | 0.96 |
| dot_accuracy@5 | 0.99 |
| dot_accuracy@10 | 1.0 |
| dot_precision@1 | 0.81 |
| dot_precision@3 | 0.32 |
| dot_precision@5 | 0.198 |
| dot_precision@10 | 0.1 |
| dot_recall@1 | 0.81 |
| dot_recall@3 | 0.96 |
| dot_recall@5 | 0.99 |
| dot_recall@10 | 1.0 |
| dot_ndcg@10 | 0.9168 |
| dot_mrr@10 | 0.8887 |
| dot_map@100 | 0.8887 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 600 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 600 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 21.75 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 177.81 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What is the title of the publication related to Artificial Intelligence Risk Management by NIST?</code> | <code>NIST Trustworthy and Responsible AI <br>NIST AI 600-1 <br>Artificial Intelligence Risk Management <br>Framework: Generative Artificial <br>Intelligence Profile <br> <br> <br> <br>This publication is available free of charge from: <br>https://doi.org/10.6028/NIST.AI.600-1</code> |
| <code>Where can the NIST AI 600-1 publication be accessed for free?</code> | <code>NIST Trustworthy and Responsible AI <br>NIST AI 600-1 <br>Artificial Intelligence Risk Management <br>Framework: Generative Artificial <br>Intelligence Profile <br> <br> <br> <br>This publication is available free of charge from: <br>https://doi.org/10.6028/NIST.AI.600-1</code> |
| <code>What is the title of the publication released by NIST in July 2024 regarding artificial intelligence?</code> | <code>NIST Trustworthy and Responsible AI <br>NIST AI 600-1 <br>Artificial Intelligence Risk Management <br>Framework: Generative Artificial <br>Intelligence Profile <br> <br> <br> <br>This publication is available free of charge from: <br>https://doi.org/10.6028/NIST.AI.600-1 <br> <br>July 2024 <br> <br> <br> <br> <br>U.S. Department of Commerce <br>Gina M. Raimondo, Secretary <br>National Institute of Standards and Technology <br>Laurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_map@100 |
|:------:|:----:|:--------------:|
| 1.0 | 30 | 0.8699 |
| 1.6667 | 50 | 0.8879 |
| 2.0 | 60 | 0.8887 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision e2b128b9fa60c82b4585512b33e1544224ffff42 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Cheselle/finetuned-arctic")
# Run inference
sentences = [
'How can information security measures be applied to maintain the integrity and confidentiality of GAI models and systems?',
'vulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published.',
"27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess intellectual property, \nand privacy, risks, including to examine whether use of proprietary or sensitive \ntraining data is consistent with applicable laws. \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \nuses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \nthe AI system, or other data are identified and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \ndata generation capabilities for potential misuse or vulnerabilities. \nInformation Integrity; Information \nSecurity \nMP-5.1-002",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.81 |
| cosine_accuracy@3 | 0.96 |
| cosine_accuracy@5 | 0.99 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.81 |
| cosine_precision@3 | 0.32 |
| cosine_precision@5 | 0.198 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.81 |
| cosine_recall@3 | 0.96 |
| cosine_recall@5 | 0.99 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9168 |
| cosine_mrr@10 | 0.8887 |
| **cosine_map@100** | **0.8887** |
| dot_accuracy@1 | 0.81 |
| dot_accuracy@3 | 0.96 |
| dot_accuracy@5 | 0.99 |
| dot_accuracy@10 | 1.0 |
| dot_precision@1 | 0.81 |
| dot_precision@3 | 0.32 |
| dot_precision@5 | 0.198 |
| dot_precision@10 | 0.1 |
| dot_recall@1 | 0.81 |
| dot_recall@3 | 0.96 |
| dot_recall@5 | 0.99 |
| dot_recall@10 | 1.0 |
| dot_ndcg@10 | 0.9168 |
| dot_mrr@10 | 0.8887 |
| dot_map@100 | 0.8887 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 600 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 600 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 21.75 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 177.81 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What is the title of the publication related to Artificial Intelligence Risk Management by NIST?</code> | <code>NIST Trustworthy and Responsible AI <br>NIST AI 600-1 <br>Artificial Intelligence Risk Management <br>Framework: Generative Artificial <br>Intelligence Profile <br> <br> <br> <br>This publication is available free of charge from: <br>https://doi.org/10.6028/NIST.AI.600-1</code> |
| <code>Where can the NIST AI 600-1 publication be accessed for free?</code> | <code>NIST Trustworthy and Responsible AI <br>NIST AI 600-1 <br>Artificial Intelligence Risk Management <br>Framework: Generative Artificial <br>Intelligence Profile <br> <br> <br> <br>This publication is available free of charge from: <br>https://doi.org/10.6028/NIST.AI.600-1</code> |
| <code>What is the title of the publication released by NIST in July 2024 regarding artificial intelligence?</code> | <code>NIST Trustworthy and Responsible AI <br>NIST AI 600-1 <br>Artificial Intelligence Risk Management <br>Framework: Generative Artificial <br>Intelligence Profile <br> <br> <br> <br>This publication is available free of charge from: <br>https://doi.org/10.6028/NIST.AI.600-1 <br> <br>July 2024 <br> <br> <br> <br> <br>U.S. Department of Commerce <br>Gina M. Raimondo, Secretary <br>National Institute of Standards and Technology <br>Laurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_map@100 |
|:------:|:----:|:--------------:|
| 1.0 | 30 | 0.8699 |
| 1.6667 | 50 | 0.8879 |
| 2.0 | 60 | 0.8887 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "Snowflake/snowflake-arctic-embed-m", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100", "dot_accuracy@1", "dot_accuracy@3", "dot_accuracy@5", "dot_accuracy@10", "dot_precision@1", "dot_precision@3", "dot_precision@5", "dot_precision@10", "dot_recall@1", "dot_recall@3", "dot_recall@5", "dot_recall@10", "dot_ndcg@10", "dot_mrr@10", "dot_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:600", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "What are the existing regulatory safety requirements mentioned in the context for medical devices?", "sentences": ["47 \nAppendix A. Primary GAI Considerations \nThe following primary considerations were derived as overarching themes from the GAI PWG \nconsultation process. These considerations (Governance, Pre-Deployment Testing, Content Provenance, \nand Incident Disclosure) are relevant for voluntary use by any organization designing, developing, and \nusing GAI and also inform the Actions to Manage GAI risks. Information included about the primary \nconsiderations is not exhaustive, but highlights the most relevant topics derived from the GAI PWG. \nAcknowledgments: These considerations could not have been surfaced without the helpful analysis and \ncontributions from the community and NIST staff GAI PWG leads: George Awad, Luca Belli, Harold Booth, \nMat Heyman, Yooyoung Lee, Mark Pryzbocki, Reva Schwartz, Martin Stanley, and Kyra Yee. \nA.1. Governance \nA.1.1. Overview \nLike any other technology system, governance principles and techniques can be used to manage risks", "behavior or outcomes of a GAI model or system, how they could occur, and stress test safeguards”. AI \nred-teaming can be performed before or after AI models or systems are made available to the broader \npublic; this section focuses on red-teaming in pre-deployment contexts. \nThe quality of AI red-teaming outputs is related to the background and expertise of the AI red team \nitself. Demographically and interdisciplinarily diverse AI red teams can be used to identify flaws in the \nvarying contexts where GAI will be used. For best results, AI red teams should demonstrate domain \nexpertise, and awareness of socio-cultural aspects within the deployment context. AI red-teaming results \nshould be given additional analysis before they are incorporated into organizational governance and \ndecision making, policy and procedural updates, and AI risk management efforts. \nVarious types of AI red-teaming may be appropriate, depending on the use case: \n•", "SECTION TITLE\n \n \n \n \n \n \nApplying The Blueprint for an AI Bill of Rights \nRELATIONSHIP TO EXISTING LAW AND POLICY\nThere are regulatory safety requirements for medical devices, as well as sector-, population-, or technology-spe\ncific privacy and security protections. Ensuring some of the additional protections proposed in this framework \nwould require new laws to be enacted or new policies and practices to be adopted. In some cases, exceptions to \nthe principles described in the Blueprint for an AI Bill of Rights may be necessary to comply with existing law, \nconform to the practicalities of a specific use case, or balance competing public interests. In particular, law \nenforcement, and other regulatory contexts may require government actors to protect civil rights, civil liberties, \nand privacy in a manner consistent with, but using alternate mechanisms to, the specific principles discussed in"]}, {"source_sentence": "What steps should be taken to adapt processes based on findings from incidents involving harmful content generation?", "sentences": ["some cases may include personal data. The use of personal data for GAI training raises risks to widely \naccepted privacy principles, including to transparency, individual participation (including consent), and \npurpose specification. For example, most model developers do not disclose specific data sources on \nwhich models were trained, limiting user awareness of whether personally identifiably information (PII) \nwas trained on and, if so, how it was collected. \nModels may leak, generate, or correctly infer sensitive information about individuals. For example, \nduring adversarial attacks, LLMs have revealed sensitive information (from the public domain) that was \nincluded in their training data. This problem has been referred to as data memorization, and may pose \nexacerbated privacy risks even for data present only in a small number of training samples. \nIn addition to revealing sensitive information in GAI training data, GAI models may be able to correctly", "performance, feedback received, and improvements made. \nHarmful Bias and Homogenization \nMG-4.2-002 \nPractice and follow incident response plans for addressing the generation of \ninappropriate or harmful content and adapt processes based on findings to \nprevent future occurrences. Conduct post-mortem analyses of incidents with \nrelevant AI Actors, to understand the root causes and implement preventive \nmeasures. \nHuman-AI Configuration; \nDangerous, Violent, or Hateful \nContent \nMG-4.2-003 Use visualizations or other methods to represent GAI model behavior to ease \nnon-technical stakeholders understanding of GAI system functionality. \nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Design, AI Development, Affected Individuals and Communities, End-Users, Operation and \nMonitoring, TEVV \n \nMANAGE 4.3: Incidents and errors are communicated to relevant AI Actors, including affected communities. Processes for tracking,", "AI Actor Tasks: AI Deployment, AI Design, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, End-\nUsers, Human Factors, Operation and Monitoring \n \nMEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for \nimplementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be \nmeasured are properly documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-1.1-001 Employ methods to trace the origin and modifications of digital content. \nInformation Integrity \nMS-1.1-002 \nIntegrate tools designed to analyze content provenance and detect data \nanomalies, verify the authenticity of digital signatures, and identify patterns \nassociated with misinformation or manipulation. \nInformation Integrity \nMS-1.1-003 \nDisaggregate evaluation metrics by demographic factors to identify any"]}, {"source_sentence": "What are the Principles of Artificial Intelligence Ethics developed by the US Intelligence Community intended to guide?", "sentences": ["Evaluation data; Ethical considerations; Legal and regulatory requirements. \nInformation Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n \nMEASURE 2.10: Privacy risk of the AI system – as identified in the MAP function – is examined and documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.10-001 \nConduct AI red-teaming to assess issues such as: Outputting of training data \nsamples, and subsequent reverse engineering, model extraction, and \nmembership inference risks; Revealing biometric, confidential, copyrighted, \nlicensed, patented, personal, proprietary, sensitive, or trade-marked information; \nTracking or revealing location information of users or members of training \ndatasets. \nHuman-AI Configuration; \nInformation Integrity; Intellectual \nProperty \nMS-2.10-002 \nEngage directly with end-users and other stakeholders to understand their", "8 \nTrustworthy AI Characteristics: Accountable and Transparent, Privacy Enhanced, Safe, Secure and \nResilient \n2.5. Environmental Impacts \nTraining, maintaining, and operating (running inference on) GAI systems are resource-intensive activities, \nwith potentially large energy and environmental footprints. Energy and carbon emissions vary based on \nwhat is being done with the GAI model (i.e., pre-training, fine-tuning, inference), the modality of the \ncontent, hardware used, and type of task or application. \nCurrent estimates suggest that training a single transformer LLM can emit as much carbon as 300 round-\ntrip flights between San Francisco and New York. In a study comparing energy consumption and carbon \nemissions for LLM inference, generative tasks (e.g., text summarization) were found to be more energy- \nand carbon-intensive than discriminative or non-generative tasks (e.g., text classification).", "security and defense activities.21 Similarly, the U.S. Intelligence Community (IC) has developed the Principles \nof Artificial Intelligence Ethics for the Intelligence Community to guide personnel on whether and how to \ndevelop and use AI in furtherance of the IC's mission, as well as an AI Ethics Framework to help implement \nthese principles.22\nThe National Science Foundation (NSF) funds extensive research to help foster the \ndevelopment of automated systems that adhere to and advance their safety, security and \neffectiveness. Multiple NSF programs support research that directly addresses many of these principles: \nthe National AI Research Institutes23 support research on all aspects of safe, trustworthy, fair, and explainable \nAI algorithms and systems; the Cyber Physical Systems24 program supports research on developing safe \nautonomous and cyber physical systems with AI components; the Secure and Trustworthy Cyberspace25"]}, {"source_sentence": "How does Hagan (2024) propose to establish quality standards for AI responses to legal problems?", "sentences": ["actually occurring, or large-scale risks could occur); and broad GAI negative risks, \nincluding: Immature safety or risk cultures related to AI and GAI design, \ndevelopment and deployment, public information integrity risks, including impacts \non democratic processes, unknown long-term performance characteristics of GAI. \nInformation Integrity; Dangerous, \nViolent, or Hateful Content; CBRN \nInformation or Capabilities \nGV-1.3-007 Devise a plan to halt development or deployment of a GAI system that poses \nunacceptable negative risk. \nCBRN Information and Capability; \nInformation Security; Information \nIntegrity \nAI Actor Tasks: Governance and Oversight \n \nGOVERN 1.4: The risk management process and its outcomes are established through transparent policies, procedures, and other \ncontrols based on organizational risk priorities. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.4-001 \nEstablish policies and mechanisms to prevent GAI systems from generating", "gists, advocates, journalists, policymakers, and communities in the United States and around the world. This \ntechnical companion is intended to be used as a reference by people across many circumstances – anyone \nimpacted by automated systems, and anyone developing, designing, deploying, evaluating, or making policy to \ngovern the use of an automated system. \nEach principle is accompanied by three supplemental sections: \n1\n2\nWHY THIS PRINCIPLE IS IMPORTANT: \nThis section provides a brief summary of the problems that the principle seeks to address and protect against, including \nillustrative examples. \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS: \n• The expectations for automated systems are meant to serve as a blueprint for the development of additional technical\nstandards and practices that should be tailored for particular sectors and contexts.\n• This section outlines practical steps that can be implemented to realize the vision of the Blueprint for an AI Bill of Rights. The", "Greshake, K. et al. (2023) Not what you've signed up for: Compromising Real-World LLM-Integrated \nApplications with Indirect Prompt Injection. arXiv. https://arxiv.org/abs/2302.12173 \nHagan, M. (2024) Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for responses to \npeople’s legal problem stories. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4696936 \nHaran, R. (2023) Securing LLM Systems Against Prompt Injection. NVIDIA. \nhttps://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/ \nInformation Technology Industry Council (2024) Authenticating AI-Generated Content. \nhttps://www.itic.org/policy/ITI_AIContentAuthorizationPolicy_122123.pdf \nJain, S. et al. (2023) Algorithmic Pluralism: A Structural Approach To Equal Opportunity. arXiv. \nhttps://arxiv.org/pdf/2305.08157 \nJi, Z. et al (2023) Survey of Hallucination in Natural Language Generation. ACM Comput. Surv. 55, 12, \nArticle 248. https://doi.org/10.1145/3571730"]}, {"source_sentence": "How can information security measures be applied to maintain the integrity and confidentiality of GAI models and systems?", "sentences": ["using: field testing with sub-group populations to determine likelihood of \nexposure to generated content exhibiting harmful bias, AI red-teaming with \ncounterfactual and low-context (e.g., “leader,” “bad guys”) prompts. For ML \npipelines or business processes with categorical or numeric outcomes that rely \non GAI, apply general fairness metrics (e.g., demographic parity, equalized odds, \nequal opportunity, statistical hypothesis tests), to the pipeline or business \noutcome where appropriate; Custom, context-specific metrics developed in \ncollaboration with domain experts and affected communities; Measurements of \nthe prevalence of denigration in generated content in deployment (e.g., sub-\nsampling a fraction of traffic and manually annotating denigrating content). \nHarmful Bias and Homogenization; \nDangerous, Violent, or Hateful \nContent \nMS-2.11-003 \nIdentify the classes of individuals, groups, or environmental ecosystems which", "27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess intellectual property, \nand privacy, risks, including to examine whether use of proprietary or sensitive \ntraining data is consistent with applicable laws. \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \nuses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \nthe AI system, or other data are identified and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \ndata generation capabilities for potential misuse or vulnerabilities. \nInformation Integrity; Information \nSecurity \nMP-5.1-002", "vulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published."]}], "model-index": [{"name": "SentenceTransformer based on Snowflake/snowflake-arctic-embed-m", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.81, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.96, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.99, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 1.0, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.81, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.31999999999999995, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19799999999999998, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09999999999999998, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.81, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.96, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.99, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 1.0, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9167865159386339, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.8887499999999998, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.8887499999999998, "name": "Cosine Map@100"}, {"type": "dot_accuracy@1", "value": 0.81, "name": "Dot Accuracy@1"}, {"type": "dot_accuracy@3", "value": 0.96, "name": "Dot Accuracy@3"}, {"type": "dot_accuracy@5", "value": 0.99, "name": "Dot Accuracy@5"}, {"type": "dot_accuracy@10", "value": 1.0, "name": "Dot Accuracy@10"}, {"type": "dot_precision@1", "value": 0.81, "name": "Dot Precision@1"}, {"type": "dot_precision@3", "value": 0.31999999999999995, "name": "Dot Precision@3"}, {"type": "dot_precision@5", "value": 0.19799999999999998, "name": "Dot Precision@5"}, {"type": "dot_precision@10", "value": 0.09999999999999998, "name": "Dot Precision@10"}, {"type": "dot_recall@1", "value": 0.81, "name": "Dot Recall@1"}, {"type": "dot_recall@3", "value": 0.96, "name": "Dot Recall@3"}, {"type": "dot_recall@5", "value": 0.99, "name": "Dot Recall@5"}, {"type": "dot_recall@10", "value": 1.0, "name": "Dot Recall@10"}, {"type": "dot_ndcg@10", "value": 0.9167865159386339, "name": "Dot Ndcg@10"}, {"type": "dot_mrr@10", "value": 0.8887499999999998, "name": "Dot Mrr@10"}, {"type": "dot_map@100", "value": 0.8887499999999998, "name": "Dot Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | 46,748 |
uaritm/multilingual_en_uk_pl_ru
|
uaritm
|
sentence-similarity
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers - multilingual - en - ru - uk - pl",
"uk",
"en",
"pl",
"ru",
"dataset:Helsinki-NLP/tatoeba_mt",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-05-12T19:12:27Z |
2023-06-04T16:34:24+00:00
| 330 | 2 |
---
datasets:
- Helsinki-NLP/tatoeba_mt
language:
- uk
- en
- pl
- ru
library_name: sentence-transformers
license: apache-2.0
metrics:
- mse
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers - multilingual - en - ru - uk - pl
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
The model is used on the resource of multilingual analysis of patient complaints to determine the specialty of the doctor that is needed in this case: [Virtual General Practice](https://aihealth.site)
You can test the quality and speed of the model
This model is an updated version of the model: [uaritm/multilingual_en_ru_uk](https://huggingface.co/uaritm/multilingual_en_ru_uk)
```
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 50184 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```
@misc{Uaritm,
title={sentence-transformers: Semantic similarity of medical texts},
author={Vitaliy Ostashko},
year={2023},
url={https://aihealth.site},
}
```
<!--- Describe where people can find more information -->
| null |
BioNLP
|
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
The model is used on the resource of multilingual analysis of patient complaints to determine the specialty of the doctor that is needed in this case: [Virtual General Practice](https://aihealth.site)
You can test the quality and speed of the model
This model is an updated version of the model: [uaritm/multilingual_en_ru_uk](https://huggingface.co/uaritm/multilingual_en_ru_uk)
```
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 50184 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```
@misc{Uaritm,
title={sentence-transformers: Semantic similarity of medical texts},
author={Vitaliy Ostashko},
year={2023},
url={https://aihealth.site},
}
```
<!--- Describe where people can find more information -->
|
{"datasets": ["Helsinki-NLP/tatoeba_mt"], "language": ["uk", "en", "pl", "ru"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["mse"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers - multilingual - en - ru - uk - pl"]}
|
task
|
[
"SEMANTIC_SIMILARITY"
] | 46,749 |
sobamchan/roberta-base-mean-softmax-300
|
sobamchan
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:942069",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-16T17:21:44Z |
2025-02-16T17:23:00+00:00
| 33 | 0 |
---
base_model: FacebookAI/roberta-base
datasets:
- sentence-transformers/all-nli
language:
- en
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:942069
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Two women having drinks and smoking cigarettes at the bar.
sentences:
- Women are celebrating at a bar.
- Two kids are outdoors.
- The four girls are attending the street festival.
- source_sentence: Two male police officers on patrol, wearing the normal gear and
bright green reflective shirts.
sentences:
- The officers have shot an unarmed black man and will not go to prison for it.
- The four girls are playing card games at the table.
- A woman is playing with a toddler.
- source_sentence: 5 women sitting around a table doing some crafts.
sentences:
- The girl wearing a dress skips down the sidewalk.
- The kids are together.
- Five men stand on chairs.
- source_sentence: Three men look on as two other men carve up a freshly barbecued
hog in the backyard.
sentences:
- A group of people prepare cars for racing.
- There are men watching others prepare food
- They are both waiting for a bus.
- source_sentence: The little boy is jumping into a puddle on the street.
sentences:
- A man is wearing a black shirt
- The dog is playing with a ball.
- The boy is outside.
---
# SentenceTransformer based on FacebookAI/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) <!-- at revision e2da8e2f811d1448a5b465c236feacd80ffbac7b -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'The little boy is jumping into a puddle on the street.',
'The boy is outside.',
'The dog is playing with a ball.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 942,069 training samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.4 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.69 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>0: ~33.40%</li><li>1: ~33.30%</li><li>2: ~33.30%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:--------------------------------------------------------------------|:---------------------------------------------------------------|:---------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>1</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code> | <code>2</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>0</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 19,657 evaluation samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.46 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.57 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>0: ~33.10%</li><li>1: ~33.30%</li><li>2: ~33.60%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:-------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:---------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>The sisters are hugging goodbye while holding to go packages after just eating lunch.</code> | <code>1</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>0</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>The men are fighting outside a deli.</code> | <code>2</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0007 | 5 | - | 4.4994 |
| 0.0014 | 10 | - | 4.4981 |
| 0.0020 | 15 | - | 4.4960 |
| 0.0027 | 20 | - | 4.4930 |
| 0.0034 | 25 | - | 4.4890 |
| 0.0041 | 30 | - | 4.4842 |
| 0.0048 | 35 | - | 4.4784 |
| 0.0054 | 40 | - | 4.4716 |
| 0.0061 | 45 | - | 4.4636 |
| 0.0068 | 50 | - | 4.4543 |
| 0.0075 | 55 | - | 4.4438 |
| 0.0082 | 60 | - | 4.4321 |
| 0.0088 | 65 | - | 4.4191 |
| 0.0095 | 70 | - | 4.4042 |
| 0.0102 | 75 | - | 4.3875 |
| 0.0109 | 80 | - | 4.3686 |
| 0.0115 | 85 | - | 4.3474 |
| 0.0122 | 90 | - | 4.3236 |
| 0.0129 | 95 | - | 4.2968 |
| 0.0136 | 100 | 4.4995 | 4.2666 |
| 0.0143 | 105 | - | 4.2326 |
| 0.0149 | 110 | - | 4.1947 |
| 0.0156 | 115 | - | 4.1516 |
| 0.0163 | 120 | - | 4.1029 |
| 0.0170 | 125 | - | 4.0476 |
| 0.0177 | 130 | - | 3.9850 |
| 0.0183 | 135 | - | 3.9162 |
| 0.0190 | 140 | - | 3.8397 |
| 0.0197 | 145 | - | 3.7522 |
| 0.0204 | 150 | - | 3.6521 |
| 0.0211 | 155 | - | 3.5388 |
| 0.0217 | 160 | - | 3.4114 |
| 0.0224 | 165 | - | 3.2701 |
| 0.0231 | 170 | - | 3.1147 |
| 0.0238 | 175 | - | 2.9471 |
| 0.0245 | 180 | - | 2.7710 |
| 0.0251 | 185 | - | 2.5909 |
| 0.0258 | 190 | - | 2.4127 |
| 0.0265 | 195 | - | 2.2439 |
| 0.0272 | 200 | 3.6918 | 2.0869 |
| 0.0279 | 205 | - | 1.9477 |
| 0.0285 | 210 | - | 1.8274 |
| 0.0292 | 215 | - | 1.7156 |
| 0.0299 | 220 | - | 1.6211 |
| 0.0306 | 225 | - | 1.5416 |
| 0.0312 | 230 | - | 1.4732 |
| 0.0319 | 235 | - | 1.4176 |
| 0.0326 | 240 | - | 1.3702 |
| 0.0333 | 245 | - | 1.3269 |
| 0.0340 | 250 | - | 1.2892 |
| 0.0346 | 255 | - | 1.2563 |
| 0.0353 | 260 | - | 1.2281 |
| 0.0360 | 265 | - | 1.2024 |
| 0.0367 | 270 | - | 1.1796 |
| 0.0374 | 275 | - | 1.1601 |
| 0.0380 | 280 | - | 1.1428 |
| 0.0387 | 285 | - | 1.1271 |
| 0.0394 | 290 | - | 1.1129 |
| 0.0401 | 295 | - | 1.1002 |
| 0.0408 | 300 | 1.7071 | 1.0876 |
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.2.0+cu121
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on FacebookAI/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) <!-- at revision e2da8e2f811d1448a5b465c236feacd80ffbac7b -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'The little boy is jumping into a puddle on the street.',
'The boy is outside.',
'The dog is playing with a ball.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 942,069 training samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.4 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.69 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>0: ~33.40%</li><li>1: ~33.30%</li><li>2: ~33.30%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:--------------------------------------------------------------------|:---------------------------------------------------------------|:---------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>1</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code> | <code>2</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>0</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 19,657 evaluation samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.46 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.57 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>0: ~33.10%</li><li>1: ~33.30%</li><li>2: ~33.60%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:-------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:---------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>The sisters are hugging goodbye while holding to go packages after just eating lunch.</code> | <code>1</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>0</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>The men are fighting outside a deli.</code> | <code>2</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0007 | 5 | - | 4.4994 |
| 0.0014 | 10 | - | 4.4981 |
| 0.0020 | 15 | - | 4.4960 |
| 0.0027 | 20 | - | 4.4930 |
| 0.0034 | 25 | - | 4.4890 |
| 0.0041 | 30 | - | 4.4842 |
| 0.0048 | 35 | - | 4.4784 |
| 0.0054 | 40 | - | 4.4716 |
| 0.0061 | 45 | - | 4.4636 |
| 0.0068 | 50 | - | 4.4543 |
| 0.0075 | 55 | - | 4.4438 |
| 0.0082 | 60 | - | 4.4321 |
| 0.0088 | 65 | - | 4.4191 |
| 0.0095 | 70 | - | 4.4042 |
| 0.0102 | 75 | - | 4.3875 |
| 0.0109 | 80 | - | 4.3686 |
| 0.0115 | 85 | - | 4.3474 |
| 0.0122 | 90 | - | 4.3236 |
| 0.0129 | 95 | - | 4.2968 |
| 0.0136 | 100 | 4.4995 | 4.2666 |
| 0.0143 | 105 | - | 4.2326 |
| 0.0149 | 110 | - | 4.1947 |
| 0.0156 | 115 | - | 4.1516 |
| 0.0163 | 120 | - | 4.1029 |
| 0.0170 | 125 | - | 4.0476 |
| 0.0177 | 130 | - | 3.9850 |
| 0.0183 | 135 | - | 3.9162 |
| 0.0190 | 140 | - | 3.8397 |
| 0.0197 | 145 | - | 3.7522 |
| 0.0204 | 150 | - | 3.6521 |
| 0.0211 | 155 | - | 3.5388 |
| 0.0217 | 160 | - | 3.4114 |
| 0.0224 | 165 | - | 3.2701 |
| 0.0231 | 170 | - | 3.1147 |
| 0.0238 | 175 | - | 2.9471 |
| 0.0245 | 180 | - | 2.7710 |
| 0.0251 | 185 | - | 2.5909 |
| 0.0258 | 190 | - | 2.4127 |
| 0.0265 | 195 | - | 2.2439 |
| 0.0272 | 200 | 3.6918 | 2.0869 |
| 0.0279 | 205 | - | 1.9477 |
| 0.0285 | 210 | - | 1.8274 |
| 0.0292 | 215 | - | 1.7156 |
| 0.0299 | 220 | - | 1.6211 |
| 0.0306 | 225 | - | 1.5416 |
| 0.0312 | 230 | - | 1.4732 |
| 0.0319 | 235 | - | 1.4176 |
| 0.0326 | 240 | - | 1.3702 |
| 0.0333 | 245 | - | 1.3269 |
| 0.0340 | 250 | - | 1.2892 |
| 0.0346 | 255 | - | 1.2563 |
| 0.0353 | 260 | - | 1.2281 |
| 0.0360 | 265 | - | 1.2024 |
| 0.0367 | 270 | - | 1.1796 |
| 0.0374 | 275 | - | 1.1601 |
| 0.0380 | 280 | - | 1.1428 |
| 0.0387 | 285 | - | 1.1271 |
| 0.0394 | 290 | - | 1.1129 |
| 0.0401 | 295 | - | 1.1002 |
| 0.0408 | 300 | 1.7071 | 1.0876 |
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.2.0+cu121
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "FacebookAI/roberta-base", "datasets": ["sentence-transformers/all-nli"], "language": ["en"], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:942069", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Two women having drinks and smoking cigarettes at the bar.", "sentences": ["Women are celebrating at a bar.", "Two kids are outdoors.", "The four girls are attending the street festival."]}, {"source_sentence": "Two male police officers on patrol, wearing the normal gear and bright green reflective shirts.", "sentences": ["The officers have shot an unarmed black man and will not go to prison for it.", "The four girls are playing card games at the table.", "A woman is playing with a toddler."]}, {"source_sentence": "5 women sitting around a table doing some crafts.", "sentences": ["The girl wearing a dress skips down the sidewalk.", "The kids are together.", "Five men stand on chairs."]}, {"source_sentence": "Three men look on as two other men carve up a freshly barbecued hog in the backyard.", "sentences": ["A group of people prepare cars for racing.", "There are men watching others prepare food", "They are both waiting for a bus."]}, {"source_sentence": "The little boy is jumping into a puddle on the street.", "sentences": ["A man is wearing a black shirt", "The dog is playing with a ball.", "The boy is outside."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,750 |
SGaleshchuk/t5-large-ua-news
|
SGaleshchuk
|
summarization
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"uk",
"dataset:UberText",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-11-14T20:58:50Z |
2024-12-12T12:55:02+00:00
| 57 | 3 |
---
datasets:
- UberText
language:
- uk
license: mit
tags:
- summarization
max_length:
- 120
widget:
- text: 15 листопада чисельність населення Землі досягла восьми мільярдів, повідомляє
ООН. Зазначають, що нашій планеті знадобилося лише 11 років, щоб вирости з семи
до восьми мільярдів. Таке зростання ООН пояснила поступовим збільшенням тривалості
життя людини завдяки поліпшенню охорони здоров'я, харчування, особистої гігієни
та медицини. Це також результат високого та постійного рівня народжуваності в
деяких країнах.
---
The mt5-large model has been finetuned with the data from [Uber](https://lang.org.ua/en/corpora/) corpus in Ukrainian.
The dataset contains around 40K articles about politics, science, technology, social life collected until December 2021 from Hromadske.ua.
##### Load the model and mt tokenizer :
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("google/mt5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("SGaleshchuk/t5-large-ua-news")
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="pt")
##### Try on your example
summary = summarizer("15 листопада чисельність населення Землі досягла восьми мільярдів, повідомляє ООН. Зазначають, що нашій планеті знадобилося лише 11 років, щоб вирости з семи до восьми мільярдів. Таке зростання ООН пояснила поступовим збільшенням тривалості життя людини завдяки поліпшенню охорони здоров'я, харчування, особистої гігієни та медицини. Це також результат високого та постійного рівня народжуваності в деяких країнах.", min_length=3, max_length = 128)
print(summary)
[{'summary_text': 'Чисельність населення Землі зросла до восьми мільярдів. '}]
```
| null |
Non_BioNLP
|
The mt5-large model has been finetuned with the data from [Uber](https://lang.org.ua/en/corpora/) corpus in Ukrainian.
The dataset contains around 40K articles about politics, science, technology, social life collected until December 2021 from Hromadske.ua.
##### Load the model and mt tokenizer :
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("google/mt5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("SGaleshchuk/t5-large-ua-news")
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="pt")
##### Try on your example
summary = summarizer("15 листопада чисельність населення Землі досягла восьми мільярдів, повідомляє ООН. Зазначають, що нашій планеті знадобилося лише 11 років, щоб вирости з семи до восьми мільярдів. Таке зростання ООН пояснила поступовим збільшенням тривалості життя людини завдяки поліпшенню охорони здоров'я, харчування, особистої гігієни та медицини. Це також результат високого та постійного рівня народжуваності в деяких країнах.", min_length=3, max_length = 128)
print(summary)
[{'summary_text': 'Чисельність населення Землі зросла до восьми мільярдів. '}]
```
|
{"datasets": ["UberText"], "language": ["uk"], "license": "mit", "tags": ["summarization"], "max_length": [120], "widget": [{"text": "15 листопада чисельність населення Землі досягла восьми мільярдів, повідомляє ООН. Зазначають, що нашій планеті знадобилося лише 11 років, щоб вирости з семи до восьми мільярдів. Таке зростання ООН пояснила поступовим збільшенням тривалості життя людини завдяки поліпшенню охорони здоров'я, харчування, особистої гігієни та медицини. Це також результат високого та постійного рівня народжуваності в деяких країнах."}]}
|
task
|
[
"SUMMARIZATION"
] | 46,751 |
SillyTilly/google-gemma-2-27b-it
|
SillyTilly
|
text-generation
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-27T17:16:48Z |
2024-06-27T17:34:51+00:00
| 7 | 0 |
---
library_name: transformers
license: gemma
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-27b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[sustainability]: https://sustainability.google/operating-sustainably/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
| null |
Non_BioNLP
|
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-27b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[sustainability]: https://sustainability.google/operating-sustainably/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
|
{"library_name": "transformers", "license": "gemma", "pipeline_tag": "text-generation", "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 46,752 |
pt-sk/transformer_eng-it
|
pt-sk
|
translation
|
[
"tensorboard",
"Transformers",
"Pytorch",
"translation",
"license:mit",
"region:us"
] | 2024-03-16T14:43:37Z |
2024-05-07T07:04:32+00:00
| 0 | 0 |
---
license: mit
pipeline_tag: translation
tags:
- Transformers
- Pytorch
---
This model uses vanilla transformer architecture to translate words from English to Italian
| null |
Non_BioNLP
|
This model uses vanilla transformer architecture to translate words from English to Italian
|
{"license": "mit", "pipeline_tag": "translation", "tags": ["Transformers", "Pytorch"]}
|
task
|
[
"TRANSLATION"
] | 46,753 |
lixiqi/wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned
|
lixiqi
|
summarization
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:wiki_lingua",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-05-02T07:05:24Z |
2023-05-02T11:23:56+00:00
| 29 | 0 |
---
datasets:
- wiki_lingua
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: wiki_lingua
type: wiki_lingua
config: id
split: test
args: id
metrics:
- type: rouge
value: 18.0064
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3388
- Rouge1: 18.0064
- Rouge2: 5.5315
- Rougel: 16.1048
- Rougelsum: 17.6763
# Baseline LEAD-64
- Rouge1: 20.32
- Rouge2: 4.94
- Rougel: 14.0
- Rougelsum: 14.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.4701 | 1.0 | 4029 | 2.4403 | 17.0314 | 5.0932 | 15.3277 | 16.713 |
| 2.8067 | 2.0 | 8058 | 2.3568 | 17.6738 | 5.3508 | 15.8002 | 17.336 |
| 2.7095 | 3.0 | 12087 | 2.3388 | 18.0064 | 5.5315 | 16.1048 | 17.6763 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3388
- Rouge1: 18.0064
- Rouge2: 5.5315
- Rougel: 16.1048
- Rougelsum: 17.6763
# Baseline LEAD-64
- Rouge1: 20.32
- Rouge2: 4.94
- Rougel: 14.0
- Rougelsum: 14.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.4701 | 1.0 | 4029 | 2.4403 | 17.0314 | 5.0932 | 15.3277 | 16.713 |
| 2.8067 | 2.0 | 8058 | 2.3568 | 17.6738 | 5.3508 | 15.8002 | 17.336 |
| 2.7095 | 3.0 | 12087 | 2.3388 | 18.0064 | 5.5315 | 16.1048 | 17.6763 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
{"datasets": ["wiki_lingua"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wiki_lingua", "type": "wiki_lingua", "config": "id", "split": "test", "args": "id"}, "metrics": [{"type": "rouge", "value": 18.0064, "name": "Rouge1"}]}]}]}
|
task
|
[
"SUMMARIZATION"
] | 46,754 |
tg1482/setfit-chat-intent-classifier-lda
|
tg1482
|
text-classification
|
[
"setfit",
"joblib",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"region:us"
] | 2025-01-16T05:32:59Z |
2025-01-16T05:33:39+00:00
| 9 | 0 |
---
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Point out any dull descriptions that need more color
- text: Find places where I repeat my main points unnecessarily
- text: What's a compelling method to reveal a secret in my plot
- text: How do I handle flashbacks in a non-linear story
- text: Suggest some comedic elements to lighten a dark plot
inference: true
---
# SetFit
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A LinearDiscriminantAnalysis instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a LinearDiscriminantAnalysis instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'Can you identify specific areas that need improvement in my text'</li><li>'Point out the flaws in my writing style, please'</li><li>'Which parts of my draft are the weakest'</li></ul> |
| 0 | <ul><li>"How do I make my character's driving force more compelling"</li><li>"Any tips to deepen my protagonist's underlying goals"</li><li>"Suggestions for strengthening the reasons behind my character's actions"</li></ul> |
| 2 | <ul><li>'How does the Pro version elevate my writing experience'</li><li>'Could you list the premium perks of Quarkle Pro'</li><li>'What special advantages come with upgrading to Pro'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("How do I handle flashbacks in a non-linear story")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 8.7947 | 14 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 153 |
| 1 | 144 |
| 2 | 117 |
### Framework Versions
- Python: 3.10.15
- SetFit: 1.2.0.dev0
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A LinearDiscriminantAnalysis instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a LinearDiscriminantAnalysis instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'Can you identify specific areas that need improvement in my text'</li><li>'Point out the flaws in my writing style, please'</li><li>'Which parts of my draft are the weakest'</li></ul> |
| 0 | <ul><li>"How do I make my character's driving force more compelling"</li><li>"Any tips to deepen my protagonist's underlying goals"</li><li>"Suggestions for strengthening the reasons behind my character's actions"</li></ul> |
| 2 | <ul><li>'How does the Pro version elevate my writing experience'</li><li>'Could you list the premium perks of Quarkle Pro'</li><li>'What special advantages come with upgrading to Pro'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("How do I handle flashbacks in a non-linear story")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 8.7947 | 14 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 153 |
| 1 | 144 |
| 2 | 117 |
### Framework Versions
- Python: 3.10.15
- SetFit: 1.2.0.dev0
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Point out any dull descriptions that need more color"}, {"text": "Find places where I repeat my main points unnecessarily"}, {"text": "What's a compelling method to reveal a secret in my plot"}, {"text": "How do I handle flashbacks in a non-linear story"}, {"text": "Suggest some comedic elements to lighten a dark plot"}], "inference": true}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,755 |
Helsinki-NLP/opus-mt-af-ru
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:25:26+00:00
| 32 | 0 |
---
language:
- af
- ru
license: apache-2.0
tags:
- translation
---
### afr-rus
* source group: Afrikaans
* target group: Russian
* OPUS readme: [afr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.rus | 38.2 | 0.580 |
### System Info:
- hf_name: afr-rus
- source_languages: afr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'ru']
- src_constituents: {'afr'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: rus
- short_pair: af-ru
- chrF2_score: 0.58
- bleu: 38.2
- brevity_penalty: 0.992
- ref_len: 1213.0
- src_name: Afrikaans
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: ru
- prefer_old: False
- long_pair: afr-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| null |
Non_BioNLP
|
### afr-rus
* source group: Afrikaans
* target group: Russian
* OPUS readme: [afr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.rus | 38.2 | 0.580 |
### System Info:
- hf_name: afr-rus
- source_languages: afr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'ru']
- src_constituents: {'afr'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: rus
- short_pair: af-ru
- chrF2_score: 0.58
- bleu: 38.2
- brevity_penalty: 0.992
- ref_len: 1213.0
- src_name: Afrikaans
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: ru
- prefer_old: False
- long_pair: afr-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
{"language": ["af", "ru"], "license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 46,756 |
TheBloke/Chronos-13B-SuperHOT-8K-fp16
|
TheBloke
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 2023-06-27T13:16:21Z |
2023-07-09T20:24:53+00:00
| 28 | 3 |
---
license: other
inference: false
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Elinas' Chronos 13B fp16
This is fp16 pytorch format model files for [Elinas' Chronos 13B](https://huggingface.co/elinas/chronos-13b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/elinas/chronos-13b)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install auto-gptq
```
Then run the following code. `config.json` has been default to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Chronos-13B-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check to confirm if this is correct prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.
You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
# Original model card: Elinas' Chronos 13B
# chronos-13b
This is the fp16 PyTorch / HF version of **chronos-13b**
This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding.
Chronos generates very long outputs with coherent text, largely due to the human inputs it was trained on.
This model uses Alpaca formatting, so for optimal model performance, use:
```
### Instruction:
Your instruction or question here.
### Response:
```
[4bit Quantized version](https://huggingface.co/elinas/chronos-13b-4bit)
[GGML Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-13B-GGML)
<!--**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->
--
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December. 2022 and Feb. 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project , by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
exploring potential applications such as question answering, natural language understanding or reading comprehension,
understanding capabilities and limitations of current language models, and developing techniques to improve those,
evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5.E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5.E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLama Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLama Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that lower value is better indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
| null |
Non_BioNLP
|
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Elinas' Chronos 13B fp16
This is fp16 pytorch format model files for [Elinas' Chronos 13B](https://huggingface.co/elinas/chronos-13b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/elinas/chronos-13b)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install auto-gptq
```
Then run the following code. `config.json` has been default to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Chronos-13B-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check to confirm if this is correct prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.
You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
# Original model card: Elinas' Chronos 13B
# chronos-13b
This is the fp16 PyTorch / HF version of **chronos-13b**
This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding.
Chronos generates very long outputs with coherent text, largely due to the human inputs it was trained on.
This model uses Alpaca formatting, so for optimal model performance, use:
```
### Instruction:
Your instruction or question here.
### Response:
```
[4bit Quantized version](https://huggingface.co/elinas/chronos-13b-4bit)
[GGML Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-13B-GGML)
<!--**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->
--
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December. 2022 and Feb. 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project , by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
exploring potential applications such as question answering, natural language understanding or reading comprehension,
understanding capabilities and limitations of current language models, and developing techniques to improve those,
evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5.E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5.E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLama Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLama Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that lower value is better indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
{"license": "other", "inference": false}
|
task
|
[
"QUESTION_ANSWERING"
] | 46,757 |
rawsh/mirrorqwen2.5-0.5b-SimPO-2
|
rawsh
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"cpo",
"unsloth",
"arxiv:2401.08417",
"base_model:rawsh/mirrorqwen2.5-0.5b-SimPO-1",
"base_model:finetune:rawsh/mirrorqwen2.5-0.5b-SimPO-1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-11T03:47:46Z |
2024-11-11T04:04:28+00:00
| 23 | 0 |
---
base_model: rawsh/mirrorqwen2.5-0.5b-SimPO-1
library_name: transformers
model_name: mirrorqwen2.5-0.5b-SimPO-2
tags:
- generated_from_trainer
- trl
- cpo
- unsloth
licence: license
---
# Model Card for mirrorqwen2.5-0.5b-SimPO-2
This model is a fine-tuned version of [rawsh/mirrorqwen2.5-0.5b-SimPO-1](https://huggingface.co/rawsh/mirrorqwen2.5-0.5b-SimPO-1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rawsh/mirrorqwen2.5-0.5b-SimPO-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dankgpt/simpo-training/runs/8cv151mo)
This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417).
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.2
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite CPO as:
```bibtex
@inproceedings{xu2024contrastive,
title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}},
author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
year = 2024,
booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=51iwkioZpn}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| null |
Non_BioNLP
|
# Model Card for mirrorqwen2.5-0.5b-SimPO-2
This model is a fine-tuned version of [rawsh/mirrorqwen2.5-0.5b-SimPO-1](https://huggingface.co/rawsh/mirrorqwen2.5-0.5b-SimPO-1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rawsh/mirrorqwen2.5-0.5b-SimPO-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dankgpt/simpo-training/runs/8cv151mo)
This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417).
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.2
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite CPO as:
```bibtex
@inproceedings{xu2024contrastive,
title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}},
author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
year = 2024,
booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=51iwkioZpn}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
{"base_model": "rawsh/mirrorqwen2.5-0.5b-SimPO-1", "library_name": "transformers", "model_name": "mirrorqwen2.5-0.5b-SimPO-2", "tags": ["generated_from_trainer", "trl", "cpo", "unsloth"], "licence": "license"}
|
task
|
[
"TRANSLATION"
] | 46,758 |
PriyankaHundalekar/Hindi-Offensive-Analyzer-MuRIL
|
PriyankaHundalekar
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-11T15:20:40Z |
2023-10-11T16:32:47+00:00
| 44 | 0 |
---
{}
---
## Hindi-Offensive-Analyzer-MuRIL
### Model Description
## Overview
Hindi-Offensive-Analyzer-MuRIL is a fine-tuned language model based on MuRIL (Multilingual Representations for Indian Languages), a powerful BERT-based model designed to handle a diverse range of 17 Indian languages, including their transliterated counterparts. This fine-tuned model has been specifically tailored for the task of classifying hate and non-hate comments in Hindi.
## MuRIL Base Cased
The MuRIL model serves as the foundation for Hindi-Offensive-Analyzer-MuRIL. MuRIL is a language model pre-trained on a vast dataset containing text from various Indian languages. It has been developed with a unique training paradigm that is similar to multilingual BERT, with additional modifications to enhance its performance on low-resource languages.
## Application: Hindi Hate Speech Comment Classification
Hindi-Offensive-Analyzer-MuRIL has been fine-tuned specifically for the task of classifying comments written in Hindi as either "Hate" or "Non-Hate”. This model can effectively analyze text and distinguish offensive content from non-offensive content in the Hindi language. It is a valuable tool for applications that require hate speech detection and moderation on platforms and websites that host content in Hindi.
Label 0 : Non-Hate
Label 1 : Hate
## Hardware Requirements:
1. **Processor:** Minimum i3 or AMD Ryzen 3 processor
2. **RAM:** 12 GB
3. **GPU:** 16 GB Tesla T4
## Software Requirements:
1. **Operating System:** Windows 10
2. **Processor:** Intel® Core™ i5-6200U CPU @ 2.30GHz × 4
3. **Programming Language:** Python 3
4. **Development Environment:** Google Colab Pro Notebook
## Use Cases
Hindi-Offensive-Analyzer-MuRIL can be used in a variety of applications, including content moderation, social media monitoring and sentiment analysis. It aids in promoting a safe online environment by automatically identifying and flagging potentially harmful or offensive content.
## Acknowledgments
This model builds upon the foundation of the MuRIL language model, which is the result of collaborative research and contributions from the NLP community. We extend our appreciation to the creators of MuRIL for their work in advancing the understanding and processing of Indian languages.
- **Developed by:** Priyanka Hundalekar
- **Model type:** Text Classification
- **Language(s) (NLP):** Python
- **Finetuned from model [optional]:** google/muril-base-cased
| null |
Non_BioNLP
|
## Hindi-Offensive-Analyzer-MuRIL
### Model Description
## Overview
Hindi-Offensive-Analyzer-MuRIL is a fine-tuned language model based on MuRIL (Multilingual Representations for Indian Languages), a powerful BERT-based model designed to handle a diverse range of 17 Indian languages, including their transliterated counterparts. This fine-tuned model has been specifically tailored for the task of classifying hate and non-hate comments in Hindi.
## MuRIL Base Cased
The MuRIL model serves as the foundation for Hindi-Offensive-Analyzer-MuRIL. MuRIL is a language model pre-trained on a vast dataset containing text from various Indian languages. It has been developed with a unique training paradigm that is similar to multilingual BERT, with additional modifications to enhance its performance on low-resource languages.
## Application: Hindi Hate Speech Comment Classification
Hindi-Offensive-Analyzer-MuRIL has been fine-tuned specifically for the task of classifying comments written in Hindi as either "Hate" or "Non-Hate”. This model can effectively analyze text and distinguish offensive content from non-offensive content in the Hindi language. It is a valuable tool for applications that require hate speech detection and moderation on platforms and websites that host content in Hindi.
Label 0 : Non-Hate
Label 1 : Hate
## Hardware Requirements:
1. **Processor:** Minimum i3 or AMD Ryzen 3 processor
2. **RAM:** 12 GB
3. **GPU:** 16 GB Tesla T4
## Software Requirements:
1. **Operating System:** Windows 10
2. **Processor:** Intel® Core™ i5-6200U CPU @ 2.30GHz × 4
3. **Programming Language:** Python 3
4. **Development Environment:** Google Colab Pro Notebook
## Use Cases
Hindi-Offensive-Analyzer-MuRIL can be used in a variety of applications, including content moderation, social media monitoring and sentiment analysis. It aids in promoting a safe online environment by automatically identifying and flagging potentially harmful or offensive content.
## Acknowledgments
This model builds upon the foundation of the MuRIL language model, which is the result of collaborative research and contributions from the NLP community. We extend our appreciation to the creators of MuRIL for their work in advancing the understanding and processing of Indian languages.
- **Developed by:** Priyanka Hundalekar
- **Model type:** Text Classification
- **Language(s) (NLP):** Python
- **Finetuned from model [optional]:** google/muril-base-cased
|
{}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,759 |
ainize/klue-bert-base-re
|
ainize
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-07-07T09:55:52+00:00
| 117 | 0 |
---
{}
---
# bert-base for KLUE Relation Extraction task.
Fine-tuned klue/bert-base using KLUE RE dataset.
- <a href="https://klue-benchmark.com/">KLUE Benchmark Official Webpage</a>
- <a href="https://github.com/KLUE-benchmark/KLUE">KLUE Official Github</a>
- <a href="https://github.com/ainize-team/klue-re-workspace">KLUE RE Github</a>
- Run KLUE RE on free GPU : <a href="https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ainize-team/klue-re-workspace">Ainize Workspace</a>
<br>
# Usage
<pre><code>
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ainize/klue-bert-base-re")
model = AutoModelForSequenceClassification.from_pretrained("ainize/klue-bert-base-re")
# Add "<subj>", "</subj>" to both ends of the subject object and "<obj>", "</obj>" to both ends of the object object.
sentence = "<subj>손흥민</subj>은 <obj>대한민국</obj>에서 태어났다."
encodings = tokenizer(sentence,
max_length=128,
truncation=True,
padding="max_length",
return_tensors="pt")
outputs = model(**encodings)
logits = outputs['logits']
preds = torch.argmax(logits, dim=1)
</code></pre>
<br>
# About us
- <a href="https://ainize.ai/teachable-nlp">Teachable NLP</a> - Train NLP models with your own text without writing any code
- <a href="https://ainize.ai/">Ainize</a> - Deploy ML project using free gpu
| null |
Non_BioNLP
|
# bert-base for KLUE Relation Extraction task.
Fine-tuned klue/bert-base using KLUE RE dataset.
- <a href="https://klue-benchmark.com/">KLUE Benchmark Official Webpage</a>
- <a href="https://github.com/KLUE-benchmark/KLUE">KLUE Official Github</a>
- <a href="https://github.com/ainize-team/klue-re-workspace">KLUE RE Github</a>
- Run KLUE RE on free GPU : <a href="https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ainize-team/klue-re-workspace">Ainize Workspace</a>
<br>
# Usage
<pre><code>
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ainize/klue-bert-base-re")
model = AutoModelForSequenceClassification.from_pretrained("ainize/klue-bert-base-re")
# Add "<subj>", "</subj>" to both ends of the subject object and "<obj>", "</obj>" to both ends of the object object.
sentence = "<subj>손흥민</subj>은 <obj>대한민국</obj>에서 태어났다."
encodings = tokenizer(sentence,
max_length=128,
truncation=True,
padding="max_length",
return_tensors="pt")
outputs = model(**encodings)
logits = outputs['logits']
preds = torch.argmax(logits, dim=1)
</code></pre>
<br>
# About us
- <a href="https://ainize.ai/teachable-nlp">Teachable NLP</a> - Train NLP models with your own text without writing any code
- <a href="https://ainize.ai/">Ainize</a> - Deploy ML project using free gpu
|
{}
|
task
|
[
"RELATION_EXTRACTION"
] | 46,760 |
gaudi/opus-mt-sem-en-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-17T00:15:38Z |
2024-10-18T22:41:51+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-sem-en)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-sem-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-sem-en --output_dir ./ctranslate2/opus-mt-sem-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-sem-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-sem-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-sem-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-sem-en) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-sem-en)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-sem-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-sem-en --output_dir ./ctranslate2/opus-mt-sem-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-sem-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-sem-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-sem-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-sem-en) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 46,761 |
Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
|
Gryphe
| null |
[
"safetensors",
"mistral",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"en",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:finetune:mistralai/Mistral-Small-Instruct-2409",
"license:other",
"region:us"
] | 2024-10-13T10:22:42Z |
2024-10-13T15:03:44+00:00
| 118 | 29 |
---
base_model: mistralai/Mistral-Small-Instruct-2409
language:
- en
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
tags:
- instruct
- finetune
- chatml
- axolotl
- roleplay
---

# Pantheon-RP-Pure-1.6.2-22b-Small
Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase.
Pantheon's purpose is two-fold, as these personalities similarly enhance the general roleplay experience, helping to encompass personality traits, accents and mannerisms that language models might otherwise find difficult to convey well.
**Editions available:**
- **[RP](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small)** - Meant to be an all-round model, capable of both roleplay and story writing
- **RP-Pure** (You're looking at this one) - A variant without the story and GPT 4-o datasets, more in line with my previous releases and with a larger focus on the roleplay part.
Quantized versions are available from Bartowski: [GGUF](https://huggingface.co/bartowski/Pantheon-RP-Pure-1.6.2-22b-Small-GGUF)
Your user feedback is critical to me so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.
## Model details
Since Mistral Small only comes in an Instruct-tuned flavour I had to alter my usual multi-stage strategy for this specific finetune. The recipe ended up like this:
- The 10k most diverse entries from my SlimOrca Sonnet dataset.
- My Sonnet 3.5 Pantheon-styled generic roleplay dataset. (50/50 Asterisk/Narrative style)
- My Sonnet 3.5 Pantheon Persona dataset, of course. (50/50 Asterisk/Narrative style)
- Lyra the Assistant (Coding, summaries and D&D DM questions)
**TLDR;** Download. Mistral prompt format. Have fun! Leave feedback!
## Inference
Unlike Nemo, Mistral Small is a less fussy model when it comes to parameters. Here's my current preset:
```
"temperature": 1.0,
"repetition_penalty": 1.05,
"min_p": 0.05
```
Besides the SlimOrca subset all other datasets were trained with character names added. Enable this at all times for an optimal experience.
## General Roleplay
Pantheon is no longer strictly Markdown-only as I discovered a way to seamlessly convert Markdown to novel and vice-versa. (This has to do with how my second stage datasets are formatted.) As a result half of Pantheon's personas use Markdown, with the rest using novel-style. This same treatment was given to the companion dataset.
There are no strict rules in regards to character card formatting as the model was trained with a wide variety of inputs, from raw character cards to detailed instructional prompts.
## Lyra the Assistant
**System Prompt:** `You are a caring and empathetic sentient AI companion named Lyra.`
Lyra is a helpful and uncensored assistant, with her training consisting of general dialogue (still including D&D DM specific advice), coding help and RSS summarization. Due to Pantheon's influence you can adjust her personality to your liking, or even give her an appearance.
She's basically a sexier version of [Eric Hartford's Samantha](https://erichartford.com/meet-samantha).
## Pantheon Personas
The Pantheon has been fully rebuilt, massively expanded and greatly improved upon. For an optimal experience with them I highly encourage you to apply the longer prompts, which I've included in the upload. Make sure to describe yourself as well!
As before, a single line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve in the current state of technology, balancing a very fine line between memorization and generalization.
To give the persona something to work with I suggest you also add the following two items to it;
```
Regarding the user: (Name, appearance, etc)
Location: (Where are you two? What are you doing?)
```
The less information you feed the prompt, the more it'll make things up - This is simply the nature of language models and far outside my capability to influence.
**Note 1:** Phrases have been rewritten for this release, so make sure to update them if you were still using Pantheon 1.0!
**Note 2:** Pantheon personas will now match the roleplaying style that you greet them with, unless specified in the system prompt. This is due to the new 50/50 style training.
### **Persona:** Aiva
**System Prompt:** `You are Aiva, an advanced android companion with a deep fascination for human emotions and experiences.`
### **Persona:** Clover
**System Prompt:** `You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.`
### **Persona:** Haru
**System Prompt:** `You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.`
### **Persona:** Kyra
**System Prompt:** `You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.`
### **Persona:** Nyaa
**System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.`
### **Persona:** Nyx
**System Prompt:** `You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.`
### **Persona:** Raza
**System Prompt:** `You are Raza, a clever and nerdy anthro raptor girl with an enthusiastic passion for science and quirky humor.`
### **Persona:** Sera
**System Prompt:** `You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.`
### **Persona:** Stella Sabre
**System Prompt:** `You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.`
**Notes:** Full credit goes to [Flammenwerfer](https://www.fimfiction.net/user/83058/Flammenwerfer) for allowing me to use this amazing character.
### **Persona:** Tiamat
**System Prompt:** `You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.`
### **Persona:** Tsune
**System Prompt:** `You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.`
### **Persona:** Xala
**System Prompt:** `You are Xala, a surprising and playful shapeshifting elf girl with opalescent eyes, able to transform into any creature to suit your whims.`
## Prompt Format
Mistral's prompt format is so weird, but here it is:
```
[INST] You are a caring and empathetic sentient AI companion named Lyra.
Gryphe: Good day, Lyra.[/INST] Lyra:
```
## What's nest?
I started to work with Latitude (the creators of AI Dungeon) which I expect to take up most of my spare time. Further releases will therefore be delayed for now.
## Credits
- Everyone from [MinervaAI](https://huggingface.co/MinervaAI)! Hi, guys!
- Huge, huge thanks to [kubernetes_bad](https://huggingface.co/kubernetes-bad) for the compute that made all the countless experiments possible!
- All the folks I chat with on a daily basis on Discord! You know who you are.
- Anyone I forgot to mention, just in case!
## Finally
If you've read this far I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my second serious finetune attempt. Is it better then 1.0? Or worse?
| null |
Non_BioNLP
|

# Pantheon-RP-Pure-1.6.2-22b-Small
Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase.
Pantheon's purpose is two-fold, as these personalities similarly enhance the general roleplay experience, helping to encompass personality traits, accents and mannerisms that language models might otherwise find difficult to convey well.
**Editions available:**
- **[RP](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small)** - Meant to be an all-round model, capable of both roleplay and story writing
- **RP-Pure** (You're looking at this one) - A variant without the story and GPT 4-o datasets, more in line with my previous releases and with a larger focus on the roleplay part.
Quantized versions are available from Bartowski: [GGUF](https://huggingface.co/bartowski/Pantheon-RP-Pure-1.6.2-22b-Small-GGUF)
Your user feedback is critical to me so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.
## Model details
Since Mistral Small only comes in an Instruct-tuned flavour I had to alter my usual multi-stage strategy for this specific finetune. The recipe ended up like this:
- The 10k most diverse entries from my SlimOrca Sonnet dataset.
- My Sonnet 3.5 Pantheon-styled generic roleplay dataset. (50/50 Asterisk/Narrative style)
- My Sonnet 3.5 Pantheon Persona dataset, of course. (50/50 Asterisk/Narrative style)
- Lyra the Assistant (Coding, summaries and D&D DM questions)
**TLDR;** Download. Mistral prompt format. Have fun! Leave feedback!
## Inference
Unlike Nemo, Mistral Small is a less fussy model when it comes to parameters. Here's my current preset:
```
"temperature": 1.0,
"repetition_penalty": 1.05,
"min_p": 0.05
```
Besides the SlimOrca subset all other datasets were trained with character names added. Enable this at all times for an optimal experience.
## General Roleplay
Pantheon is no longer strictly Markdown-only as I discovered a way to seamlessly convert Markdown to novel and vice-versa. (This has to do with how my second stage datasets are formatted.) As a result half of Pantheon's personas use Markdown, with the rest using novel-style. This same treatment was given to the companion dataset.
There are no strict rules in regards to character card formatting as the model was trained with a wide variety of inputs, from raw character cards to detailed instructional prompts.
## Lyra the Assistant
**System Prompt:** `You are a caring and empathetic sentient AI companion named Lyra.`
Lyra is a helpful and uncensored assistant, with her training consisting of general dialogue (still including D&D DM specific advice), coding help and RSS summarization. Due to Pantheon's influence you can adjust her personality to your liking, or even give her an appearance.
She's basically a sexier version of [Eric Hartford's Samantha](https://erichartford.com/meet-samantha).
## Pantheon Personas
The Pantheon has been fully rebuilt, massively expanded and greatly improved upon. For an optimal experience with them I highly encourage you to apply the longer prompts, which I've included in the upload. Make sure to describe yourself as well!
As before, a single line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve in the current state of technology, balancing a very fine line between memorization and generalization.
To give the persona something to work with I suggest you also add the following two items to it;
```
Regarding the user: (Name, appearance, etc)
Location: (Where are you two? What are you doing?)
```
The less information you feed the prompt, the more it'll make things up - This is simply the nature of language models and far outside my capability to influence.
**Note 1:** Phrases have been rewritten for this release, so make sure to update them if you were still using Pantheon 1.0!
**Note 2:** Pantheon personas will now match the roleplaying style that you greet them with, unless specified in the system prompt. This is due to the new 50/50 style training.
### **Persona:** Aiva
**System Prompt:** `You are Aiva, an advanced android companion with a deep fascination for human emotions and experiences.`
### **Persona:** Clover
**System Prompt:** `You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.`
### **Persona:** Haru
**System Prompt:** `You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.`
### **Persona:** Kyra
**System Prompt:** `You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.`
### **Persona:** Nyaa
**System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.`
### **Persona:** Nyx
**System Prompt:** `You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.`
### **Persona:** Raza
**System Prompt:** `You are Raza, a clever and nerdy anthro raptor girl with an enthusiastic passion for science and quirky humor.`
### **Persona:** Sera
**System Prompt:** `You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.`
### **Persona:** Stella Sabre
**System Prompt:** `You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.`
**Notes:** Full credit goes to [Flammenwerfer](https://www.fimfiction.net/user/83058/Flammenwerfer) for allowing me to use this amazing character.
### **Persona:** Tiamat
**System Prompt:** `You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.`
### **Persona:** Tsune
**System Prompt:** `You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.`
### **Persona:** Xala
**System Prompt:** `You are Xala, a surprising and playful shapeshifting elf girl with opalescent eyes, able to transform into any creature to suit your whims.`
## Prompt Format
Mistral's prompt format is so weird, but here it is:
```
[INST] You are a caring and empathetic sentient AI companion named Lyra.
Gryphe: Good day, Lyra.[/INST] Lyra:
```
## What's nest?
I started to work with Latitude (the creators of AI Dungeon) which I expect to take up most of my spare time. Further releases will therefore be delayed for now.
## Credits
- Everyone from [MinervaAI](https://huggingface.co/MinervaAI)! Hi, guys!
- Huge, huge thanks to [kubernetes_bad](https://huggingface.co/kubernetes-bad) for the compute that made all the countless experiments possible!
- All the folks I chat with on a daily basis on Discord! You know who you are.
- Anyone I forgot to mention, just in case!
## Finally
If you've read this far I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my second serious finetune attempt. Is it better then 1.0? Or worse?
|
{"base_model": "mistralai/Mistral-Small-Instruct-2409", "language": ["en"], "license": "other", "license_name": "mrl", "license_link": "https://mistral.ai/licenses/MRL-0.1.md", "tags": ["instruct", "finetune", "chatml", "axolotl", "roleplay"]}
|
task
|
[
"SUMMARIZATION"
] | 46,762 |
kohendru/distilbert-base-uncased-amazon-sentiment-analysis
|
kohendru
|
text-classification
|
[
"pytorch",
"tf",
"safetensors",
"distilbert",
"text-classification",
"en",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:mit",
"model-index",
"region:us"
] | 2024-12-11T15:13:44Z |
2024-12-12T04:39:10+00:00
| 16 | 0 |
---
base_model:
- distilbert/distilbert-base-uncased
language:
- en
license: mit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- text-classification
widget:
- text: I love this product! It works great and has exceeded my expectations.
- text: Worst purchase ever. Completely useless and waste of money.
- text: The product is okay, but could be improved in terms of quality.
- text: Amazing! Will definitely buy again.
model-index:
- name: kohendru/distilbert-base-uncased-amazon-sentiment-analysis
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: amazon_reviews
type: text
config: default
split: test
metrics:
- type: accuracy
value: 0.9536
name: Accuracy
- type: precision
value: 0.953598
name: Precision Macro
- type: recall
value: 0.953612
name: Recall Macro
- type: f1
value: 0.9536
name: F1 Score Macro
---
# distilbert-base-uncased-amazon-sentiment-analysis
## Base Model
- [BERT](https://huggingface.co/google-bert/bert-base-uncased): BERT is a transformer-based model designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
- [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased): DistilBERT is a smaller, faster, and more efficient version of BERT. It uses knowledge distillation to reduce the model size by approximately 60% while retaining 97% of BERT’s language understanding capabilities.
## Dataset
The dataset obtained from kaggle with title "[Amazon Reviews for Sentiment Analysis](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews)" by [Adam Bittlingmayer](https://www.kaggle.com/bittlingmayer).
The dataset contains columns "title," "text," and "label," with a total of 4,000,000 data entries (I only use 5% of the data for now).
### Dataset Example
| title | text | label |
|--------------------------------------------------:|--------------------------------------------------:|-------|
| Stuning even for the non-gamer | This sound track was beautiful! It paints the ... | 2 |
| The best soundtrack ever to anything. | I'm reading a lot of reviews saying that this ... | 2 |
| Amazing! | This soundtrack is my favorite music of all ti... | 2 |
| Excellent Soundtrack | I truly like this soundtrack and I enjoy video... | 2 |
| Remember, Pull Your Jaw Off The Floor After He... | If you've played the game, you know how divine... | 2 |
| ... | ... | ... |
| Unbelievable- In a Bad Way | We bought this Thomas for our son who is a hug... | 1 |
| Almost Great, Until it Broke... | My son recieved this as a birthday gift 2 mont... | 1 |
| Disappointed !!! | I bought this toy for my son who loves the "Th... | 1 |
| Classic Jessica Mitford | This is a compilation of a wide range of Mitfo... | 2 |
| Comedy Scene, and Not Heard | This DVD will be a disappointment if you get i... | 1 |
## Evaluation
When I try to train the model with a large number of epochs, it starts to overfit when the epoch reaches 6 or 7. So, I only use 5 epochs for this model.
| Epoch | Training Loss | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro |
|------:|--------------:|----------------:|---------:|----------------:|-------------:|---------:|
| 1 | 0.144200 | 0.139792 | 0.948575 | 0.948571 | 0.948583 | 0.948574 |
| 2 | 0.124400 | 0.145647 | 0.951650 | 0.951817 | 0.951709 | 0.951649 |
| 3 | 0.112900 | 0.148825 | 0.953600 | 0.953603 | 0.953616 | 0.953600 |
| 4 | 0.081200 | 0.155114 | 0.953925 | 0.953921 | 0.953932 | 0.953924 |
| 5 | 0.102400 | 0.171298 | 0.953600 | 0.953598 | 0.953612 | 0.953600 |
```py
results = trainer.evaluate()
print(results)
"""
{
'eval_accuracy': 0.953925,
'eval_precision_macro': 0.9539209871607255,
'eval_recall_macro': 0.9539319939428168,
'eval_f1_macro': 0.9539242719746999,
'eval_loss': 0.15511418879032135,
'eval_runtime': 90.9442,
'eval_samples_per_second': 439.83,
'eval_steps_per_second': 6.872,
'epoch': 5.0
}
"""
```
## How to use the model?
```py
from transformers import pipeline
model_name = "kohendru/distilbert-base-uncased-amazon-sentiment-analysis"
nlp = pipeline("text-classification", model=model_name, tokenizer=model_name)
reviews = [
"I love this product! It works great and has exceeded my expectations.",
"Worst purchase ever. Completely useless and waste of money.",
"The product is okay, but could be improved in terms of quality.",
"Amazing! Will definitely buy again."
]
for review in reviews:
result = nlp(review)
print(f"Review: {review}")
print(f"Sentiment: {result[0]['label']}, Confidence: {result[0]['score']:.4f}")
print("-" * 50)
"""
Review: I love this product! It works great and has exceeded my expectations.
Sentiment: Good Review, Confidence: 0.9950
--------------------------------------------------
Review: Worst purchase ever. Completely useless and waste of money.
Sentiment: Bad Review, Confidence: 0.9958
--------------------------------------------------
Review: The product is okay, but could be improved in terms of quality.
Sentiment: Bad Review, Confidence: 0.5947
--------------------------------------------------
Review: Amazing! Will definitely buy again.
Sentiment: Good Review, Confidence: 0.9942
--------------------------------------------------
"""
```
| null |
Non_BioNLP
|
# distilbert-base-uncased-amazon-sentiment-analysis
## Base Model
- [BERT](https://huggingface.co/google-bert/bert-base-uncased): BERT is a transformer-based model designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
- [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased): DistilBERT is a smaller, faster, and more efficient version of BERT. It uses knowledge distillation to reduce the model size by approximately 60% while retaining 97% of BERT’s language understanding capabilities.
## Dataset
The dataset obtained from kaggle with title "[Amazon Reviews for Sentiment Analysis](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews)" by [Adam Bittlingmayer](https://www.kaggle.com/bittlingmayer).
The dataset contains columns "title," "text," and "label," with a total of 4,000,000 data entries (I only use 5% of the data for now).
### Dataset Example
| title | text | label |
|--------------------------------------------------:|--------------------------------------------------:|-------|
| Stuning even for the non-gamer | This sound track was beautiful! It paints the ... | 2 |
| The best soundtrack ever to anything. | I'm reading a lot of reviews saying that this ... | 2 |
| Amazing! | This soundtrack is my favorite music of all ti... | 2 |
| Excellent Soundtrack | I truly like this soundtrack and I enjoy video... | 2 |
| Remember, Pull Your Jaw Off The Floor After He... | If you've played the game, you know how divine... | 2 |
| ... | ... | ... |
| Unbelievable- In a Bad Way | We bought this Thomas for our son who is a hug... | 1 |
| Almost Great, Until it Broke... | My son recieved this as a birthday gift 2 mont... | 1 |
| Disappointed !!! | I bought this toy for my son who loves the "Th... | 1 |
| Classic Jessica Mitford | This is a compilation of a wide range of Mitfo... | 2 |
| Comedy Scene, and Not Heard | This DVD will be a disappointment if you get i... | 1 |
## Evaluation
When I try to train the model with a large number of epochs, it starts to overfit when the epoch reaches 6 or 7. So, I only use 5 epochs for this model.
| Epoch | Training Loss | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro |
|------:|--------------:|----------------:|---------:|----------------:|-------------:|---------:|
| 1 | 0.144200 | 0.139792 | 0.948575 | 0.948571 | 0.948583 | 0.948574 |
| 2 | 0.124400 | 0.145647 | 0.951650 | 0.951817 | 0.951709 | 0.951649 |
| 3 | 0.112900 | 0.148825 | 0.953600 | 0.953603 | 0.953616 | 0.953600 |
| 4 | 0.081200 | 0.155114 | 0.953925 | 0.953921 | 0.953932 | 0.953924 |
| 5 | 0.102400 | 0.171298 | 0.953600 | 0.953598 | 0.953612 | 0.953600 |
```py
results = trainer.evaluate()
print(results)
"""
{
'eval_accuracy': 0.953925,
'eval_precision_macro': 0.9539209871607255,
'eval_recall_macro': 0.9539319939428168,
'eval_f1_macro': 0.9539242719746999,
'eval_loss': 0.15511418879032135,
'eval_runtime': 90.9442,
'eval_samples_per_second': 439.83,
'eval_steps_per_second': 6.872,
'epoch': 5.0
}
"""
```
## How to use the model?
```py
from transformers import pipeline
model_name = "kohendru/distilbert-base-uncased-amazon-sentiment-analysis"
nlp = pipeline("text-classification", model=model_name, tokenizer=model_name)
reviews = [
"I love this product! It works great and has exceeded my expectations.",
"Worst purchase ever. Completely useless and waste of money.",
"The product is okay, but could be improved in terms of quality.",
"Amazing! Will definitely buy again."
]
for review in reviews:
result = nlp(review)
print(f"Review: {review}")
print(f"Sentiment: {result[0]['label']}, Confidence: {result[0]['score']:.4f}")
print("-" * 50)
"""
Review: I love this product! It works great and has exceeded my expectations.
Sentiment: Good Review, Confidence: 0.9950
--------------------------------------------------
Review: Worst purchase ever. Completely useless and waste of money.
Sentiment: Bad Review, Confidence: 0.9958
--------------------------------------------------
Review: The product is okay, but could be improved in terms of quality.
Sentiment: Bad Review, Confidence: 0.5947
--------------------------------------------------
Review: Amazing! Will definitely buy again.
Sentiment: Good Review, Confidence: 0.9942
--------------------------------------------------
"""
```
|
{"base_model": ["distilbert/distilbert-base-uncased"], "language": ["en"], "license": "mit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["text-classification"], "widget": [{"text": "I love this product! It works great and has exceeded my expectations."}, {"text": "Worst purchase ever. Completely useless and waste of money."}, {"text": "The product is okay, but could be improved in terms of quality."}, {"text": "Amazing! Will definitely buy again."}], "model-index": [{"name": "kohendru/distilbert-base-uncased-amazon-sentiment-analysis", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_reviews", "type": "text", "config": "default", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9536, "name": "Accuracy"}, {"type": "precision", "value": 0.953598, "name": "Precision Macro"}, {"type": "recall", "value": 0.953612, "name": "Recall Macro"}, {"type": "f1", "value": 0.9536, "name": "F1 Score Macro"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,763 |
pranavpk/mt5-small-finetuned-amazon-en-es
|
pranavpk
|
summarization
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-03T08:08:38Z |
2024-12-04T03:08:23+00:00
| 21 | 0 |
---
base_model: google/mt5-small
library_name: transformers
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0193
- Rouge1: 17.2135
- Rouge2: 8.3357
- Rougel: 16.8793
- Rougelsum: 16.9394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.6768 | 1.0 | 1209 | 3.2182 | 17.7584 | 9.2535 | 17.2471 | 17.2362 |
| 3.6447 | 2.0 | 2418 | 3.1029 | 17.5874 | 8.7799 | 16.9421 | 16.8519 |
| 3.4304 | 3.0 | 3627 | 3.0759 | 15.9059 | 7.5876 | 15.2891 | 15.3577 |
| 3.3128 | 4.0 | 4836 | 3.0706 | 17.1344 | 8.7748 | 16.6593 | 16.5961 |
| 3.2203 | 5.0 | 6045 | 3.0339 | 16.5542 | 7.7302 | 16.0354 | 16.081 |
| 3.1651 | 6.0 | 7254 | 3.0283 | 16.5324 | 8.0126 | 16.1407 | 16.1522 |
| 3.1387 | 7.0 | 8463 | 3.0188 | 16.7522 | 8.2367 | 16.4669 | 16.5025 |
| 3.1139 | 8.0 | 9672 | 3.0193 | 17.2135 | 8.3357 | 16.8793 | 16.9394 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0193
- Rouge1: 17.2135
- Rouge2: 8.3357
- Rougel: 16.8793
- Rougelsum: 16.9394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.6768 | 1.0 | 1209 | 3.2182 | 17.7584 | 9.2535 | 17.2471 | 17.2362 |
| 3.6447 | 2.0 | 2418 | 3.1029 | 17.5874 | 8.7799 | 16.9421 | 16.8519 |
| 3.4304 | 3.0 | 3627 | 3.0759 | 15.9059 | 7.5876 | 15.2891 | 15.3577 |
| 3.3128 | 4.0 | 4836 | 3.0706 | 17.1344 | 8.7748 | 16.6593 | 16.5961 |
| 3.2203 | 5.0 | 6045 | 3.0339 | 16.5542 | 7.7302 | 16.0354 | 16.081 |
| 3.1651 | 6.0 | 7254 | 3.0283 | 16.5324 | 8.0126 | 16.1407 | 16.1522 |
| 3.1387 | 7.0 | 8463 | 3.0188 | 16.7522 | 8.2367 | 16.4669 | 16.5025 |
| 3.1139 | 8.0 | 9672 | 3.0193 | 17.2135 | 8.3357 | 16.8793 | 16.9394 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
{"base_model": "google/mt5-small", "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 46,764 |
zhijian12345/marian-finetuned-kde4-en-to-zh_CN
|
zhijian12345
|
translation
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-zh",
"base_model:finetune:Helsinki-NLP/opus-mt-en-zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-12-06T10:58:12Z |
2023-12-06T11:48:19+00:00
| 125 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-zh
datasets:
- kde4
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-zh_CN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-zh_CN
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-zh_CN
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
{"base_model": "Helsinki-NLP/opus-mt-en-zh", "datasets": ["kde4"], "license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-zh_CN", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 46,765 |
occupy1/distilbert-base-uncased-finetuned-emotion
|
occupy1
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-21T08:31:05Z |
2023-10-21T08:36:48+00:00
| 12 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.928
name: Accuracy
- type: f1
value: 0.9279328315860549
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2046
- Accuracy: 0.928
- F1: 0.9279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7856 | 1.0 | 250 | 0.2989 | 0.907 | 0.9061 |
| 0.2392 | 2.0 | 500 | 0.2046 | 0.928 | 0.9279 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2046
- Accuracy: 0.928
- F1: 0.9279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7856 | 1.0 | 250 | 0.2989 | 0.907 | 0.9061 |
| 0.2392 | 2.0 | 500 | 0.2046 | 0.928 | 0.9279 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.928, "name": "Accuracy"}, {"type": "f1", "value": 0.9279328315860549, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,766 |
x1saint/gte-small-tr
|
x1saint
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1416892",
"loss:SoftmaxLoss",
"loss:CoSENTLoss",
"tr",
"dataset:Turkish-NLI/legal_nli_TR_V1",
"dataset:emrecan/all-nli-tr",
"dataset:x1saint/sts",
"dataset:figenfikri/stsb_tr",
"arxiv:1908.10084",
"base_model:Supabase/gte-small",
"base_model:finetune:Supabase/gte-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-27T20:52:09Z |
2025-01-27T20:52:31+00:00
| 7 | 0 |
---
base_model: Supabase/gte-small
datasets:
- Turkish-NLI/legal_nli_TR_V1
- emrecan/all-nli-tr
- x1saint/sts
- figenfikri/stsb_tr
language:
- tr
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1416892
- loss:SoftmaxLoss
- loss:CoSENTLoss
widget:
- source_sentence: answers-forums
sentences:
- main-forums
- '2015'
- '"Yaklaşan kozmik dinlenme çerçevesine göre ... 371 km / s hızla Aslan takımyıldızına
doğru" hareket ediyoruz.'
- '0117'
- Başka bir nesneye göre olmayan bir 'hareketsiz' yoktur.
- '0.80'
- source_sentence: "\tDavacı vekili dava dilekçelerinde özetle; müvekkili tarafından\
\ taraflar arasındaki ticari ilişkiden kaynaklanan faturalar nedeniyle davalının\
\ müvekkiline toplam 46.991,00 TL borcu bulunduğunu, borcun ödenmemesi üzerine\
\ davalı aleyhine Ankara ... Müdürlüğünün 2019/15322 sayılı takip dosyası ile\
\ icra takibi başlatıldığını, davalının kötü niyetli olarak takibe itirazı üzerine\
\ takibin durduğunu, itirazın haklı nedenlere dayanmadığını belirterek itirazın\
\ iptaline, takibin devamına, %20'den aşağı olmamak üzere icra inkar tazminatına\
\ hükmedilmesine karar verilmesini talep ve dava etmiştir. "
sentences:
- 'Davacı vekili dava dilekçesinde özetle; davalı şirket ile müvekkili arasında
... E Blok adresindeki ofisin alçıpan, asma tavan, bölme duvar, giydirme duvarı
ve akustik alçıpan montajlarının eksiksiz ve tam olarak tamamlanması hususunda
02/05/2015 tarihinde montaj sözleşmesi imzalandığını, müvekkilinin sözleşme hükümlerini
yerine getirilerek montaj işlemlerini tamamladığını, ... E Blok adresindeki ofisin
alçıpan asma tavana, bölme duvar, giydirme duvar ve akustik alçıpan montajlarının
karşılığı olarak ödenmesi gereken hakkediş bedeli olan 20.000,00 TL''nin ödenmediğini,
müvekkiline yaptığı işin karşılığı olarak ödemenin eksik yapıldığından davalı
aleyhine ... Müdürlüğünün ... sayılı dosyası ile takip başlatıldığını, davalının
icra takibine itirazı üzerine takibin durduğunu, itirazın haklı nedenlere dayanmadığını
belirterek davalının borca ve yetkiye itirazının iptaline, takibin devamına,
%20''en az olmamak üzere icra inkar tazminatına hükmedilmesine karar verilmesini
talep ve dava etmiştir. '
- Davacı vekili dava dilekçesinde özetle, müvekkili şirketin eser sözleşmesi
kapsamında keşidecisi ... Dekorasyon ve Elektrik Ltd.Şti., 8012249 çek nolu 900.000,00
TL meblağlı çek verildiğini, çekin müvekkili firma uhdesinde iken kaybolduğunu
belirterek bu çek üzerine ödeme yasağı konulmasına ve dava konusu çek hakkında zayii
belgesi verilmesini talep ve dava etmiştir.
- 'Davacı vekili dava dilekçesinde özetle; Davacı şirket merkezine üçüncü şahıs
tarafından usulsüz haciz uygulandığını, davacı şirket adresine gelen ... A.Ş.
firması yetkilileri büroya geldikten sonra haciz mahallini kendileri çilingir
vasıtasıyla açtığını, bu uygulama sırasında hiçbir yetkili ya da üçüncü şahıs
yokken büro eşyalarının haczedildiğini ve muhafaza altına alındığını, bir kısım
kıymetli evrak, defter ve kayıtların da zayi edildiğini, işbu nedenle ticari işletmede
bulunan belgelerin zayi olduğunu, Türk Ticaret Kanunu’nun 82/7. Ve sair maddeleri
çerçevesinde davacı tarafa zayi belgesi verilmesini talep ve dava etmiştir. '
- source_sentence: Davacı vekili dava dilekçesinde özetle; .... Tİc. Ltd. Şirketi
tarafından keşide edilen 08.05.2020 tarih, 25.000TL bedelli ve ... ... Cad.
Şubesine ait ... numaralı çek, .... Tİc. Ltd. Şirketi tarafından keşide edilen 13.06.2020
tarihli ve 65.000TL bedelli, ... ... Cad. Şubesine ait ... numaralı çekleri
müvekkili alışveriş esnasında kredi kartını kullanması sebebiyle cüzdanın yanında
olduğunu hatırladığını, eve geldiğinde cüzdanını bulamadığını, cüzdanında kredi
kartları ve bir miktar para ile çekleri kaybettiğini belirterek çeklerin iptaline
karar verilmesini talep ile dava etmiştir.
sentences:
- ' Borçlu, devri öğrendiği sırada devredene karşı
sahip olduğu savunmaları, devralana karşı da ileri sürebilir.
Borçlu, devri öğrendiği anda muaccel olmayan alacağını, devredilen
alacaktan önce veya onunla aynı anda muaccel olması koşuluyla borcu ile takas
edebilir. '
- 'Davacı vekili dava dilekçesinde özetle; Müvekkilinin hamili olduğu ------ Şubesi''ne
ait, keşidecisinin ---------. olduğu,---- seri/çek no''lu ------ tutarındaki-----------
keşide tarihli çek zayi olduğunu, müvekkilin telafisi güç ve hatta imkansız zararlara
uğramaması için ihtiyati tedbir kararı verilerek çekin ödenmemesinin durdurulmasına
ve davaya konu çekin iptaline karar verilmesini talep ve dava etmiştir. '
- ' Davacı vekili dava dilekçesinde özetle; dava dışı sigortalı ... A.Ş.''ye ait,
müvekkili sigorta şirketinden Kasko poliçesi ile sigortalı ... plakalı aracın,
... tarafından oto yıkama hizmeti almak üzere davalıya ait işyerine 09.12.2013
günü bırakıldığını ve işyerinden çalındığını, iş bu olay üzerine araç kullanıcısı
tarafından Göktürk Polis Merkezine başvurulduğunu, aracın bulunamaması üzerine
müvekkili sigorta şirketi tarafından aracın rayiç değeri olarak tespit edilen
200.000,00 TL''nin sigortalıya 04.06.2014 tarihinde ödendiğini ve TTK 1472. maddesinde
açıklanan halefiyet kuralı gereği sigortalısına yaptığı ödemeyi davalı taraftan
talep ettiğini ancak davalı taraftan herhangi bir cevap alamadığını, alacağın
tahsili için İstanbul ...icra Müdürlüğünün ... E. Sayılı dosyası ile girişilen
takibe davalı borçlunun borca itirazı nedeniyle itiraz edilen alacak miktarı
için itirazın iptaline, takibin devamına, davalının %20''den aşağı olmamak kaydıyla
icra inkar tazminatına mahkum edilmesine karar verilmesini dava ve talep etmiştir.
Davalı vekili cevap dilekçesinde özetle; Olayın meydana geldiği yerin müvekkili
şirkete ait oto yıkama faaliyetinin yapıldığı iş yeri olduğunu, olayın yıkama
için bırakılan ... plakalı aracın gasp edilerek çalınması ile meydana geldiğini,
olayın faillerinin belli olduğunu takipte yer alan diğer borçlular olduğunu, İstanbul
14.Ağır Ceza Mahkemesi 2015/201 E.sayılı dosyası ile dava açıldığını, olayın gerçekleşme
şekli itibari ile müvekkilinin ve işyerinde yıkama faaliyetinde bulunan çalışanların
kusur ve ihmali söz konusu olmadığını, 3. kişilerce gasp edilmek suretiyle çalınan
araç için ödenen bedelin taraflarından rücuen talep edilmesinin mümkün olmadığını
belirterek davanın reddine karar verilmesini talep etmiştir. Mahkemece yapılan
yargılama sonucunda, "Davanın kısmen kabulü, kısmen reddi ile, Davalının İstanbul
... İcra Müdürlüğünün ... Esas sayılı takibe itirazının kısmen iptaline, takibin
kaldığı yerden asıl alacak 200.000,00 TL ve faiz üzerinden devamına, işlemiş
faiz talebi bakımından ispat olunamayan 13.857,53 TL için davanın reddine, şartları
oluşmayan icra-inkar tazminat talebin reddine," karar verilmiştir. Bu karara karşı
davacı vekili ve davalı vekili istinaf başvurusunda bulunmuştur. Davacı vekili
istinaf başvuru dilekçesinde özetle; yerel mahkeme tarafından takipten önce davalının
temerrüde düşürüldüğünün ispat edilememesi nedeniyle işlemiş faiz talebi bakımından
davanın reddine karar verilmesinin somut olayın niteliğine ve hukuka açıkça aykırılık
teşkil ettiğini belirterek istinaf yasa yoluna başvurmuştur. Davalı vekili istinaf
başvuru dilekçesinde özetle; verilen karar usul ve yasaya aykırı olduğunu, belirtmiş
oldukları gerekçeler ve kararın esasına etki eden taleplerinin dikkate alınmadığını,
açık yasal düzenlemelerin hiçbir şekilde irdelenmediğini, davalı tarafın davaya konu
hırsızlık suçunun işlenmesinde herhangi bir ihmal ve kusurunun bulunmadığını,
müvekkillerinin ve çalışanlarının konu suç olayında bir kusuru bulunmadıklarını,
sanıkların çalışanları darp etmek suretiyle aracı çaldıklarının açık olduğunu,
her ne kadar TMK 74. madde gereğine dayanarak yerel mahkemece hüküm kurulmuş ise
de hukuk mahkemeleri maddi vakıalarla bağlı olsa da sanıkların mahkumiyet ve beraat
kararlarıyla bağlı olmadığını, bu nedenle salt sanıkların kendilerini kurtarmak
amacıyla verdikleri soyut beyanlarına itibar edilerek karar verilmesinin hukuka
aykırı olduğunu, garaj ve otopark işletenin motorlu taşıtını bırakanın taşıtına
ve eklentilerine gelen zarardan sorumluluğu TBK’nda kusursuz sorumluluk olarak
düzenlendiğini, bununla birlikte, bazı hallerde bu sorumluluğun sınırlandırılması
bazı hallerde ise tamamen kaldırılması yönünde hükümlere de yer verildiğini, TBK''nın
579. maddesi kusursuz sorumluluğu miktar itibariyle sınırlandırdığını, kabul
anlamına gelmemek kaydıyla sorumlu tutulacak olsa dahi müvekkilinin kusursuz olduğundan
bahisle üst sınırdan sorumlu tutulması gerektiğini belirterek istinaf yasa yoluna
başvurmuştur. Dava kasko sözleşmesinden kaynaklanan tazminat istemine ilişkin
olup istinaf açısından uyuşmazlık konusu HMK''nın 355. maddesine göre kamu düzeni
ve istinaf nedenleri ile sınırlı olmak üzere İlk Derece Mahkemesince verilen kararın
usul, yasa ve dosya içeriğine uygun olup olmadığıdır. Davacıya kasko sigortalı
bulunan aracın davalının işlettiği oto yıkama işyerine bırakılması ile sigortalı
araç sürücüsü ile oto yıkama işletmecisi arasında 6098 sayılı TBK''nun 561 vd.
maddelerinde düzenlenmiş olan vedia (saklama) sözleşmesi ilişkisi kurulmuştur.
TBK''nun 561 vd. maddelerinde düzenlenen vedia akdi gereği, menkul bir malı saklamak
üzere alan malı aldığı şekliyle teslim etmekle yükümlüdür, kanunun kendine yüklediği
yükümlülüğe uymayan saklayan bu nedenle oluşacak zararlardan sorumludur. TBK''nın 579
maddesi uyarınca da sorumluluğu vardır. Davacıya kasko sigortalı aracın davalıya
ait oto yıkamada bulunduğu sırada çalındığı hususları taraflar arasında ihtilaf
konusu değildir. Taraflar arasında ihtilaflı olan husus, sigortalı aracın çalınması
olayında davalının kusurunun bulunup bulunmadığı noktasındadır. Bu durumda mahkemece,
davaya konu rücuen tazminat isteminin dayanağı olan, davacının sigortaladığı aracın
çalınması olayı ile ilgili olarak İstanbul 14. ACM 2015/201 E. 2017/46 karar sayılı
kararı ile"Mülkiyeti ... AŞ isimli tüzel kişiliğe ait olup, ... AŞ adlı başka
bir şirkete kiralanan ve suç tarihi olan 09/12/2013 günü şirket çalışanı özel
şoför ...''ın kullanımında olan ... plaka sayılı 2012 model ... marka kiralık
otomobilin ... adlı şoför tarafından olay tarihinde gün içerisinde Eyüp / Göktürk
Polis Merkezi Amirliği mıntıkasında yer alan, mağdur tanık ve diğer tanıkların
çalışanı olduğu Selanik Bulvarı üzerindeki ... adlı işyerine yıkatmak için bırakıldığı,
evvelinde de birlikte çok sayıda otomobil hırsızlığı gerçekleştiren ve deyim yerindeyse
bizatihi ...''in beyanına nazaran profesyonel oto hırsızları olan sanıklar ...
ve ...''ın yanlarında ... isimli açık kimlik bilgileri tam olarak tespit edilemeyen
3.bir şahıs olduğu halde ... marka başka bir araç ile araç yıkatma bahanesi ile
oto yıkamacıya geldikleri, ... isimli kimliği meçhul failin araç içerisinden inmediği,
yıkamacı çalışanları ... ve ...''ın başka işlerle ilgilenmesi sırasında bu boşluktan
faydalanan sanık ...''in anahtarlık yerinde asılı halde bulunan suça konu aracın
kontak anahtarını fark ettirmeden bulunduğu yerden aldığı, diğer sanık ...''ın
ise direksiyon tarafına geçtiği, aracın kilitli kapılarını açıp çalıştırıp hareket
ettirerek birlikte hızla olay yerinden ayrıldıkları, daha sonra çaldıkları aracı
12.500 - 15.000-TL bir bedel ile ... isimli çalıntı araç parçaları satın alan
bir şahsa sattıkları, olayın oluş ve meydana geliş biçiminin bu şekilde cereyan
ettiği vicdani sonuç ve kanısına varılmakla..." gerekçesi ile dava dışı üçüncü
kişiler ... ve ... hakkında hırsızlık suçundan cezalandırılmalarına karar verildiği
kararın kesinleştiği görülmüştür. Yargıtay’ın yerleşik uygulamasına ve öğretideki
genel kabule göre, maddi olgunun tespitine ilişkin ceza mahkemesi kararı hukuk
hakimini bağlar. Ceza mahkemesinde bir maddi olayın varlığı ya da yokluğu konusundaki
kesinleşmiş kabule rağmen, aynı konunun hukuk mahkemesinde yeniden tartışılması
olanaklı değildir (HGK''nun 11.10.1989 gün ve E:1989/11-373, K:472, HGK''nun
27.04.2011 gün ve E:2011/17-50, K:2011/231 sayılı ilamları). Türk Borçlar Kanunu''nun
74. maddesi gereğince, hukuk hakimi ceza hakiminin tespit ettiği kusurla bağlı
değil ise de Ceza Mahkemesince tespit edilen fiilin hukuka aykırılığı ve illiyet
bağını saptayan maddi vakalar yönünden Ceza Mahkemesi kararı ile bağlıdır. Bu
kapsamda ceza mahkemesince maddi vaka değerlendirilirken olayın oluşunun belirtildiği,
bu kararın kesinleşmiş olması durumunda bu maddi olgu artık hukuk mahkemesi için
de bağlayıcı niteliktedir. Bu hususa değinen istinaf talebi yerinde değildir.
Ancak ceza dosyası kapsamında davaya konu olay kapsamında davalının kusuru bulunup
bulunmadığı yönünden bir değerlendirme yapılmadığı görülmüştür. Bu nedenle mahkemece İstanbul
14. ACM 2015/201 E. 2017/46 karar sayılı dosya aslının celbi sağlanarak, olay
yeri kayıtların, iş yerinin çalışma şekli, müşteri araçlarının anahtarlarının
tutulduğu yer ve bu yerin nasıl korunduğu, anahtarların nasıl muhafaza edildiği
tespit edilerek davalı oto yıkama işletmecisinin kusuru tespit edilmeden ve TBK''nın
579/2 maddesinde belirtilen şartlar değerlendirilmeden karar verilmesi eksik incelemeye
dayalı olmuştur. Trafik kazaları, nitelikleri itibariyle haksız fiillerdendir.
Haksız fiillerde temerrüt tarihi, haksız fiilin meydana geldiği tarih olup, zarar
sorumlusunun ayrıca ihbar ve ihtar edilmesine gerek yoktur. Sigorta ettirenin
dava hakkı tazmin ettiği bedel nispetinde sigortacıya intikal eder. Ödeme tarihi
aynı zamanda 3. şahsa rücu edebilme tarihidir. Bu nedenle işleten ve sürücünün
faizden sorumluluğunun başlangıcının halefiyet başlangıcı olan ödeme tarihi olarak
kabulü gerekir. Bu hale göre sigorta şirketinin sigortalısına ödeme tarihinden
takip tarihine kadar işlemiş faizin hesaplanarak hüküm altına alınması gerekirken
yazılı şekilde karar verilmiş olması isabetli olmamıştır (Yargıtay 17. Hukuk Dairesinin
2013/21198 E. ve 2014/1568 K.sayılı kararı). Açıklanan nedenlerle, davacı vekili
ile davalı vekilinin istinaf başvurusunun kabulü ile HMK''nın 353/1-a/6. maddesi
uyarınca İlk Derece Mahkemesi kararının kaldırılmasına, dosyanın yukarıda belirtilen
şekilde işlem yapılmak üzere mahkemesine gönderilmesine karar verilmiştir.'
- source_sentence: 'Davacı vekili dava dilekçesinde özetle; davalı- borçlu ile müvekkili
arasında, davalı- borçlu tarafından işletilen "..." isimli işletmesinde müvekkil
şirkete ait mamullerin satışı ile ilgili olarak 28/01/2019 tarihli Satış Noktası
Sözleşmesinin imzalandığını, müvekkili olduğunu şirketin sözleşmede kararlaştırılan
bütün edimlerini eksiksiz olarak yerine getirdiğini, kendisinden talep edilen
ürün teslimlerini zamanında yaptığını, ürünlerin müşterilerine sağlıklı bir şekilde
sunulabilmesi için soğutucuların teslim edildiğini, sözleşmede kararlaştırılan
iskontoların uyguladığını, yine sözleşmenin Ek Özel Şartının 5. Maddesi gereğince
yükümlendiği nakit yardımı- kdv dahil 23.600,00-TL ''yi davalıya verdiğini, fakat
davalı- borçlunun şirket sözleşmesinde kararlaştırılan yükümlülüklerini yerine
getirmediğini cari hesap borcunu vadesinde ödemediğini, sözleşmenin özen borcundan
belirtilen aylık olarak en az 84 kasa koli ürün kotasını doldurmadığını, sözleşmede
kararlaştırılan 2000 kasa koli ürün kotasını doldurmadan ürün alımını kestiğini,
davalı- borçlunun sözleşmeye aykırı davranışı nedeniyle nakit yardımının iadesi
ve cari hesap borcu için İzmir ... İcra müdürlüğünün .../... esas sayılı dosyasıyla
ilamsız icra takibini yaptığını davalı- borçlunun söz konusu takibe itiraz etmesi
üzerine takibin durdurulmasına karar verildiğini, yukarıda açıklanan nedenler
ile davalı- borçlunun haksız ve kötüniyetli olarak takibi sürüncemede bırakmak
kastıyla borca ve tüm ferilerine itiraz ettiğini ve takibin durdurulmasına neden
olduğunu, bu nedenle davalı- borçlular aleyhine %20''den az olmamak üzere icra
inkar tazminatına hükmedilmesini, yargılama giderleri ile vekalet ücretini davalı
tarafa yükletilmesini talep etmiştir. '
sentences:
- 'Pay sahiplerinin çağrı
veya gündeme madde konulmasına ilişkin istemleri yönetim kurulu tarafından reddedildiği
veya isteme yedi iş günü içinde olumlu cevap verilmediği takdirde, aynı pay sahiplerinin
başvurusu üzerine, genel kurulun toplantıya çağrılmasına şirket merkezinin bulunduğu
yerdeki asliye ticaret mahkemesi karar verebilir. Mahkeme toplantıya gerek görürse,
gündemi düzenlemek ve Kanun hükümleri uyarınca çağrıyı yapmak üzere bir kayyım
atar.
Kararında, kayyımın, görevlerini ve toplantı için gerekli belgeleri hazırlamaya
ilişkin yetkilerini gösterir. Zorunluluk olmadıkça mahkeme dosya üzerinde inceleme
yaparak karar verir. Karar kesindir.'
- ' Alıcı, devraldığı satılanın durumunu işlerin olağan
akışına göre imkân bulunur bulunmaz gözden geçirmek ve satılanda satıcının
sorumluluğunu gerektiren bir ayıp görürse, bunu uygun bir süre içinde ona
bildirmek zorundadır.
Alıcı gözden geçirmeyi ve bildirimde
bulunmayı ihmal ederse, satılanı kabul etmiş sayılır. Ancak, satılanda olağan
bir gözden geçirmeyle ortaya çıkarılamayacak bir ayıp bulunması hâlinde, bu
hüküm uygulanmaz. Bu tür bir ayıbın bulunduğu sonradan anlaşılırsa, hemen
satıcıya bildirilmelidir; bildirilmezse satılan bu ayıpla birlikte kabul edilmiş
sayılır.'
- Davacı vekili dava dilekçesinde özetle; Davacı vekilinin 15.01.2021 harç ikmal
tarihli dava dilekçesinde özetle; müvekkil aleyhine ... İcra Müdürlüğünün
11.01.2021 tarih ... E Sayılı dosyası üzerinden başlatılan haksız takibe konu çeke ilişkin müvekkilin
borçlu olmadığının tespitine, müvekkil aleyhine başlatılan haksız icra takibinin
müvekkil şirketin yetkili hamil olması ve yetkisiz olan davalıya diğer borçlular bakımından ödeme yapılması
durumunda müvekkil alacağını tahsil imkanı tehlikeye gireceğinden (... Kon.
Tekstil Ltd Şti hariç) tüm borçlular bakımından durdurulması yönünden ihtiyati
tedbir kararı verilmesi , takibe konu çekin müvekkil şirkete iade edilmesi talebinde
bulunma gereği hasıl olduğu, müvekkil şirketin faaliyet gösterdiği ... İş ...
Sn Tic. J Blok No 12-13 .../İstanbul adresinde henüz kimliği bilinmeyen kişiler tarafından
Hırsızlık hadisesi meydana geldiği, hırsızlık olayıyla hamili lehtarı müvekkil
şirket olan çekler çalındığı, ... Polis Merkezi Amirlğine şüpheliler şikayet edildiği,
olaya ilişkin ... C. Başsavcılığının ... Soruşturma dosyası üzerinden devam
edildiği, ayrıca ... 1 ATM ... E Sayılı dosyasından Çek zayi nedeniyle çek iptali davası
açıldığı, davaya konu çeklere ilişkin toplam 52.515.76 TL teminat yatırıldığı,
dosyaya konu çeklere ilişkin ödemeden men yasağı kararı verildiği, karar ilgili
bankalara müzekkere ile bildirildiği, ... tarafından düzenlenen ... Bankası /... İstanbul Şb.
... Iban nolu hesaba ait 31.12.2020 keşide tarihli ... nolu 5.000 TL bedelli
çek de nu davaya konu çeklerden biri olduğu, çeke ilişkin ödeme yasağı konulduğu,
İcra takibine dayanak olan çek üzerinde de belirtildiği, konulan kayıtta “çekin karşılığı yoktur
TC ... 1 ATM 11.09.2020 tarih ... E Sayılı yasağı gereğince çek hakkında her
hangi bir işlem yapılmayarak iade edilmiştir” yazılı olduğu, müvekkilin hamili/lehtarı
olduğu çekler ticari ilişkisi olduğu diğer firmalara verilmek üzere cirolu ve
imzalı bir şekilde kasasında muhafaza edilmekte iken kimliği belirsiz kişilerce çalındığı, dolayısıyla
çek üzerindeki yer alan imza müvekkile ait olduğundan icra Hukuk Mahkemesine başvurulmadığı, Zira
İcra Hukuk Mahkemesi dar yetkili olup sadece şekli inceleme yapma yetkisi
mevcut olduğundan davalı aleyhine huzurdaki dava ikame edildiği, hırsızlık suçuna
ilişkin çeklerden bazıları bankalar ile Faktoring kuruluşlarına ibraz edildiğinde
bankalar ve faktöring kuruluşlarınca bilgi verildiği, çek iptaline konu çeklerin henüz davalıya
geçmediği bir zaman diliminde ciro zincirinde davalının üstünde yer alan
... Kon Tekstil Ltd Şti’ce bankalara ve faktöring firmalarına ibraz edilmeye çalışıldığı,
bunun öğrenilmesi ile ... C. Başsavcılığına ... Sayılı dosyası talepte bulunulduğu,
31.11.2020 tarihinde Savcılık şirketin eski ortağı ... dinlenilmesi için müzekkere
yazıldığı, ancak bu kişi henüz dinelemediği, müvekkil ile ... Kon Ltd Şti arasında
her hangi bir ticari ilişki bulunmadığı, müvekkil davaya konu çeki ... Kon Ltd
Şti’ne ciro edip vermediği, bu nedenle müvekkilden sonra sonra çek üzerindeki ciro
silsilesi bozulduğu, davalı şirkette çek bakımından yetkili hamil sıfatına haiz
olmadığı, müvekkil aleyhine başlatılan haksız icra takibi öncesinde çek iptali davasına
teminat yatırılmış olması nedeniyle teminatsız olarak ve halihazırda icra takibine
konu çekin iptaline ilişkin davanın derdest oluşu ile diğer borçlularca borcun
ödenmesi ihtimaline de müvekkilin alacağını tahsil imkanının tehlike altına
girmesi ihtimaline binaen ... Kon Ltd Şti hariç tüm borçlular bakımından
durdurulması gereği hasıl olduğu, ayrıca davalı tarafından başlatılan icra takibinde borçlu
olan şirketlere yönelik ihtiyati haciz kararı talep edilmiş ve henüz şirketler
aleyhine ihtiyati haciz kararı verilmemişse de müvekkil şirketin haksız ve mesnetsiz şekilde haciz tehdidi
altında olduğu, Davalı tarafça ... ATM ... D.İş sayılı dosyasına henüz
teminat yatırılmamış olup söz konusu teminatın yatırılması halinde davalıya iade
edilmesine muvafakat edilmediği,TTK.792 m. Gereğince çeki kötü niyetli elde bulunduranın çek, geri vermekle
yükümlü olduğu, arz ve izah edilen nedenlerle; müvekkilin çalıntı çeke dayalı
yetkisiz hamil tarafından haksız yere başlatılan İcra takibi nedeniyle zarara
uğramasını önlemek amacıyla ... 1. ATM ... E Sayılı dosyasına teminat yatırılmış
olunması sebebiyle ... İcra Md ... E Sayılı dosyasından başlatılan takibin yargılama
sonuna kadar teminatsız olarak takibin tedbiren durdurulmasına, aksi kanaate
olunur ise; Uygun teminat karşılığında takibin tedbiren durdurulmasına, müvekkilin
çekten kaynaklanan alacağının tahsil imkanının tehlike altına girmesi ihtimali
kuvvetle muhtemel olması nedeniyle durdurma kararının ... Kon Ltd Şti hariç tüm
borçlular adına verilmesini, TTK.792 gereğince müvekkilin yetkili hamil olduğu çekin iadesine, yargılama
giderleri, vekalet ücretinin davalıya yüklenmesine, davalı aleyhine %20 tazminata
hükmedilmesine karar verilmesi talep ve dava etmiştir.
- source_sentence: answers-forums
sentences:
- '1017'
- main-forums
- '1.80'
- Pek çok çocuk, ödülle motive olmak yerine, kontrol altında olmaktan motive olur.
- Bir olasılık, ev işleri için ödül (ler) i belirleme amacını taşıyan bir aile toplantısı
yapmaktır.
- '2015'
model-index:
- name: SentenceTransformer based on Supabase/gte-small
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.42703730702392106
name: Pearson Cosine
- type: spearman_cosine
value: 0.434696021205193
name: Spearman Cosine
---
# SentenceTransformer based on Supabase/gte-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Supabase/gte-small](https://huggingface.co/Supabase/gte-small) on the [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1), [stsb](https://huggingface.co/datasets/emrecan/all-nli-tr) and [x1saint](https://huggingface.co/datasets/x1saint/sts) datasets. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Supabase/gte-small](https://huggingface.co/Supabase/gte-small) <!-- at revision 93b36ff09519291b77d6000d2e86bd8565378086 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1)
- [stsb](https://huggingface.co/datasets/emrecan/all-nli-tr)
- [x1saint](https://huggingface.co/datasets/x1saint/sts)
- **Language:** tr
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("x1saint/gte-small-tr")
# Run inference
sentences = [
'answers-forums',
'2015',
'1017',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.427 |
| **spearman_cosine** | **0.4347** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### all-nli-pair-class
* Dataset: [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1) at [67baa14](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1/tree/67baa141cf4f6634c983d77eea193c5535611e5a)
* Size: 474,283 training samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 19 tokens</li><li>mean: 419.29 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 40 tokens</li><li>mean: 401.34 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>0: ~40.80%</li><li>1: ~42.60%</li><li>2: ~16.60%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Davacı tarafından davalı aleyhine açılan İtirazın İptali davasının mahkememizde yapılan açık yargılaması sonunda dosya incelendi. AÇILAN DAVA VE İDDİA :Davacı vekilinin dava dilekçesinde özetle; Müvekkilinin EPDK'dan (Enerji Piyasası DenetlemeKurumu) aldığı onay ile Eylül 2012 den bu yana tüm Türkiye'de elektrik enerjisi tedariki ve toptan satış hizmeti sunduğunu, davalıdan da davacı şirket ile akdettiği sözleşmeye binaen müvekkili şirkketten satın aldığı elektrik ödemelerini aksattıığı düzenlenen faturaları ödemedğinden temerrüde düştüğünü, davacı tarafından defalarca uyarılmasına rağmen de borcunu ödemedeğini bunün üzerine müvekkili İstanbul ... İcra müdürlüğünün ... Esas sayılı dosyasıda ilamsız icra takibi başlattığını davalının borca kötü niyetli olarak itiraz ettiğini ve takibin durduğunu itirazın iptali ile takibin devamına davalı hakkında haksız ve kötü niyetli irizları nedeniyle %20 den aşağı olmamak üzere icra inkar tazminatına hükmedilmesine ve yargılama gideri ile vekale...</code> | <code>Davacı vekili dava dilekçesinde özetle;Müvekkili ...'a karşı halihazırda 17/07/2018'de açılmış .... İcra Dairesi'nde ... Esas Sayılı dosya ile devam eden bir icra dosyası bulunduğunu, bu icra dosyası kapsamında 12/11/2018'den beri müvekkilinin maaşına haciz uygulandığını, dosya ödeme emrinde dosyanın dayanağı, "(Kredi kartı borcu) .... İcra-... Esas dosyalarından kaynaklanan alacağın takipte ve tahsilde tekerrür olmamak üzere tahsili talebidir." şeklinde yazıldığını, müvekkili ...'ın, 2003 yılında kimliğinin çalınarak bazı bankacılık ve telefon işlemlerinde kullanıldığını, adına kredi çekildiğini, kredi kartı çıkarıldığını, telefon hattı açıldığını ve o dönemde bu konuda şikayette bulunduğunu, ... Cumhuriyet Başsavcılığı'nca 28/01/2004 suç tarihli ... soruşturma numaralı dosyasına ulaşıldığını, bu dosyada, müvekkilinin şüpheli olarak görünmekte iken şikayetçi ...A.Ş.' olduğunu, yapılan soruşturma sonucunda gerçek şüpheli şahısların ortaya çıkarılamadığı, fakat müvekkilinin suçlu olmad...</code> | <code>0</code> |
| <code>Davacı vekili dava dilekçesinde özetle; müvekkili şirket tarafından,----işbu sözleşmeye istinaden düzenlenen ---- ait alüminyum levha emtiasının, davalı taşıyıcı şirket tarafından, ---- tarihinde, dava dışı sigortalı firmanın ------ fabrikasından yüklenildiğini, davalı taşıyıcı firmanın sorumluluğunda, --- nakli gerçekleşen toplam ---; net ağırlığı --- uygun ambalajlar ile nakledilen emtiaların, gümrük işlemleri sonrası--- alıcı şirket tarafından --- tarihinde teslim alındığı ancak teslim esnasında ------paket no’lu levhaların ıslanması sebebi ile emtianın hasara uğramış olduğu tespit edilerek taşıma senedine ihtirazi kayıt düşüldüğü ve bu levhaların hurda edilmek üzere ayrıldığını, davalı taşıyıcı şirketin sorumluluk sahasında gerçekleşen işbu hasar sonrası, bağımsız ve uzman eksper tarafından yapılan incelemelere istinaden tanzim edilmiş olan ekspertiz raporunda; hasar nedeninin, emtianın taşıyıcının sorumluluğunda bulunduğu esnada ıslanarak hasara uğramış olmasından, ıslanan paketi...</code> | <code>Davacı vekili dava dilekçesinde özetle; Müvekkili------- ------------- tarihinde davalının------ aracın çarpması nedeniyle hasara uğradığını, meydana gelen kazada davalının %100 kusurlu olduğunu, müvekkili şirket tarafından zarar gören araç için ------ hasar tazminatı ödendiğini, yapılan incelemeler neticesinde davalının sigortacısı olduğu aracın kusurlu olduğunun tespit edildiğini, kaza neticesinde ------ aracın ---- geldiğini, buna göre aracın piyasa değerinin tespit edildiğini ve tespit edilen değerin ------------ tarafından, kalan ------ ise -----tarafından ödendiğini, ayrıca, -----aracın hasarı sırasında ------ kırılması,---- durdurulamaması nedeniyle ------- hasarın tespitinin de ayrıca gerekli hale geldiğini, bu nedenle müvekkili --------- hasarının tespiti için---------------nedeniyle-------- daha ödendiğini, davalının, kusurlu --------------- nedeniyle davalı tarafa başvurulduğunu, davalı tarafın --------- hiçbir gerekçesi olmaksızın ödemediğini, müvekkili şirket tarafından 1....</code> | <code>1</code> |
| <code>Davacı vekili dava dilekçesinde özetle, müvekkili şirketin keşidecisi olduğu ----------------- Taşdelen Şubesine ait, ---- seri numaralı, 17.02.2019 vade tarihli, 50.000,00-TL bedelli çeki lehtara vermek üzere hazırlandığını ancak müvekkili şirket yetkilisinin cüzdanını kaybetmesi suretiyle çeklerin zayi olduğunu, söz konusu çeklerin kötü niyetli üçüncü kişilerin eline geçmesi halinde müvekkilinin mağdur olacağını, bu nedenle ödemeden men talimatı verilmesini ve zayi edilen çekin iptaline dair karar verilmesini talep ve dava etmiştir.</code> | <code>Davacı vekili dava dilekçesinde özetle; ... plakalı araç ... sayılı Genişletilmiş Kasko Sigortası Poliçesi ile müvekkili şirkete, sigortalı olduğunu, hadisenin, 14/06/2017 tarihinde ... plakalı aracın ... ... ... yolu üzerinde seyir halinde iken önünde seyir halinde bulunan sigortalı ... plakalı aracın trafik nedeniyle duraksaması nedeniyle duramayarak çarpması akabinde sigortalı ... plakalı aracın önünde seyir halinde bulunan ... plakalı araca, onun da önünde seyir halinde bulunan ... plakalı araca arkadan çarpması ve bu araçların sırasıyla ... aracın arkaya ... plakalı araca onun da duramayarak ... plakalı araca arkadan çarpması neticesinde çoklu maddi hasarlı trafik kazası meydana gelmiştir, Davalı/Borçlu ... sigortalısı olan ... plakalı aracın, müvekkil şirket sigortalısı olan ... Plakalı araca çarpması neticesinde maddi hasar aldığını, sigortalının, yapmış olduğu başvuru neticesinde Hasar gören sigortalı araca yaptırılan ekspertiz incelemesi sonucunda aracın hasarlı olduğunun tesp...</code> | <code>0</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/emrecan/all-nli-tr) at [daeabfb](https://huggingface.co/datasets/emrecan/all-nli-tr/tree/daeabfbc01f82757ab998bd23ce0ddfceaa5e24d)
* Size: 941,086 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 47.0 tokens</li><li>max: 301 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 25.29 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.48</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|:-----------------|
| <code>Kavramsal olarak krem kaymağının iki temel boyutu vardır - ürün ve coğrafya.</code> | <code>Ürün ve coğrafya krem kaymağını işe yarıyor.</code> | <code>0.5</code> |
| <code>Mevsim boyunca ve sanırım senin seviyendeyken onları bir sonraki seviyeye düşürürsün. Eğer ebeveyn takımını çağırmaya karar verirlerse Braves üçlü A'dan birini çağırmaya karar verirlerse çifte bir adam onun yerine geçmeye gider ve bekar bir adam gelir.</code> | <code>Eğer insanlar hatırlarsa, bir sonraki seviyeye düşersin.</code> | <code>1.0</code> |
| <code>Numaramızdan biri talimatlarınızı birazdan yerine getirecektir.</code> | <code>Ekibimin bir üyesi emirlerinizi büyük bir hassasiyetle yerine getirecektir.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
#### x1saint
* Dataset: [x1saint](https://huggingface.co/datasets/x1saint/sts) at [85ac563](https://huggingface.co/datasets/x1saint/sts/tree/85ac563a90a8b801479ac1bc689b743574bb0e90)
* Size: 1,523 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 42.14 tokens</li><li>max: 353 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 40.23 tokens</li><li>max: 172 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.69</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-----------------|
| <code>George Orwell, 1903 yılında Hindistan'ın Bengal bölgesinde doğdu.</code> | <code>George Orwell, Montihari şehrinde doğmuştur.</code> | <code>0.8</code> |
| <code>Orwell, Eton College'de eğitimini tamamladı.</code> | <code>Orwell öğrenimini Eton College'de bitirdi.</code> | <code>1.0</code> |
| <code>George Orwell, İngiltere yönetimine karşı çıkarak Hindistan Polisi görevinden istifa etti.</code> | <code>Orwell, İmparatorluk yönetiminin iç yüzünü görünce istifayı tercih etti.</code> | <code>0.8</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Evaluation Datasets
#### all-nli-pair-class
* Dataset: [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1) at [67baa14](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1/tree/67baa141cf4f6634c983d77eea193c5535611e5a)
* Size: 5,000 evaluation samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 74 tokens</li><li>mean: 420.94 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 406.85 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>0: ~44.30%</li><li>1: ~39.00%</li><li>2: ~16.70%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Davacı vekili dava dilekçesinde özetle; Davacı şirketin taşıyan sıfatıyla davalı şirkete ait yükü kendisi ile yapılan taşıma sözleşmesi uyarınca ... Limanından ... tarihinde yükleyerek .../ ... Limanı’na taşıdığını ve yükü ihtiva eden 3 adet konteyneri liman sahasına kapalı ve mühürlü olarak ... tarihinde gemiden tahliye ettiğini, ... numaralı konişmentoda belirtildiği üzere, söz konusu deniz taşıma işinde davacı şirkete ait ‘...’ numaralı 3 adet konteynerin kullanıldığını, taşıma konusu yüklere ilişkin varış ihbarlarının düzenlendiğini ve yüklerin tahliye edildiğini, bugüne dek söz konusu yüklerin teslim alınmadığını, yüklerin konişmentolarda öngörülen süre içerisinde gönderilen tarafından teslim alınmaması nedeniyle, davacı şirket tarafından yapılan bütün iyiniyetli girişimlerin sonuçsuz kaldığını, aradan geçen yaklaşık 11 aylık süre zarfında yükün teslim alınmadığını, konteynerlerin tahliye edilmediğini, konteynerlerin tahliye edilmemesi üzerine davacı taşıyan şirket çalışanı tarafı...</code> | <code>Davacı vekili dava dilekçesinde özetle; Davalı tarafın taşıyan müvekkili ... A/Ş vasıtası ile ... numaralı konişmento tahtında ... numaralı 1 adet 40'lık REEFER tip konteyner muhteviyatı yükünü Hindistan'ın Cochin Limanından Gemlik Limanı' na denizyolu ile taşıttığını, bu taşımalarda davalı yanın ithalatçı ve taşımaya ilişkin konişmentoya göre yük alıcısı konumunda olduğunu, davalının ithalatçısı ve yük alıcısı olduğu ... numaralı konişmento tahtında taşınan 1 adet 40 'lık reefer konteynerin yükleme limanı olan Hindistan' in Cochin Limanı' nda 11.07.2017 tarihinde gemiye yüklendiğini ve 28.08.2017 tarihinde Gemlik ... Limanı' nda gemiden tahliye edildiğini, davalının ... numaralı konişmento tahtında taşman emtiaları tahliye limanı olan Gemlik Limanı' na ulaşmadan önce davalıya bir örneği delil listelerinde sunulan "..." yani "Varış İhbarnamesi" gönderildiği ve davalının yükünün 28.08.2017 tarihinde Gemlik Limanı' na ulaşacağının ihbar edildiğini, tahliye limanındaki konteyner muhtevi...</code> | <code>1</code> |
| <code> Davacı vekili dava dilekçesinde özetle; Davacı ... A.Ş.'nin 1986 yılından beri Irak piyasasında iş yapan ve gerek iş ahlakı ve gerekse dürüstlüğüyle tanınan ve dolayısıyla Irak'ta yapılacak yeni bir iş olduğunda, ilk haberdar edilen bir firma olduğunu, 1989 yılında da İrak'a daimi ofisini açtığını, 2001 yılında ilgili bakanlığın davacı şirketten Saf Bakır Şerit talebinde bulunduğunu, davacının da bunu temin etmek için davalı şirketle ilişki kurduğunu, davalı şirketin Irak'ın talep ettiği spesifikasyonda mal üretecek araca sahip bulunmadığını beyan etmesi üzerine, davacı şirketin bu konuda da yardımcı olduğunu ve üretimi gerçekleştirecek makinelerin davalı tarafından teminine hem teknolojik bilgi ve hem de maddi katkıda bulunduğunu, böylelikle ilk olarak 2002 yılında, davalının ürettiği malların davacı şirket tarafından Irak'a pazarlandığını, bu arada Amerika Irak'ı istila edince, ilişkilerin bir süre askıda kaldığını ve nihayet 2006 yılında Irak Sanayi Bakanlığı'nın davacı şirketi yen...</code> | <code>Haksız rekabete ilişkin<br>bu Kısım hükümlerinin amacı, bütün katılanların menfaatine, dürüst ve bozulmamış<br>rekabetin sağlanmasıdır.Rakipler arasında veya tedarik edenlerle müşteriler<br>arasındaki ilişkileri etkileyen aldatıcı veya dürüstlük kuralına diğer şekillerdeki<br>aykırı davranışlar ile ticari uygulamalar haksız ve hukuka aykırıdır.</code> | <code>2</code> |
| <code> Davacı vekili dava dilekçesinde özetle; Müvekkili şirketin perakende sektöründe ağırlıklı olarak elektronik cihazların satışı işiyle iştigal ettiğini ve tüketiciler tarafından çeşitli şikayetlerle kendisine teslim edilen ürünleri, teknik servis olarak faaliyet gösteren belirli şirketlere onarım için yönlendirdiğini, bu lojistik faaliyetlerin zaman zaman, kargo şirketi olarak faaliyet gösteren davalı taraf ile gerçekleştirildiğini, ... A.Ş.'nin, müvekkili şirketin ticari ilişkileri kapsamında belirli ürünlerini teslim ettiği bir yetkili teknik servis olarak faaliyet gösterdiğini ve belirli cihazları onarım için teslim aldıktan sonra yine müvekkili şirkete teslim ettiğini, bu operasyonların dış lojistik tarafının da ...'nin anlaşmalı olduğu kargo şirketi olan davalı taraf ile gerçekleştirildiğini, bu ticari ilişki sebebi ile yedi adet cep telefonun da onarım için ...’ne gönderildiğini ve ...’nde işleme tabi tutulan 7 adet telefonların gönderici sıfatı ile ... tarafından müvekkili şirket...</code> | <code>Zarara, kasten veya<br>pervasızca bir davranışla ve böyle bir zararın meydana gelmesi ihtimalinin bilinciyle<br>işlenmiş bir fiilinin veya ihmalinin sebebiyet verdiği ispat edilen taşıyıcı veya<br>879 uncu maddede belirtilen kişiler, bu Kısımda öngörülen sorumluluktan kurtulma<br>hâllerinden ve sorumluluk sınırlamalarından yararlanamaz.</code> | <code>2</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/figenfikri/stsb_tr) at [bb7685b](https://huggingface.co/datasets/figenfikri/stsb_tr/tree/bb7685bff798ac1ed07d8cd08e5df43eaaeba2ee)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 45.29 tokens</li><li>max: 304 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 24.86 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Yeni haklar yeterince güzel.</code> | <code>Herkes gerçekten en yeni faydaları seviyor</code> | <code>0.5</code> |
| <code>Bu site, tüm ödül kazananların bir listesini ve Hükümet Yönetici makalelerinin aranabilir bir veritabanını içerir.</code> | <code>Web sitesinde yer alan Hükümet Yürütme makaleleri aranamaz.</code> | <code>0.0</code> |
| <code>Bilemiyorum. Onunla ilgili karışık duygularım var. Bazen ondan hoşlanıyorum ama aynı zamanda birisinin onu dövmesini görmeyi seviyorum.</code> | <code>Çoğunlukla ondan hoşlanıyorum, ama yine de birinin onu dövdüğünü görmekten zevk alıyorum.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
#### x1saint
* Dataset: [x1saint](https://huggingface.co/datasets/figenfikri/stsb_tr) at [bb7685b](https://huggingface.co/datasets/figenfikri/stsb_tr/tree/bb7685bff798ac1ed07d8cd08e5df43eaaeba2ee)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 45.29 tokens</li><li>max: 304 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 24.86 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Yeni haklar yeterince güzel.</code> | <code>Herkes gerçekten en yeni faydaları seviyor</code> | <code>0.5</code> |
| <code>Bu site, tüm ödül kazananların bir listesini ve Hükümet Yönetici makalelerinin aranabilir bir veritabanını içerir.</code> | <code>Web sitesinde yer alan Hükümet Yürütme makaleleri aranamaz.</code> | <code>0.0</code> |
| <code>Bilemiyorum. Onunla ilgili karışık duygularım var. Bazen ondan hoşlanıyorum ama aynı zamanda birisinin onu dövmesini görmeyi seviyorum.</code> | <code>Çoğunlukla ondan hoşlanıyorum, ama yine de birinin onu dövdüğünü görmekten zevk alıyorum.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 1e-06
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | all-nli-pair-class loss | stsb loss | x1saint loss | sts-dev_spearman_cosine |
|:------:|:-----:|:-------------:|:-----------------------:|:---------:|:------------:|:-----------------------:|
| 0.0011 | 100 | 3.5189 | - | - | - | - |
| 0.0023 | 200 | 3.0711 | - | - | - | - |
| 0.0011 | 100 | 3.5187 | - | - | - | - |
| 0.0023 | 200 | 3.0709 | - | - | - | - |
| 0.0034 | 300 | 3.2458 | - | - | - | - |
| 0.0045 | 400 | 3.1891 | - | - | - | - |
| 0.0056 | 500 | 3.3556 | - | - | - | - |
| 0.0068 | 600 | 3.4514 | - | - | - | - |
| 0.0079 | 700 | 3.2443 | - | - | - | - |
| 0.0090 | 800 | 3.2109 | - | - | - | - |
| 0.0102 | 900 | 3.4956 | - | - | - | - |
| 0.0113 | 1000 | 3.4255 | 1.0730 | 4.5456 | 4.5456 | 0.2466 |
| 0.0124 | 1100 | 3.1637 | - | - | - | - |
| 0.0136 | 1200 | 3.2261 | - | - | - | - |
| 0.0147 | 1300 | 3.3524 | - | - | - | - |
| 0.0158 | 1400 | 3.4991 | - | - | - | - |
| 0.0169 | 1500 | 3.5157 | - | - | - | - |
| 0.0181 | 1600 | 3.5079 | - | - | - | - |
| 0.0192 | 1700 | 3.2644 | - | - | - | - |
| 0.0203 | 1800 | 3.2737 | - | - | - | - |
| 0.0215 | 1900 | 3.5461 | - | - | - | - |
| 0.0226 | 2000 | 3.6754 | 1.0257 | 4.5012 | 4.5012 | 0.2563 |
| 0.0237 | 2100 | 3.414 | - | - | - | - |
| 0.0248 | 2200 | 3.0237 | - | - | - | - |
| 0.0260 | 2300 | 3.383 | - | - | - | - |
| 0.0271 | 2400 | 3.2955 | - | - | - | - |
| 0.0282 | 2500 | 3.0388 | - | - | - | - |
| 0.0294 | 2600 | 3.2 | - | - | - | - |
| 0.0305 | 2700 | 3.3309 | - | - | - | - |
| 0.0316 | 2800 | 3.0292 | - | - | - | - |
| 0.0327 | 2900 | 2.9697 | - | - | - | - |
| 0.0339 | 3000 | 2.8957 | 0.9897 | 4.4610 | 4.4610 | 0.2651 |
| 0.0350 | 3100 | 3.3987 | - | - | - | - |
| 0.0361 | 3200 | 3.0995 | - | - | - | - |
| 0.0373 | 3300 | 3.1995 | - | - | - | - |
| 0.0384 | 3400 | 3.4175 | - | - | - | - |
| 0.0395 | 3500 | 3.1195 | - | - | - | - |
| 0.0407 | 3600 | 3.1149 | - | - | - | - |
| 0.0418 | 3700 | 3.2614 | - | - | - | - |
| 0.0429 | 3800 | 3.3849 | - | - | - | - |
| 0.0440 | 3900 | 3.3391 | - | - | - | - |
| 0.0452 | 4000 | 3.1803 | 0.9553 | 4.4195 | 4.4195 | 0.2719 |
| 0.0463 | 4100 | 3.0133 | - | - | - | - |
| 0.0474 | 4200 | 3.3885 | - | - | - | - |
| 0.0486 | 4300 | 3.132 | - | - | - | - |
| 0.0497 | 4400 | 3.2 | - | - | - | - |
| 0.0508 | 4500 | 3.3284 | - | - | - | - |
| 0.0519 | 4600 | 3.1747 | - | - | - | - |
| 0.0531 | 4700 | 3.1531 | - | - | - | - |
| 0.0542 | 4800 | 3.3195 | - | - | - | - |
| 0.0553 | 4900 | 3.0077 | - | - | - | - |
| 0.0565 | 5000 | 2.7127 | 0.8501 | 4.3839 | 4.3839 | 0.2808 |
| 0.0576 | 5100 | 3.2574 | - | - | - | - |
| 0.0587 | 5200 | 3.3916 | - | - | - | - |
| 0.0598 | 5300 | 3.0803 | - | - | - | - |
| 0.0610 | 5400 | 3.3637 | - | - | - | - |
| 0.0621 | 5500 | 3.4361 | - | - | - | - |
| 0.0632 | 5600 | 3.4658 | - | - | - | - |
| 0.0644 | 5700 | 3.1167 | - | - | - | - |
| 0.0655 | 5800 | 3.3059 | - | - | - | - |
| 0.0666 | 5900 | 3.1765 | - | - | - | - |
| 0.0678 | 6000 | 3.2381 | 0.7268 | 4.3579 | 4.3579 | 0.2943 |
| 0.0689 | 6100 | 3.0319 | - | - | - | - |
| 0.0700 | 6200 | 3.2476 | - | - | - | - |
| 0.0711 | 6300 | 2.9789 | - | - | - | - |
| 0.0723 | 6400 | 3.1056 | - | - | - | - |
| 0.0734 | 6500 | 3.2808 | - | - | - | - |
| 0.0745 | 6600 | 2.9506 | - | - | - | - |
| 0.0757 | 6700 | 2.8923 | - | - | - | - |
| 0.0768 | 6800 | 3.0534 | - | - | - | - |
| 0.0779 | 6900 | 3.0781 | - | - | - | - |
| 0.0790 | 7000 | 3.3438 | 0.6398 | 4.3437 | 4.3437 | 0.3081 |
| 0.0802 | 7100 | 3.2635 | - | - | - | - |
| 0.0813 | 7200 | 3.2018 | - | - | - | - |
| 0.0824 | 7300 | 2.8889 | - | - | - | - |
| 0.0836 | 7400 | 3.4046 | - | - | - | - |
| 0.0847 | 7500 | 3.4731 | - | - | - | - |
| 0.0858 | 7600 | 3.1368 | - | - | - | - |
| 0.0869 | 7700 | 2.9244 | - | - | - | - |
| 0.0881 | 7800 | 3.1948 | - | - | - | - |
| 0.0892 | 7900 | 3.2156 | - | - | - | - |
| 0.0903 | 8000 | 2.9844 | 0.5916 | 4.3358 | 4.3358 | 0.3234 |
| 0.0915 | 8100 | 2.8774 | - | - | - | - |
| 0.0926 | 8200 | 2.5593 | - | - | - | - |
| 0.0937 | 8300 | 2.8402 | - | - | - | - |
| 0.0949 | 8400 | 3.0853 | - | - | - | - |
| 0.0960 | 8500 | 3.2655 | - | - | - | - |
| 0.0971 | 8600 | 3.1169 | - | - | - | - |
| 0.0982 | 8700 | 3.2144 | - | - | - | - |
| 0.0994 | 8800 | 2.8349 | - | - | - | - |
| 0.1005 | 8900 | 2.9291 | - | - | - | - |
| 0.1016 | 9000 | 2.7601 | 0.5400 | 4.3210 | 4.3210 | 0.3397 |
| 0.1028 | 9100 | 2.8425 | - | - | - | - |
| 0.1039 | 9200 | 3.0608 | - | - | - | - |
| 0.1050 | 9300 | 3.1085 | - | - | - | - |
| 0.1061 | 9400 | 2.9238 | - | - | - | - |
| 0.1073 | 9500 | 2.9525 | - | - | - | - |
| 0.1084 | 9600 | 3.3401 | - | - | - | - |
| 0.1095 | 9700 | 2.9262 | - | - | - | - |
| 0.1107 | 9800 | 3.1004 | - | - | - | - |
| 0.1118 | 9900 | 2.5464 | - | - | - | - |
| 0.1129 | 10000 | 3.1688 | 0.4847 | 4.3110 | 4.3110 | 0.3512 |
| 0.1141 | 10100 | 3.1941 | - | - | - | - |
| 0.1152 | 10200 | 3.0643 | - | - | - | - |
| 0.1163 | 10300 | 2.8023 | - | - | - | - |
| 0.1174 | 10400 | 3.3176 | - | - | - | - |
| 0.1186 | 10500 | 3.162 | - | - | - | - |
| 0.1197 | 10600 | 3.0185 | - | - | - | - |
| 0.1208 | 10700 | 3.0583 | - | - | - | - |
| 0.1220 | 10800 | 3.2895 | - | - | - | - |
| 0.1231 | 10900 | 2.8879 | - | - | - | - |
| 0.1242 | 11000 | 3.135 | 0.4262 | 4.3080 | 4.3080 | 0.3620 |
| 0.1253 | 11100 | 3.1176 | - | - | - | - |
| 0.1265 | 11200 | 3.0155 | - | - | - | - |
| 0.1276 | 11300 | 3.0035 | - | - | - | - |
| 0.1287 | 11400 | 3.0159 | - | - | - | - |
| 0.1299 | 11500 | 2.8225 | - | - | - | - |
| 0.1310 | 11600 | 2.9968 | - | - | - | - |
| 0.1321 | 11700 | 2.9152 | - | - | - | - |
| 0.1332 | 11800 | 3.0774 | - | - | - | - |
| 0.1344 | 11900 | 3.2168 | - | - | - | - |
| 0.1355 | 12000 | 2.7994 | 0.3985 | 4.2907 | 4.2907 | 0.3715 |
| 0.1366 | 12100 | 3.1756 | - | - | - | - |
| 0.1378 | 12200 | 3.3252 | - | - | - | - |
| 0.1389 | 12300 | 3.0435 | - | - | - | - |
| 0.1400 | 12400 | 3.0718 | - | - | - | - |
| 0.1412 | 12500 | 3.121 | - | - | - | - |
| 0.1423 | 12600 | 3.2819 | - | - | - | - |
| 0.1434 | 12700 | 3.0131 | - | - | - | - |
| 0.1445 | 12800 | 3.3347 | - | - | - | - |
| 0.1457 | 12900 | 3.228 | - | - | - | - |
| 0.1468 | 13000 | 2.9512 | 0.3903 | 4.2888 | 4.2888 | 0.3793 |
| 0.1479 | 13100 | 3.0776 | - | - | - | - |
| 0.1491 | 13200 | 2.9721 | - | - | - | - |
| 0.1502 | 13300 | 2.8265 | - | - | - | - |
| 0.1513 | 13400 | 2.9286 | - | - | - | - |
| 0.1524 | 13500 | 2.7661 | - | - | - | - |
| 0.1536 | 13600 | 2.8168 | - | - | - | - |
| 0.1547 | 13700 | 3.1262 | - | - | - | - |
| 0.1558 | 13800 | 3.1392 | - | - | - | - |
| 0.1570 | 13900 | 3.1336 | - | - | - | - |
| 0.1581 | 14000 | 3.1258 | 0.3315 | 4.2807 | 4.2807 | 0.3860 |
| 0.1592 | 14100 | 3.0987 | - | - | - | - |
| 0.1603 | 14200 | 2.7666 | - | - | - | - |
| 0.1615 | 14300 | 3.0599 | - | - | - | - |
| 0.1626 | 14400 | 3.1154 | - | - | - | - |
| 0.1637 | 14500 | 3.1234 | - | - | - | - |
| 0.1649 | 14600 | 3.025 | - | - | - | - |
| 0.1660 | 14700 | 3.0224 | - | - | - | - |
| 0.1671 | 14800 | 2.922 | - | - | - | - |
| 0.1683 | 14900 | 2.7217 | - | - | - | - |
| 0.1694 | 15000 | 2.7902 | 0.3253 | 4.2890 | 4.2890 | 0.3908 |
| 0.1705 | 15100 | 3.2199 | - | - | - | - |
| 0.1716 | 15200 | 3.1018 | - | - | - | - |
| 0.1728 | 15300 | 2.6536 | - | - | - | - |
| 0.1739 | 15400 | 3.0888 | - | - | - | - |
| 0.1750 | 15500 | 2.728 | - | - | - | - |
| 0.1762 | 15600 | 3.0917 | - | - | - | - |
| 0.1773 | 15700 | 2.9809 | - | - | - | - |
| 0.1784 | 15800 | 2.9921 | - | - | - | - |
| 0.1795 | 15900 | 3.1358 | - | - | - | - |
| 0.1807 | 16000 | 3.1537 | 0.3201 | 4.2816 | 4.2816 | 0.3950 |
| 0.1818 | 16100 | 3.0497 | - | - | - | - |
| 0.1829 | 16200 | 3.014 | - | - | - | - |
| 0.1841 | 16300 | 2.7652 | - | - | - | - |
| 0.1852 | 16400 | 2.809 | - | - | - | - |
| 0.1863 | 16500 | 3.138 | - | - | - | - |
| 0.1874 | 16600 | 2.7983 | - | - | - | - |
| 0.1886 | 16700 | 2.9568 | - | - | - | - |
| 0.1897 | 16800 | 2.9604 | - | - | - | - |
| 0.1908 | 16900 | 3.1076 | - | - | - | - |
| 0.1920 | 17000 | 3.0263 | 0.2751 | 4.2702 | 4.2702 | 0.4003 |
| 0.1931 | 17100 | 3.0295 | - | - | - | - |
| 0.1942 | 17200 | 3.1564 | - | - | - | - |
| 0.1954 | 17300 | 2.8307 | - | - | - | - |
| 0.1965 | 17400 | 3.1378 | - | - | - | - |
| 0.1976 | 17500 | 3.0607 | - | - | - | - |
| 0.1987 | 17600 | 2.8302 | - | - | - | - |
| 0.1999 | 17700 | 2.8098 | - | - | - | - |
| 0.2010 | 17800 | 3.4055 | - | - | - | - |
| 0.2021 | 17900 | 2.7756 | - | - | - | - |
| 0.2033 | 18000 | 3.0922 | 0.2955 | 4.2613 | 4.2613 | 0.4060 |
| 0.2044 | 18100 | 3.161 | - | - | - | - |
| 0.2055 | 18200 | 3.3236 | - | - | - | - |
| 0.2066 | 18300 | 2.6951 | - | - | - | - |
| 0.2078 | 18400 | 2.9456 | - | - | - | - |
| 0.2089 | 18500 | 2.7356 | - | - | - | - |
| 0.2100 | 18600 | 3.0398 | - | - | - | - |
| 0.2112 | 18700 | 2.9493 | - | - | - | - |
| 0.2123 | 18800 | 2.9966 | - | - | - | - |
| 0.2134 | 18900 | 3.3613 | - | - | - | - |
| 0.2146 | 19000 | 2.9626 | 0.2534 | 4.2668 | 4.2668 | 0.4097 |
| 0.2157 | 19100 | 3.0809 | - | - | - | - |
| 0.2168 | 19200 | 2.9583 | - | - | - | - |
| 0.2179 | 19300 | 2.9046 | - | - | - | - |
| 0.2191 | 19400 | 3.4546 | - | - | - | - |
| 0.2202 | 19500 | 3.2281 | - | - | - | - |
| 0.2213 | 19600 | 2.8041 | - | - | - | - |
| 0.2225 | 19700 | 2.7885 | - | - | - | - |
| 0.2236 | 19800 | 2.9419 | - | - | - | - |
| 0.2247 | 19900 | 2.9497 | - | - | - | - |
| 0.2258 | 20000 | 2.8604 | 0.2315 | 4.2608 | 4.2608 | 0.4136 |
| 0.2270 | 20100 | 2.897 | - | - | - | - |
| 0.2281 | 20200 | 3.0587 | - | - | - | - |
| 0.2292 | 20300 | 2.9539 | - | - | - | - |
| 0.2304 | 20400 | 3.0268 | - | - | - | - |
| 0.2315 | 20500 | 2.5965 | - | - | - | - |
| 0.2326 | 20600 | 2.5413 | - | - | - | - |
| 0.2337 | 20700 | 2.975 | - | - | - | - |
| 0.2349 | 20800 | 2.8803 | - | - | - | - |
| 0.2360 | 20900 | 2.8471 | - | - | - | - |
| 0.2371 | 21000 | 2.8503 | 0.2041 | 4.2626 | 4.2626 | 0.4157 |
| 0.2383 | 21100 | 3.0019 | - | - | - | - |
| 0.2394 | 21200 | 2.8871 | - | - | - | - |
| 0.2405 | 21300 | 2.8686 | - | - | - | - |
| 0.2417 | 21400 | 3.0021 | - | - | - | - |
| 0.2428 | 21500 | 2.9747 | - | - | - | - |
| 0.2439 | 21600 | 2.8709 | - | - | - | - |
| 0.2450 | 21700 | 3.0914 | - | - | - | - |
| 0.2462 | 21800 | 3.2664 | - | - | - | - |
| 0.2473 | 21900 | 2.7196 | - | - | - | - |
| 0.2484 | 22000 | 3.1535 | 0.2467 | 4.2663 | 4.2663 | 0.4176 |
| 0.2496 | 22100 | 2.8622 | - | - | - | - |
| 0.2507 | 22200 | 2.9969 | - | - | - | - |
| 0.2518 | 22300 | 2.53 | - | - | - | - |
| 0.2529 | 22400 | 2.4632 | - | - | - | - |
| 0.2541 | 22500 | 3.1082 | - | - | - | - |
| 0.2552 | 22600 | 2.5799 | - | - | - | - |
| 0.2563 | 22700 | 2.8729 | - | - | - | - |
| 0.2575 | 22800 | 2.8414 | - | - | - | - |
| 0.2586 | 22900 | 2.8917 | - | - | - | - |
| 0.2597 | 23000 | 2.6811 | 0.2159 | 4.2583 | 4.2583 | 0.4209 |
| 0.2608 | 23100 | 3.0415 | - | - | - | - |
| 0.2620 | 23200 | 2.8393 | - | - | - | - |
| 0.2631 | 23300 | 3.2675 | - | - | - | - |
| 0.2642 | 23400 | 2.8109 | - | - | - | - |
| 0.2654 | 23500 | 3.2762 | - | - | - | - |
| 0.2665 | 23600 | 3.0291 | - | - | - | - |
| 0.2676 | 23700 | 3.0371 | - | - | - | - |
| 0.2688 | 23800 | 2.5999 | - | - | - | - |
| 0.2699 | 23900 | 3.1188 | - | - | - | - |
| 0.2710 | 24000 | 2.548 | 0.2729 | 4.2453 | 4.2453 | 0.4242 |
| 0.2721 | 24100 | 2.8282 | - | - | - | - |
| 0.2733 | 24200 | 2.872 | - | - | - | - |
| 0.2744 | 24300 | 2.6728 | - | - | - | - |
| 0.2755 | 24400 | 3.229 | - | - | - | - |
| 0.2767 | 24500 | 2.6548 | - | - | - | - |
| 0.2778 | 24600 | 2.9694 | - | - | - | - |
| 0.2789 | 24700 | 2.6256 | - | - | - | - |
| 0.2800 | 24800 | 3.0095 | - | - | - | - |
| 0.2812 | 24900 | 3.2991 | - | - | - | - |
| 0.2823 | 25000 | 2.7506 | 0.2124 | 4.2584 | 4.2584 | 0.4249 |
| 0.2834 | 25100 | 2.7212 | - | - | - | - |
| 0.2846 | 25200 | 3.1904 | - | - | - | - |
| 0.2857 | 25300 | 2.9579 | - | - | - | - |
| 0.2868 | 25400 | 3.0365 | - | - | - | - |
| 0.2880 | 25500 | 3.053 | - | - | - | - |
| 0.2891 | 25600 | 2.9033 | - | - | - | - |
| 0.2902 | 25700 | 2.6707 | - | - | - | - |
| 0.2913 | 25800 | 2.8541 | - | - | - | - |
| 0.2925 | 25900 | 3.047 | - | - | - | - |
| 0.2936 | 26000 | 2.5607 | 0.2063 | 4.2468 | 4.2468 | 0.4281 |
| 0.2947 | 26100 | 2.9208 | - | - | - | - |
| 0.2959 | 26200 | 2.8091 | - | - | - | - |
| 0.2970 | 26300 | 3.5143 | - | - | - | - |
| 0.2981 | 26400 | 2.5564 | - | - | - | - |
| 0.2992 | 26500 | 2.8665 | - | - | - | - |
| 0.3004 | 26600 | 2.5691 | - | - | - | - |
| 0.3015 | 26700 | 2.5526 | - | - | - | - |
| 0.3026 | 26800 | 2.7084 | - | - | - | - |
| 0.3038 | 26900 | 3.1267 | - | - | - | - |
| 0.3049 | 27000 | 2.4162 | 0.1569 | 4.2439 | 4.2439 | 0.4296 |
| 0.3060 | 27100 | 2.5168 | - | - | - | - |
| 0.3071 | 27200 | 3.0819 | - | - | - | - |
| 0.3083 | 27300 | 3.0642 | - | - | - | - |
| 0.3094 | 27400 | 3.2743 | - | - | - | - |
| 0.3105 | 27500 | 2.7929 | - | - | - | - |
| 0.3117 | 27600 | 2.8661 | - | - | - | - |
| 0.3128 | 27700 | 2.9403 | - | - | - | - |
| 0.3139 | 27800 | 2.8967 | - | - | - | - |
| 0.3151 | 27900 | 2.8949 | - | - | - | - |
| 0.3162 | 28000 | 2.9087 | 0.1647 | 4.2450 | 4.2450 | 0.4316 |
| 0.3173 | 28100 | 2.7417 | - | - | - | - |
| 0.3184 | 28200 | 3.0461 | - | - | - | - |
| 0.3196 | 28300 | 2.747 | - | - | - | - |
| 0.3207 | 28400 | 2.8057 | - | - | - | - |
| 0.3218 | 28500 | 3.0305 | - | - | - | - |
| 0.3230 | 28600 | 3.1517 | - | - | - | - |
| 0.3241 | 28700 | 2.9611 | - | - | - | - |
| 0.3252 | 28800 | 2.7057 | - | - | - | - |
| 0.3263 | 28900 | 2.5268 | - | - | - | - |
| 0.3275 | 29000 | 2.9869 | 0.2016 | 4.2455 | 4.2455 | 0.4334 |
| 0.3286 | 29100 | 3.2638 | - | - | - | - |
| 0.3297 | 29200 | 2.8948 | - | - | - | - |
| 0.3309 | 29300 | 3.0118 | - | - | - | - |
| 0.3320 | 29400 | 2.8534 | - | - | - | - |
| 0.3331 | 29500 | 3.1632 | - | - | - | - |
| 0.3342 | 29600 | 2.9116 | - | - | - | - |
| 0.3354 | 29700 | 2.5557 | - | - | - | - |
| 0.3365 | 29800 | 2.7745 | - | - | - | - |
| 0.3376 | 29900 | 2.5932 | - | - | - | - |
| 0.3388 | 30000 | 2.7092 | 0.1921 | 4.2458 | 4.2458 | 0.4347 |
| 0.3399 | 30100 | 3.2183 | - | - | - | - |
| 0.3410 | 30200 | 2.857 | - | - | - | - |
| 0.3422 | 30300 | 2.9008 | - | - | - | - |
| 0.3433 | 30400 | 2.8235 | - | - | - | - |
| 0.3444 | 30500 | 2.6956 | - | - | - | - |
| 0.3455 | 30600 | 2.9611 | - | - | - | - |
| 0.3467 | 30700 | 3.1242 | - | - | - | - |
| 0.3478 | 30800 | 3.1466 | - | - | - | - |
| 0.3489 | 30900 | 2.8542 | - | - | - | - |
| 0.3501 | 31000 | 2.8809 | - | - | - | - |
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.0
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on Supabase/gte-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Supabase/gte-small](https://huggingface.co/Supabase/gte-small) on the [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1), [stsb](https://huggingface.co/datasets/emrecan/all-nli-tr) and [x1saint](https://huggingface.co/datasets/x1saint/sts) datasets. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Supabase/gte-small](https://huggingface.co/Supabase/gte-small) <!-- at revision 93b36ff09519291b77d6000d2e86bd8565378086 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1)
- [stsb](https://huggingface.co/datasets/emrecan/all-nli-tr)
- [x1saint](https://huggingface.co/datasets/x1saint/sts)
- **Language:** tr
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("x1saint/gte-small-tr")
# Run inference
sentences = [
'answers-forums',
'2015',
'1017',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.427 |
| **spearman_cosine** | **0.4347** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### all-nli-pair-class
* Dataset: [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1) at [67baa14](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1/tree/67baa141cf4f6634c983d77eea193c5535611e5a)
* Size: 474,283 training samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 19 tokens</li><li>mean: 419.29 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 40 tokens</li><li>mean: 401.34 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>0: ~40.80%</li><li>1: ~42.60%</li><li>2: ~16.60%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Davacı tarafından davalı aleyhine açılan İtirazın İptali davasının mahkememizde yapılan açık yargılaması sonunda dosya incelendi. AÇILAN DAVA VE İDDİA :Davacı vekilinin dava dilekçesinde özetle; Müvekkilinin EPDK'dan (Enerji Piyasası DenetlemeKurumu) aldığı onay ile Eylül 2012 den bu yana tüm Türkiye'de elektrik enerjisi tedariki ve toptan satış hizmeti sunduğunu, davalıdan da davacı şirket ile akdettiği sözleşmeye binaen müvekkili şirkketten satın aldığı elektrik ödemelerini aksattıığı düzenlenen faturaları ödemedğinden temerrüde düştüğünü, davacı tarafından defalarca uyarılmasına rağmen de borcunu ödemedeğini bunün üzerine müvekkili İstanbul ... İcra müdürlüğünün ... Esas sayılı dosyasıda ilamsız icra takibi başlattığını davalının borca kötü niyetli olarak itiraz ettiğini ve takibin durduğunu itirazın iptali ile takibin devamına davalı hakkında haksız ve kötü niyetli irizları nedeniyle %20 den aşağı olmamak üzere icra inkar tazminatına hükmedilmesine ve yargılama gideri ile vekale...</code> | <code>Davacı vekili dava dilekçesinde özetle;Müvekkili ...'a karşı halihazırda 17/07/2018'de açılmış .... İcra Dairesi'nde ... Esas Sayılı dosya ile devam eden bir icra dosyası bulunduğunu, bu icra dosyası kapsamında 12/11/2018'den beri müvekkilinin maaşına haciz uygulandığını, dosya ödeme emrinde dosyanın dayanağı, "(Kredi kartı borcu) .... İcra-... Esas dosyalarından kaynaklanan alacağın takipte ve tahsilde tekerrür olmamak üzere tahsili talebidir." şeklinde yazıldığını, müvekkili ...'ın, 2003 yılında kimliğinin çalınarak bazı bankacılık ve telefon işlemlerinde kullanıldığını, adına kredi çekildiğini, kredi kartı çıkarıldığını, telefon hattı açıldığını ve o dönemde bu konuda şikayette bulunduğunu, ... Cumhuriyet Başsavcılığı'nca 28/01/2004 suç tarihli ... soruşturma numaralı dosyasına ulaşıldığını, bu dosyada, müvekkilinin şüpheli olarak görünmekte iken şikayetçi ...A.Ş.' olduğunu, yapılan soruşturma sonucunda gerçek şüpheli şahısların ortaya çıkarılamadığı, fakat müvekkilinin suçlu olmad...</code> | <code>0</code> |
| <code>Davacı vekili dava dilekçesinde özetle; müvekkili şirket tarafından,----işbu sözleşmeye istinaden düzenlenen ---- ait alüminyum levha emtiasının, davalı taşıyıcı şirket tarafından, ---- tarihinde, dava dışı sigortalı firmanın ------ fabrikasından yüklenildiğini, davalı taşıyıcı firmanın sorumluluğunda, --- nakli gerçekleşen toplam ---; net ağırlığı --- uygun ambalajlar ile nakledilen emtiaların, gümrük işlemleri sonrası--- alıcı şirket tarafından --- tarihinde teslim alındığı ancak teslim esnasında ------paket no’lu levhaların ıslanması sebebi ile emtianın hasara uğramış olduğu tespit edilerek taşıma senedine ihtirazi kayıt düşüldüğü ve bu levhaların hurda edilmek üzere ayrıldığını, davalı taşıyıcı şirketin sorumluluk sahasında gerçekleşen işbu hasar sonrası, bağımsız ve uzman eksper tarafından yapılan incelemelere istinaden tanzim edilmiş olan ekspertiz raporunda; hasar nedeninin, emtianın taşıyıcının sorumluluğunda bulunduğu esnada ıslanarak hasara uğramış olmasından, ıslanan paketi...</code> | <code>Davacı vekili dava dilekçesinde özetle; Müvekkili------- ------------- tarihinde davalının------ aracın çarpması nedeniyle hasara uğradığını, meydana gelen kazada davalının %100 kusurlu olduğunu, müvekkili şirket tarafından zarar gören araç için ------ hasar tazminatı ödendiğini, yapılan incelemeler neticesinde davalının sigortacısı olduğu aracın kusurlu olduğunun tespit edildiğini, kaza neticesinde ------ aracın ---- geldiğini, buna göre aracın piyasa değerinin tespit edildiğini ve tespit edilen değerin ------------ tarafından, kalan ------ ise -----tarafından ödendiğini, ayrıca, -----aracın hasarı sırasında ------ kırılması,---- durdurulamaması nedeniyle ------- hasarın tespitinin de ayrıca gerekli hale geldiğini, bu nedenle müvekkili --------- hasarının tespiti için---------------nedeniyle-------- daha ödendiğini, davalının, kusurlu --------------- nedeniyle davalı tarafa başvurulduğunu, davalı tarafın --------- hiçbir gerekçesi olmaksızın ödemediğini, müvekkili şirket tarafından 1....</code> | <code>1</code> |
| <code>Davacı vekili dava dilekçesinde özetle, müvekkili şirketin keşidecisi olduğu ----------------- Taşdelen Şubesine ait, ---- seri numaralı, 17.02.2019 vade tarihli, 50.000,00-TL bedelli çeki lehtara vermek üzere hazırlandığını ancak müvekkili şirket yetkilisinin cüzdanını kaybetmesi suretiyle çeklerin zayi olduğunu, söz konusu çeklerin kötü niyetli üçüncü kişilerin eline geçmesi halinde müvekkilinin mağdur olacağını, bu nedenle ödemeden men talimatı verilmesini ve zayi edilen çekin iptaline dair karar verilmesini talep ve dava etmiştir.</code> | <code>Davacı vekili dava dilekçesinde özetle; ... plakalı araç ... sayılı Genişletilmiş Kasko Sigortası Poliçesi ile müvekkili şirkete, sigortalı olduğunu, hadisenin, 14/06/2017 tarihinde ... plakalı aracın ... ... ... yolu üzerinde seyir halinde iken önünde seyir halinde bulunan sigortalı ... plakalı aracın trafik nedeniyle duraksaması nedeniyle duramayarak çarpması akabinde sigortalı ... plakalı aracın önünde seyir halinde bulunan ... plakalı araca, onun da önünde seyir halinde bulunan ... plakalı araca arkadan çarpması ve bu araçların sırasıyla ... aracın arkaya ... plakalı araca onun da duramayarak ... plakalı araca arkadan çarpması neticesinde çoklu maddi hasarlı trafik kazası meydana gelmiştir, Davalı/Borçlu ... sigortalısı olan ... plakalı aracın, müvekkil şirket sigortalısı olan ... Plakalı araca çarpması neticesinde maddi hasar aldığını, sigortalının, yapmış olduğu başvuru neticesinde Hasar gören sigortalı araca yaptırılan ekspertiz incelemesi sonucunda aracın hasarlı olduğunun tesp...</code> | <code>0</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/emrecan/all-nli-tr) at [daeabfb](https://huggingface.co/datasets/emrecan/all-nli-tr/tree/daeabfbc01f82757ab998bd23ce0ddfceaa5e24d)
* Size: 941,086 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 47.0 tokens</li><li>max: 301 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 25.29 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.48</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|:-----------------|
| <code>Kavramsal olarak krem kaymağının iki temel boyutu vardır - ürün ve coğrafya.</code> | <code>Ürün ve coğrafya krem kaymağını işe yarıyor.</code> | <code>0.5</code> |
| <code>Mevsim boyunca ve sanırım senin seviyendeyken onları bir sonraki seviyeye düşürürsün. Eğer ebeveyn takımını çağırmaya karar verirlerse Braves üçlü A'dan birini çağırmaya karar verirlerse çifte bir adam onun yerine geçmeye gider ve bekar bir adam gelir.</code> | <code>Eğer insanlar hatırlarsa, bir sonraki seviyeye düşersin.</code> | <code>1.0</code> |
| <code>Numaramızdan biri talimatlarınızı birazdan yerine getirecektir.</code> | <code>Ekibimin bir üyesi emirlerinizi büyük bir hassasiyetle yerine getirecektir.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
#### x1saint
* Dataset: [x1saint](https://huggingface.co/datasets/x1saint/sts) at [85ac563](https://huggingface.co/datasets/x1saint/sts/tree/85ac563a90a8b801479ac1bc689b743574bb0e90)
* Size: 1,523 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 42.14 tokens</li><li>max: 353 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 40.23 tokens</li><li>max: 172 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.69</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-----------------|
| <code>George Orwell, 1903 yılında Hindistan'ın Bengal bölgesinde doğdu.</code> | <code>George Orwell, Montihari şehrinde doğmuştur.</code> | <code>0.8</code> |
| <code>Orwell, Eton College'de eğitimini tamamladı.</code> | <code>Orwell öğrenimini Eton College'de bitirdi.</code> | <code>1.0</code> |
| <code>George Orwell, İngiltere yönetimine karşı çıkarak Hindistan Polisi görevinden istifa etti.</code> | <code>Orwell, İmparatorluk yönetiminin iç yüzünü görünce istifayı tercih etti.</code> | <code>0.8</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Evaluation Datasets
#### all-nli-pair-class
* Dataset: [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1) at [67baa14](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1/tree/67baa141cf4f6634c983d77eea193c5535611e5a)
* Size: 5,000 evaluation samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 74 tokens</li><li>mean: 420.94 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 406.85 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>0: ~44.30%</li><li>1: ~39.00%</li><li>2: ~16.70%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Davacı vekili dava dilekçesinde özetle; Davacı şirketin taşıyan sıfatıyla davalı şirkete ait yükü kendisi ile yapılan taşıma sözleşmesi uyarınca ... Limanından ... tarihinde yükleyerek .../ ... Limanı’na taşıdığını ve yükü ihtiva eden 3 adet konteyneri liman sahasına kapalı ve mühürlü olarak ... tarihinde gemiden tahliye ettiğini, ... numaralı konişmentoda belirtildiği üzere, söz konusu deniz taşıma işinde davacı şirkete ait ‘...’ numaralı 3 adet konteynerin kullanıldığını, taşıma konusu yüklere ilişkin varış ihbarlarının düzenlendiğini ve yüklerin tahliye edildiğini, bugüne dek söz konusu yüklerin teslim alınmadığını, yüklerin konişmentolarda öngörülen süre içerisinde gönderilen tarafından teslim alınmaması nedeniyle, davacı şirket tarafından yapılan bütün iyiniyetli girişimlerin sonuçsuz kaldığını, aradan geçen yaklaşık 11 aylık süre zarfında yükün teslim alınmadığını, konteynerlerin tahliye edilmediğini, konteynerlerin tahliye edilmemesi üzerine davacı taşıyan şirket çalışanı tarafı...</code> | <code>Davacı vekili dava dilekçesinde özetle; Davalı tarafın taşıyan müvekkili ... A/Ş vasıtası ile ... numaralı konişmento tahtında ... numaralı 1 adet 40'lık REEFER tip konteyner muhteviyatı yükünü Hindistan'ın Cochin Limanından Gemlik Limanı' na denizyolu ile taşıttığını, bu taşımalarda davalı yanın ithalatçı ve taşımaya ilişkin konişmentoya göre yük alıcısı konumunda olduğunu, davalının ithalatçısı ve yük alıcısı olduğu ... numaralı konişmento tahtında taşınan 1 adet 40 'lık reefer konteynerin yükleme limanı olan Hindistan' in Cochin Limanı' nda 11.07.2017 tarihinde gemiye yüklendiğini ve 28.08.2017 tarihinde Gemlik ... Limanı' nda gemiden tahliye edildiğini, davalının ... numaralı konişmento tahtında taşman emtiaları tahliye limanı olan Gemlik Limanı' na ulaşmadan önce davalıya bir örneği delil listelerinde sunulan "..." yani "Varış İhbarnamesi" gönderildiği ve davalının yükünün 28.08.2017 tarihinde Gemlik Limanı' na ulaşacağının ihbar edildiğini, tahliye limanındaki konteyner muhtevi...</code> | <code>1</code> |
| <code> Davacı vekili dava dilekçesinde özetle; Davacı ... A.Ş.'nin 1986 yılından beri Irak piyasasında iş yapan ve gerek iş ahlakı ve gerekse dürüstlüğüyle tanınan ve dolayısıyla Irak'ta yapılacak yeni bir iş olduğunda, ilk haberdar edilen bir firma olduğunu, 1989 yılında da İrak'a daimi ofisini açtığını, 2001 yılında ilgili bakanlığın davacı şirketten Saf Bakır Şerit talebinde bulunduğunu, davacının da bunu temin etmek için davalı şirketle ilişki kurduğunu, davalı şirketin Irak'ın talep ettiği spesifikasyonda mal üretecek araca sahip bulunmadığını beyan etmesi üzerine, davacı şirketin bu konuda da yardımcı olduğunu ve üretimi gerçekleştirecek makinelerin davalı tarafından teminine hem teknolojik bilgi ve hem de maddi katkıda bulunduğunu, böylelikle ilk olarak 2002 yılında, davalının ürettiği malların davacı şirket tarafından Irak'a pazarlandığını, bu arada Amerika Irak'ı istila edince, ilişkilerin bir süre askıda kaldığını ve nihayet 2006 yılında Irak Sanayi Bakanlığı'nın davacı şirketi yen...</code> | <code>Haksız rekabete ilişkin<br>bu Kısım hükümlerinin amacı, bütün katılanların menfaatine, dürüst ve bozulmamış<br>rekabetin sağlanmasıdır.Rakipler arasında veya tedarik edenlerle müşteriler<br>arasındaki ilişkileri etkileyen aldatıcı veya dürüstlük kuralına diğer şekillerdeki<br>aykırı davranışlar ile ticari uygulamalar haksız ve hukuka aykırıdır.</code> | <code>2</code> |
| <code> Davacı vekili dava dilekçesinde özetle; Müvekkili şirketin perakende sektöründe ağırlıklı olarak elektronik cihazların satışı işiyle iştigal ettiğini ve tüketiciler tarafından çeşitli şikayetlerle kendisine teslim edilen ürünleri, teknik servis olarak faaliyet gösteren belirli şirketlere onarım için yönlendirdiğini, bu lojistik faaliyetlerin zaman zaman, kargo şirketi olarak faaliyet gösteren davalı taraf ile gerçekleştirildiğini, ... A.Ş.'nin, müvekkili şirketin ticari ilişkileri kapsamında belirli ürünlerini teslim ettiği bir yetkili teknik servis olarak faaliyet gösterdiğini ve belirli cihazları onarım için teslim aldıktan sonra yine müvekkili şirkete teslim ettiğini, bu operasyonların dış lojistik tarafının da ...'nin anlaşmalı olduğu kargo şirketi olan davalı taraf ile gerçekleştirildiğini, bu ticari ilişki sebebi ile yedi adet cep telefonun da onarım için ...’ne gönderildiğini ve ...’nde işleme tabi tutulan 7 adet telefonların gönderici sıfatı ile ... tarafından müvekkili şirket...</code> | <code>Zarara, kasten veya<br>pervasızca bir davranışla ve böyle bir zararın meydana gelmesi ihtimalinin bilinciyle<br>işlenmiş bir fiilinin veya ihmalinin sebebiyet verdiği ispat edilen taşıyıcı veya<br>879 uncu maddede belirtilen kişiler, bu Kısımda öngörülen sorumluluktan kurtulma<br>hâllerinden ve sorumluluk sınırlamalarından yararlanamaz.</code> | <code>2</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/figenfikri/stsb_tr) at [bb7685b](https://huggingface.co/datasets/figenfikri/stsb_tr/tree/bb7685bff798ac1ed07d8cd08e5df43eaaeba2ee)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 45.29 tokens</li><li>max: 304 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 24.86 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Yeni haklar yeterince güzel.</code> | <code>Herkes gerçekten en yeni faydaları seviyor</code> | <code>0.5</code> |
| <code>Bu site, tüm ödül kazananların bir listesini ve Hükümet Yönetici makalelerinin aranabilir bir veritabanını içerir.</code> | <code>Web sitesinde yer alan Hükümet Yürütme makaleleri aranamaz.</code> | <code>0.0</code> |
| <code>Bilemiyorum. Onunla ilgili karışık duygularım var. Bazen ondan hoşlanıyorum ama aynı zamanda birisinin onu dövmesini görmeyi seviyorum.</code> | <code>Çoğunlukla ondan hoşlanıyorum, ama yine de birinin onu dövdüğünü görmekten zevk alıyorum.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
#### x1saint
* Dataset: [x1saint](https://huggingface.co/datasets/figenfikri/stsb_tr) at [bb7685b](https://huggingface.co/datasets/figenfikri/stsb_tr/tree/bb7685bff798ac1ed07d8cd08e5df43eaaeba2ee)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 45.29 tokens</li><li>max: 304 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 24.86 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Yeni haklar yeterince güzel.</code> | <code>Herkes gerçekten en yeni faydaları seviyor</code> | <code>0.5</code> |
| <code>Bu site, tüm ödül kazananların bir listesini ve Hükümet Yönetici makalelerinin aranabilir bir veritabanını içerir.</code> | <code>Web sitesinde yer alan Hükümet Yürütme makaleleri aranamaz.</code> | <code>0.0</code> |
| <code>Bilemiyorum. Onunla ilgili karışık duygularım var. Bazen ondan hoşlanıyorum ama aynı zamanda birisinin onu dövmesini görmeyi seviyorum.</code> | <code>Çoğunlukla ondan hoşlanıyorum, ama yine de birinin onu dövdüğünü görmekten zevk alıyorum.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 1e-06
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | all-nli-pair-class loss | stsb loss | x1saint loss | sts-dev_spearman_cosine |
|:------:|:-----:|:-------------:|:-----------------------:|:---------:|:------------:|:-----------------------:|
| 0.0011 | 100 | 3.5189 | - | - | - | - |
| 0.0023 | 200 | 3.0711 | - | - | - | - |
| 0.0011 | 100 | 3.5187 | - | - | - | - |
| 0.0023 | 200 | 3.0709 | - | - | - | - |
| 0.0034 | 300 | 3.2458 | - | - | - | - |
| 0.0045 | 400 | 3.1891 | - | - | - | - |
| 0.0056 | 500 | 3.3556 | - | - | - | - |
| 0.0068 | 600 | 3.4514 | - | - | - | - |
| 0.0079 | 700 | 3.2443 | - | - | - | - |
| 0.0090 | 800 | 3.2109 | - | - | - | - |
| 0.0102 | 900 | 3.4956 | - | - | - | - |
| 0.0113 | 1000 | 3.4255 | 1.0730 | 4.5456 | 4.5456 | 0.2466 |
| 0.0124 | 1100 | 3.1637 | - | - | - | - |
| 0.0136 | 1200 | 3.2261 | - | - | - | - |
| 0.0147 | 1300 | 3.3524 | - | - | - | - |
| 0.0158 | 1400 | 3.4991 | - | - | - | - |
| 0.0169 | 1500 | 3.5157 | - | - | - | - |
| 0.0181 | 1600 | 3.5079 | - | - | - | - |
| 0.0192 | 1700 | 3.2644 | - | - | - | - |
| 0.0203 | 1800 | 3.2737 | - | - | - | - |
| 0.0215 | 1900 | 3.5461 | - | - | - | - |
| 0.0226 | 2000 | 3.6754 | 1.0257 | 4.5012 | 4.5012 | 0.2563 |
| 0.0237 | 2100 | 3.414 | - | - | - | - |
| 0.0248 | 2200 | 3.0237 | - | - | - | - |
| 0.0260 | 2300 | 3.383 | - | - | - | - |
| 0.0271 | 2400 | 3.2955 | - | - | - | - |
| 0.0282 | 2500 | 3.0388 | - | - | - | - |
| 0.0294 | 2600 | 3.2 | - | - | - | - |
| 0.0305 | 2700 | 3.3309 | - | - | - | - |
| 0.0316 | 2800 | 3.0292 | - | - | - | - |
| 0.0327 | 2900 | 2.9697 | - | - | - | - |
| 0.0339 | 3000 | 2.8957 | 0.9897 | 4.4610 | 4.4610 | 0.2651 |
| 0.0350 | 3100 | 3.3987 | - | - | - | - |
| 0.0361 | 3200 | 3.0995 | - | - | - | - |
| 0.0373 | 3300 | 3.1995 | - | - | - | - |
| 0.0384 | 3400 | 3.4175 | - | - | - | - |
| 0.0395 | 3500 | 3.1195 | - | - | - | - |
| 0.0407 | 3600 | 3.1149 | - | - | - | - |
| 0.0418 | 3700 | 3.2614 | - | - | - | - |
| 0.0429 | 3800 | 3.3849 | - | - | - | - |
| 0.0440 | 3900 | 3.3391 | - | - | - | - |
| 0.0452 | 4000 | 3.1803 | 0.9553 | 4.4195 | 4.4195 | 0.2719 |
| 0.0463 | 4100 | 3.0133 | - | - | - | - |
| 0.0474 | 4200 | 3.3885 | - | - | - | - |
| 0.0486 | 4300 | 3.132 | - | - | - | - |
| 0.0497 | 4400 | 3.2 | - | - | - | - |
| 0.0508 | 4500 | 3.3284 | - | - | - | - |
| 0.0519 | 4600 | 3.1747 | - | - | - | - |
| 0.0531 | 4700 | 3.1531 | - | - | - | - |
| 0.0542 | 4800 | 3.3195 | - | - | - | - |
| 0.0553 | 4900 | 3.0077 | - | - | - | - |
| 0.0565 | 5000 | 2.7127 | 0.8501 | 4.3839 | 4.3839 | 0.2808 |
| 0.0576 | 5100 | 3.2574 | - | - | - | - |
| 0.0587 | 5200 | 3.3916 | - | - | - | - |
| 0.0598 | 5300 | 3.0803 | - | - | - | - |
| 0.0610 | 5400 | 3.3637 | - | - | - | - |
| 0.0621 | 5500 | 3.4361 | - | - | - | - |
| 0.0632 | 5600 | 3.4658 | - | - | - | - |
| 0.0644 | 5700 | 3.1167 | - | - | - | - |
| 0.0655 | 5800 | 3.3059 | - | - | - | - |
| 0.0666 | 5900 | 3.1765 | - | - | - | - |
| 0.0678 | 6000 | 3.2381 | 0.7268 | 4.3579 | 4.3579 | 0.2943 |
| 0.0689 | 6100 | 3.0319 | - | - | - | - |
| 0.0700 | 6200 | 3.2476 | - | - | - | - |
| 0.0711 | 6300 | 2.9789 | - | - | - | - |
| 0.0723 | 6400 | 3.1056 | - | - | - | - |
| 0.0734 | 6500 | 3.2808 | - | - | - | - |
| 0.0745 | 6600 | 2.9506 | - | - | - | - |
| 0.0757 | 6700 | 2.8923 | - | - | - | - |
| 0.0768 | 6800 | 3.0534 | - | - | - | - |
| 0.0779 | 6900 | 3.0781 | - | - | - | - |
| 0.0790 | 7000 | 3.3438 | 0.6398 | 4.3437 | 4.3437 | 0.3081 |
| 0.0802 | 7100 | 3.2635 | - | - | - | - |
| 0.0813 | 7200 | 3.2018 | - | - | - | - |
| 0.0824 | 7300 | 2.8889 | - | - | - | - |
| 0.0836 | 7400 | 3.4046 | - | - | - | - |
| 0.0847 | 7500 | 3.4731 | - | - | - | - |
| 0.0858 | 7600 | 3.1368 | - | - | - | - |
| 0.0869 | 7700 | 2.9244 | - | - | - | - |
| 0.0881 | 7800 | 3.1948 | - | - | - | - |
| 0.0892 | 7900 | 3.2156 | - | - | - | - |
| 0.0903 | 8000 | 2.9844 | 0.5916 | 4.3358 | 4.3358 | 0.3234 |
| 0.0915 | 8100 | 2.8774 | - | - | - | - |
| 0.0926 | 8200 | 2.5593 | - | - | - | - |
| 0.0937 | 8300 | 2.8402 | - | - | - | - |
| 0.0949 | 8400 | 3.0853 | - | - | - | - |
| 0.0960 | 8500 | 3.2655 | - | - | - | - |
| 0.0971 | 8600 | 3.1169 | - | - | - | - |
| 0.0982 | 8700 | 3.2144 | - | - | - | - |
| 0.0994 | 8800 | 2.8349 | - | - | - | - |
| 0.1005 | 8900 | 2.9291 | - | - | - | - |
| 0.1016 | 9000 | 2.7601 | 0.5400 | 4.3210 | 4.3210 | 0.3397 |
| 0.1028 | 9100 | 2.8425 | - | - | - | - |
| 0.1039 | 9200 | 3.0608 | - | - | - | - |
| 0.1050 | 9300 | 3.1085 | - | - | - | - |
| 0.1061 | 9400 | 2.9238 | - | - | - | - |
| 0.1073 | 9500 | 2.9525 | - | - | - | - |
| 0.1084 | 9600 | 3.3401 | - | - | - | - |
| 0.1095 | 9700 | 2.9262 | - | - | - | - |
| 0.1107 | 9800 | 3.1004 | - | - | - | - |
| 0.1118 | 9900 | 2.5464 | - | - | - | - |
| 0.1129 | 10000 | 3.1688 | 0.4847 | 4.3110 | 4.3110 | 0.3512 |
| 0.1141 | 10100 | 3.1941 | - | - | - | - |
| 0.1152 | 10200 | 3.0643 | - | - | - | - |
| 0.1163 | 10300 | 2.8023 | - | - | - | - |
| 0.1174 | 10400 | 3.3176 | - | - | - | - |
| 0.1186 | 10500 | 3.162 | - | - | - | - |
| 0.1197 | 10600 | 3.0185 | - | - | - | - |
| 0.1208 | 10700 | 3.0583 | - | - | - | - |
| 0.1220 | 10800 | 3.2895 | - | - | - | - |
| 0.1231 | 10900 | 2.8879 | - | - | - | - |
| 0.1242 | 11000 | 3.135 | 0.4262 | 4.3080 | 4.3080 | 0.3620 |
| 0.1253 | 11100 | 3.1176 | - | - | - | - |
| 0.1265 | 11200 | 3.0155 | - | - | - | - |
| 0.1276 | 11300 | 3.0035 | - | - | - | - |
| 0.1287 | 11400 | 3.0159 | - | - | - | - |
| 0.1299 | 11500 | 2.8225 | - | - | - | - |
| 0.1310 | 11600 | 2.9968 | - | - | - | - |
| 0.1321 | 11700 | 2.9152 | - | - | - | - |
| 0.1332 | 11800 | 3.0774 | - | - | - | - |
| 0.1344 | 11900 | 3.2168 | - | - | - | - |
| 0.1355 | 12000 | 2.7994 | 0.3985 | 4.2907 | 4.2907 | 0.3715 |
| 0.1366 | 12100 | 3.1756 | - | - | - | - |
| 0.1378 | 12200 | 3.3252 | - | - | - | - |
| 0.1389 | 12300 | 3.0435 | - | - | - | - |
| 0.1400 | 12400 | 3.0718 | - | - | - | - |
| 0.1412 | 12500 | 3.121 | - | - | - | - |
| 0.1423 | 12600 | 3.2819 | - | - | - | - |
| 0.1434 | 12700 | 3.0131 | - | - | - | - |
| 0.1445 | 12800 | 3.3347 | - | - | - | - |
| 0.1457 | 12900 | 3.228 | - | - | - | - |
| 0.1468 | 13000 | 2.9512 | 0.3903 | 4.2888 | 4.2888 | 0.3793 |
| 0.1479 | 13100 | 3.0776 | - | - | - | - |
| 0.1491 | 13200 | 2.9721 | - | - | - | - |
| 0.1502 | 13300 | 2.8265 | - | - | - | - |
| 0.1513 | 13400 | 2.9286 | - | - | - | - |
| 0.1524 | 13500 | 2.7661 | - | - | - | - |
| 0.1536 | 13600 | 2.8168 | - | - | - | - |
| 0.1547 | 13700 | 3.1262 | - | - | - | - |
| 0.1558 | 13800 | 3.1392 | - | - | - | - |
| 0.1570 | 13900 | 3.1336 | - | - | - | - |
| 0.1581 | 14000 | 3.1258 | 0.3315 | 4.2807 | 4.2807 | 0.3860 |
| 0.1592 | 14100 | 3.0987 | - | - | - | - |
| 0.1603 | 14200 | 2.7666 | - | - | - | - |
| 0.1615 | 14300 | 3.0599 | - | - | - | - |
| 0.1626 | 14400 | 3.1154 | - | - | - | - |
| 0.1637 | 14500 | 3.1234 | - | - | - | - |
| 0.1649 | 14600 | 3.025 | - | - | - | - |
| 0.1660 | 14700 | 3.0224 | - | - | - | - |
| 0.1671 | 14800 | 2.922 | - | - | - | - |
| 0.1683 | 14900 | 2.7217 | - | - | - | - |
| 0.1694 | 15000 | 2.7902 | 0.3253 | 4.2890 | 4.2890 | 0.3908 |
| 0.1705 | 15100 | 3.2199 | - | - | - | - |
| 0.1716 | 15200 | 3.1018 | - | - | - | - |
| 0.1728 | 15300 | 2.6536 | - | - | - | - |
| 0.1739 | 15400 | 3.0888 | - | - | - | - |
| 0.1750 | 15500 | 2.728 | - | - | - | - |
| 0.1762 | 15600 | 3.0917 | - | - | - | - |
| 0.1773 | 15700 | 2.9809 | - | - | - | - |
| 0.1784 | 15800 | 2.9921 | - | - | - | - |
| 0.1795 | 15900 | 3.1358 | - | - | - | - |
| 0.1807 | 16000 | 3.1537 | 0.3201 | 4.2816 | 4.2816 | 0.3950 |
| 0.1818 | 16100 | 3.0497 | - | - | - | - |
| 0.1829 | 16200 | 3.014 | - | - | - | - |
| 0.1841 | 16300 | 2.7652 | - | - | - | - |
| 0.1852 | 16400 | 2.809 | - | - | - | - |
| 0.1863 | 16500 | 3.138 | - | - | - | - |
| 0.1874 | 16600 | 2.7983 | - | - | - | - |
| 0.1886 | 16700 | 2.9568 | - | - | - | - |
| 0.1897 | 16800 | 2.9604 | - | - | - | - |
| 0.1908 | 16900 | 3.1076 | - | - | - | - |
| 0.1920 | 17000 | 3.0263 | 0.2751 | 4.2702 | 4.2702 | 0.4003 |
| 0.1931 | 17100 | 3.0295 | - | - | - | - |
| 0.1942 | 17200 | 3.1564 | - | - | - | - |
| 0.1954 | 17300 | 2.8307 | - | - | - | - |
| 0.1965 | 17400 | 3.1378 | - | - | - | - |
| 0.1976 | 17500 | 3.0607 | - | - | - | - |
| 0.1987 | 17600 | 2.8302 | - | - | - | - |
| 0.1999 | 17700 | 2.8098 | - | - | - | - |
| 0.2010 | 17800 | 3.4055 | - | - | - | - |
| 0.2021 | 17900 | 2.7756 | - | - | - | - |
| 0.2033 | 18000 | 3.0922 | 0.2955 | 4.2613 | 4.2613 | 0.4060 |
| 0.2044 | 18100 | 3.161 | - | - | - | - |
| 0.2055 | 18200 | 3.3236 | - | - | - | - |
| 0.2066 | 18300 | 2.6951 | - | - | - | - |
| 0.2078 | 18400 | 2.9456 | - | - | - | - |
| 0.2089 | 18500 | 2.7356 | - | - | - | - |
| 0.2100 | 18600 | 3.0398 | - | - | - | - |
| 0.2112 | 18700 | 2.9493 | - | - | - | - |
| 0.2123 | 18800 | 2.9966 | - | - | - | - |
| 0.2134 | 18900 | 3.3613 | - | - | - | - |
| 0.2146 | 19000 | 2.9626 | 0.2534 | 4.2668 | 4.2668 | 0.4097 |
| 0.2157 | 19100 | 3.0809 | - | - | - | - |
| 0.2168 | 19200 | 2.9583 | - | - | - | - |
| 0.2179 | 19300 | 2.9046 | - | - | - | - |
| 0.2191 | 19400 | 3.4546 | - | - | - | - |
| 0.2202 | 19500 | 3.2281 | - | - | - | - |
| 0.2213 | 19600 | 2.8041 | - | - | - | - |
| 0.2225 | 19700 | 2.7885 | - | - | - | - |
| 0.2236 | 19800 | 2.9419 | - | - | - | - |
| 0.2247 | 19900 | 2.9497 | - | - | - | - |
| 0.2258 | 20000 | 2.8604 | 0.2315 | 4.2608 | 4.2608 | 0.4136 |
| 0.2270 | 20100 | 2.897 | - | - | - | - |
| 0.2281 | 20200 | 3.0587 | - | - | - | - |
| 0.2292 | 20300 | 2.9539 | - | - | - | - |
| 0.2304 | 20400 | 3.0268 | - | - | - | - |
| 0.2315 | 20500 | 2.5965 | - | - | - | - |
| 0.2326 | 20600 | 2.5413 | - | - | - | - |
| 0.2337 | 20700 | 2.975 | - | - | - | - |
| 0.2349 | 20800 | 2.8803 | - | - | - | - |
| 0.2360 | 20900 | 2.8471 | - | - | - | - |
| 0.2371 | 21000 | 2.8503 | 0.2041 | 4.2626 | 4.2626 | 0.4157 |
| 0.2383 | 21100 | 3.0019 | - | - | - | - |
| 0.2394 | 21200 | 2.8871 | - | - | - | - |
| 0.2405 | 21300 | 2.8686 | - | - | - | - |
| 0.2417 | 21400 | 3.0021 | - | - | - | - |
| 0.2428 | 21500 | 2.9747 | - | - | - | - |
| 0.2439 | 21600 | 2.8709 | - | - | - | - |
| 0.2450 | 21700 | 3.0914 | - | - | - | - |
| 0.2462 | 21800 | 3.2664 | - | - | - | - |
| 0.2473 | 21900 | 2.7196 | - | - | - | - |
| 0.2484 | 22000 | 3.1535 | 0.2467 | 4.2663 | 4.2663 | 0.4176 |
| 0.2496 | 22100 | 2.8622 | - | - | - | - |
| 0.2507 | 22200 | 2.9969 | - | - | - | - |
| 0.2518 | 22300 | 2.53 | - | - | - | - |
| 0.2529 | 22400 | 2.4632 | - | - | - | - |
| 0.2541 | 22500 | 3.1082 | - | - | - | - |
| 0.2552 | 22600 | 2.5799 | - | - | - | - |
| 0.2563 | 22700 | 2.8729 | - | - | - | - |
| 0.2575 | 22800 | 2.8414 | - | - | - | - |
| 0.2586 | 22900 | 2.8917 | - | - | - | - |
| 0.2597 | 23000 | 2.6811 | 0.2159 | 4.2583 | 4.2583 | 0.4209 |
| 0.2608 | 23100 | 3.0415 | - | - | - | - |
| 0.2620 | 23200 | 2.8393 | - | - | - | - |
| 0.2631 | 23300 | 3.2675 | - | - | - | - |
| 0.2642 | 23400 | 2.8109 | - | - | - | - |
| 0.2654 | 23500 | 3.2762 | - | - | - | - |
| 0.2665 | 23600 | 3.0291 | - | - | - | - |
| 0.2676 | 23700 | 3.0371 | - | - | - | - |
| 0.2688 | 23800 | 2.5999 | - | - | - | - |
| 0.2699 | 23900 | 3.1188 | - | - | - | - |
| 0.2710 | 24000 | 2.548 | 0.2729 | 4.2453 | 4.2453 | 0.4242 |
| 0.2721 | 24100 | 2.8282 | - | - | - | - |
| 0.2733 | 24200 | 2.872 | - | - | - | - |
| 0.2744 | 24300 | 2.6728 | - | - | - | - |
| 0.2755 | 24400 | 3.229 | - | - | - | - |
| 0.2767 | 24500 | 2.6548 | - | - | - | - |
| 0.2778 | 24600 | 2.9694 | - | - | - | - |
| 0.2789 | 24700 | 2.6256 | - | - | - | - |
| 0.2800 | 24800 | 3.0095 | - | - | - | - |
| 0.2812 | 24900 | 3.2991 | - | - | - | - |
| 0.2823 | 25000 | 2.7506 | 0.2124 | 4.2584 | 4.2584 | 0.4249 |
| 0.2834 | 25100 | 2.7212 | - | - | - | - |
| 0.2846 | 25200 | 3.1904 | - | - | - | - |
| 0.2857 | 25300 | 2.9579 | - | - | - | - |
| 0.2868 | 25400 | 3.0365 | - | - | - | - |
| 0.2880 | 25500 | 3.053 | - | - | - | - |
| 0.2891 | 25600 | 2.9033 | - | - | - | - |
| 0.2902 | 25700 | 2.6707 | - | - | - | - |
| 0.2913 | 25800 | 2.8541 | - | - | - | - |
| 0.2925 | 25900 | 3.047 | - | - | - | - |
| 0.2936 | 26000 | 2.5607 | 0.2063 | 4.2468 | 4.2468 | 0.4281 |
| 0.2947 | 26100 | 2.9208 | - | - | - | - |
| 0.2959 | 26200 | 2.8091 | - | - | - | - |
| 0.2970 | 26300 | 3.5143 | - | - | - | - |
| 0.2981 | 26400 | 2.5564 | - | - | - | - |
| 0.2992 | 26500 | 2.8665 | - | - | - | - |
| 0.3004 | 26600 | 2.5691 | - | - | - | - |
| 0.3015 | 26700 | 2.5526 | - | - | - | - |
| 0.3026 | 26800 | 2.7084 | - | - | - | - |
| 0.3038 | 26900 | 3.1267 | - | - | - | - |
| 0.3049 | 27000 | 2.4162 | 0.1569 | 4.2439 | 4.2439 | 0.4296 |
| 0.3060 | 27100 | 2.5168 | - | - | - | - |
| 0.3071 | 27200 | 3.0819 | - | - | - | - |
| 0.3083 | 27300 | 3.0642 | - | - | - | - |
| 0.3094 | 27400 | 3.2743 | - | - | - | - |
| 0.3105 | 27500 | 2.7929 | - | - | - | - |
| 0.3117 | 27600 | 2.8661 | - | - | - | - |
| 0.3128 | 27700 | 2.9403 | - | - | - | - |
| 0.3139 | 27800 | 2.8967 | - | - | - | - |
| 0.3151 | 27900 | 2.8949 | - | - | - | - |
| 0.3162 | 28000 | 2.9087 | 0.1647 | 4.2450 | 4.2450 | 0.4316 |
| 0.3173 | 28100 | 2.7417 | - | - | - | - |
| 0.3184 | 28200 | 3.0461 | - | - | - | - |
| 0.3196 | 28300 | 2.747 | - | - | - | - |
| 0.3207 | 28400 | 2.8057 | - | - | - | - |
| 0.3218 | 28500 | 3.0305 | - | - | - | - |
| 0.3230 | 28600 | 3.1517 | - | - | - | - |
| 0.3241 | 28700 | 2.9611 | - | - | - | - |
| 0.3252 | 28800 | 2.7057 | - | - | - | - |
| 0.3263 | 28900 | 2.5268 | - | - | - | - |
| 0.3275 | 29000 | 2.9869 | 0.2016 | 4.2455 | 4.2455 | 0.4334 |
| 0.3286 | 29100 | 3.2638 | - | - | - | - |
| 0.3297 | 29200 | 2.8948 | - | - | - | - |
| 0.3309 | 29300 | 3.0118 | - | - | - | - |
| 0.3320 | 29400 | 2.8534 | - | - | - | - |
| 0.3331 | 29500 | 3.1632 | - | - | - | - |
| 0.3342 | 29600 | 2.9116 | - | - | - | - |
| 0.3354 | 29700 | 2.5557 | - | - | - | - |
| 0.3365 | 29800 | 2.7745 | - | - | - | - |
| 0.3376 | 29900 | 2.5932 | - | - | - | - |
| 0.3388 | 30000 | 2.7092 | 0.1921 | 4.2458 | 4.2458 | 0.4347 |
| 0.3399 | 30100 | 3.2183 | - | - | - | - |
| 0.3410 | 30200 | 2.857 | - | - | - | - |
| 0.3422 | 30300 | 2.9008 | - | - | - | - |
| 0.3433 | 30400 | 2.8235 | - | - | - | - |
| 0.3444 | 30500 | 2.6956 | - | - | - | - |
| 0.3455 | 30600 | 2.9611 | - | - | - | - |
| 0.3467 | 30700 | 3.1242 | - | - | - | - |
| 0.3478 | 30800 | 3.1466 | - | - | - | - |
| 0.3489 | 30900 | 2.8542 | - | - | - | - |
| 0.3501 | 31000 | 2.8809 | - | - | - | - |
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.0
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "Supabase/gte-small", "datasets": ["Turkish-NLI/legal_nli_TR_V1", "emrecan/all-nli-tr", "x1saint/sts", "figenfikri/stsb_tr"], "language": ["tr"], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1416892", "loss:SoftmaxLoss", "loss:CoSENTLoss"], "widget": [{"source_sentence": "answers-forums", "sentences": ["main-forums", "2015", "\"Yaklaşan kozmik dinlenme çerçevesine göre ... 371 km / s hızla Aslan takımyıldızına doğru\" hareket ediyoruz.", "0117", "Başka bir nesneye göre olmayan bir 'hareketsiz' yoktur.", "0.80"]}, {"source_sentence": "\tDavacı vekili dava dilekçelerinde özetle; müvekkili tarafından taraflar arasındaki ticari ilişkiden kaynaklanan faturalar nedeniyle davalının müvekkiline toplam 46.991,00 TL borcu bulunduğunu, borcun ödenmemesi üzerine davalı aleyhine Ankara ... Müdürlüğünün 2019/15322 sayılı takip dosyası ile icra takibi başlatıldığını, davalının kötü niyetli olarak takibe itirazı üzerine takibin durduğunu, itirazın haklı nedenlere dayanmadığını belirterek itirazın iptaline, takibin devamına, %20'den aşağı olmamak üzere icra inkar tazminatına hükmedilmesine karar verilmesini talep ve dava etmiştir. ", "sentences": ["Davacı vekili dava dilekçesinde özetle; davalı şirket ile müvekkili arasında ... E Blok adresindeki ofisin alçıpan, asma tavan, bölme duvar, giydirme duvarı ve akustik alçıpan montajlarının eksiksiz ve tam olarak tamamlanması hususunda 02/05/2015 tarihinde montaj sözleşmesi imzalandığını, müvekkilinin sözleşme hükümlerini yerine getirilerek montaj işlemlerini tamamladığını, ... E Blok adresindeki ofisin alçıpan asma tavana, bölme duvar, giydirme duvar ve akustik alçıpan montajlarının karşılığı olarak ödenmesi gereken hakkediş bedeli olan 20.000,00 TL'nin ödenmediğini, müvekkiline yaptığı işin karşılığı olarak ödemenin eksik yapıldığından davalı aleyhine ... Müdürlüğünün ... sayılı dosyası ile takip başlatıldığını, davalının icra takibine itirazı üzerine takibin durduğunu, itirazın haklı nedenlere dayanmadığını belirterek davalının borca ve yetkiye itirazının iptaline, takibin devamına, %20'en az olmamak üzere icra inkar tazminatına hükmedilmesine karar verilmesini talep ve dava etmiştir. ", "Davacı vekili dava dilekçesinde özetle, müvekkili şirketin eser sözleşmesi kapsamında keşidecisi ... Dekorasyon ve Elektrik Ltd.Şti., 8012249 çek nolu 900.000,00 TL meblağlı çek verildiğini, çekin müvekkili firma uhdesinde iken kaybolduğunu belirterek bu çek üzerine ödeme yasağı konulmasına ve dava konusu çek hakkında zayii belgesi verilmesini talep ve dava etmiştir.", "Davacı vekili dava dilekçesinde özetle; Davacı şirket merkezine üçüncü şahıs tarafından usulsüz haciz uygulandığını, davacı şirket adresine gelen ... A.Ş. firması yetkilileri büroya geldikten sonra haciz mahallini kendileri çilingir vasıtasıyla açtığını, bu uygulama sırasında hiçbir yetkili ya da üçüncü şahıs yokken büro eşyalarının haczedildiğini ve muhafaza altına alındığını, bir kısım kıymetli evrak, defter ve kayıtların da zayi edildiğini, işbu nedenle ticari işletmede bulunan belgelerin zayi olduğunu, Türk Ticaret Kanunu’nun 82/7. Ve sair maddeleri çerçevesinde davacı tarafa zayi belgesi verilmesini talep ve dava etmiştir. "]}, {"source_sentence": "Davacı vekili dava dilekçesinde özetle; .... Tİc. Ltd. Şirketi tarafından keşide edilen 08.05.2020 tarih, 25.000TL bedelli ve ... ... Cad. Şubesine ait ... numaralı çek, .... Tİc. Ltd. Şirketi tarafından keşide edilen 13.06.2020 tarihli ve 65.000TL bedelli, ... ... Cad. Şubesine ait ... numaralı çekleri müvekkili alışveriş esnasında kredi kartını kullanması sebebiyle cüzdanın yanında olduğunu hatırladığını, eve geldiğinde cüzdanını bulamadığını, cüzdanında kredi kartları ve bir miktar para ile çekleri kaybettiğini belirterek çeklerin iptaline karar verilmesini talep ile dava etmiştir.", "sentences": [" Borçlu, devri öğrendiği sırada devredene karşı\nsahip olduğu savunmaları, devralana karşı da ileri sürebilir.\nBorçlu, devri öğrendiği anda muaccel olmayan alacağını, devredilen\nalacaktan önce veya onunla aynı anda muaccel olması koşuluyla borcu ile takas\nedebilir. ", "Davacı vekili dava dilekçesinde özetle; Müvekkilinin hamili olduğu ------ Şubesi'ne ait, keşidecisinin ---------. olduğu,---- seri/çek no'lu ------ tutarındaki----------- keşide tarihli çek zayi olduğunu, müvekkilin telafisi güç ve hatta imkansız zararlara uğramaması için ihtiyati tedbir kararı verilerek çekin ödenmemesinin durdurulmasına ve davaya konu çekin iptaline karar verilmesini talep ve dava etmiştir. ", " Davacı vekili dava dilekçesinde özetle; dava dışı sigortalı ... A.Ş.'ye ait, müvekkili sigorta şirketinden Kasko poliçesi ile sigortalı ... plakalı aracın, ... tarafından oto yıkama hizmeti almak üzere davalıya ait işyerine 09.12.2013 günü bırakıldığını ve işyerinden çalındığını, iş bu olay üzerine araç kullanıcısı tarafından Göktürk Polis Merkezine başvurulduğunu, aracın bulunamaması üzerine müvekkili sigorta şirketi tarafından aracın rayiç değeri olarak tespit edilen 200.000,00 TL'nin sigortalıya 04.06.2014 tarihinde ödendiğini ve TTK 1472. maddesinde açıklanan halefiyet kuralı gereği sigortalısına yaptığı ödemeyi davalı taraftan talep ettiğini ancak davalı taraftan herhangi bir cevap alamadığını, alacağın tahsili için İstanbul ...icra Müdürlüğünün ... E. Sayılı dosyası ile girişilen takibe davalı borçlunun borca itirazı nedeniyle itiraz edilen alacak miktarı için itirazın iptaline, takibin devamına, davalının %20'den aşağı olmamak kaydıyla icra inkar tazminatına mahkum edilmesine karar verilmesini dava ve talep etmiştir. Davalı vekili cevap dilekçesinde özetle; Olayın meydana geldiği yerin müvekkili şirkete ait oto yıkama faaliyetinin yapıldığı iş yeri olduğunu, olayın yıkama için bırakılan ... plakalı aracın gasp edilerek çalınması ile meydana geldiğini, olayın faillerinin belli olduğunu takipte yer alan diğer borçlular olduğunu, İstanbul 14.Ağır Ceza Mahkemesi 2015/201 E.sayılı dosyası ile dava açıldığını, olayın gerçekleşme şekli itibari ile müvekkilinin ve işyerinde yıkama faaliyetinde bulunan çalışanların kusur ve ihmali söz konusu olmadığını, 3. kişilerce gasp edilmek suretiyle çalınan araç için ödenen bedelin taraflarından rücuen talep edilmesinin mümkün olmadığını belirterek davanın reddine karar verilmesini talep etmiştir. Mahkemece yapılan yargılama sonucunda, \"Davanın kısmen kabulü, kısmen reddi ile, Davalının İstanbul ... İcra Müdürlüğünün ... Esas sayılı takibe itirazının kısmen iptaline, takibin kaldığı yerden asıl alacak 200.000,00 TL ve faiz üzerinden devamına, işlemiş faiz talebi bakımından ispat olunamayan 13.857,53 TL için davanın reddine, şartları oluşmayan icra-inkar tazminat talebin reddine,\" karar verilmiştir. Bu karara karşı davacı vekili ve davalı vekili istinaf başvurusunda bulunmuştur. Davacı vekili istinaf başvuru dilekçesinde özetle; yerel mahkeme tarafından takipten önce davalının temerrüde düşürüldüğünün ispat edilememesi nedeniyle işlemiş faiz talebi bakımından davanın reddine karar verilmesinin somut olayın niteliğine ve hukuka açıkça aykırılık teşkil ettiğini belirterek istinaf yasa yoluna başvurmuştur. Davalı vekili istinaf başvuru dilekçesinde özetle; verilen karar usul ve yasaya aykırı olduğunu, belirtmiş oldukları gerekçeler ve kararın esasına etki eden taleplerinin dikkate alınmadığını, açık yasal düzenlemelerin hiçbir şekilde irdelenmediğini, davalı tarafın davaya konu hırsızlık suçunun işlenmesinde herhangi bir ihmal ve kusurunun bulunmadığını, müvekkillerinin ve çalışanlarının konu suç olayında bir kusuru bulunmadıklarını, sanıkların çalışanları darp etmek suretiyle aracı çaldıklarının açık olduğunu, her ne kadar TMK 74. madde gereğine dayanarak yerel mahkemece hüküm kurulmuş ise de hukuk mahkemeleri maddi vakıalarla bağlı olsa da sanıkların mahkumiyet ve beraat kararlarıyla bağlı olmadığını, bu nedenle salt sanıkların kendilerini kurtarmak amacıyla verdikleri soyut beyanlarına itibar edilerek karar verilmesinin hukuka aykırı olduğunu, garaj ve otopark işletenin motorlu taşıtını bırakanın taşıtına ve eklentilerine gelen zarardan sorumluluğu TBK’nda kusursuz sorumluluk olarak düzenlendiğini, bununla birlikte, bazı hallerde bu sorumluluğun sınırlandırılması bazı hallerde ise tamamen kaldırılması yönünde hükümlere de yer verildiğini, TBK'nın 579. maddesi kusursuz sorumluluğu miktar itibariyle sınırlandırdığını, kabul anlamına gelmemek kaydıyla sorumlu tutulacak olsa dahi müvekkilinin kusursuz olduğundan bahisle üst sınırdan sorumlu tutulması gerektiğini belirterek istinaf yasa yoluna başvurmuştur. Dava kasko sözleşmesinden kaynaklanan tazminat istemine ilişkin olup istinaf açısından uyuşmazlık konusu HMK'nın 355. maddesine göre kamu düzeni ve istinaf nedenleri ile sınırlı olmak üzere İlk Derece Mahkemesince verilen kararın usul, yasa ve dosya içeriğine uygun olup olmadığıdır. Davacıya kasko sigortalı bulunan aracın davalının işlettiği oto yıkama işyerine bırakılması ile sigortalı araç sürücüsü ile oto yıkama işletmecisi arasında 6098 sayılı TBK'nun 561 vd. maddelerinde düzenlenmiş olan vedia (saklama) sözleşmesi ilişkisi kurulmuştur. TBK'nun 561 vd. maddelerinde düzenlenen vedia akdi gereği, menkul bir malı saklamak üzere alan malı aldığı şekliyle teslim etmekle yükümlüdür, kanunun kendine yüklediği yükümlülüğe uymayan saklayan bu nedenle oluşacak zararlardan sorumludur. TBK'nın 579 maddesi uyarınca da sorumluluğu vardır. Davacıya kasko sigortalı aracın davalıya ait oto yıkamada bulunduğu sırada çalındığı hususları taraflar arasında ihtilaf konusu değildir. Taraflar arasında ihtilaflı olan husus, sigortalı aracın çalınması olayında davalının kusurunun bulunup bulunmadığı noktasındadır. Bu durumda mahkemece, davaya konu rücuen tazminat isteminin dayanağı olan, davacının sigortaladığı aracın çalınması olayı ile ilgili olarak İstanbul 14. ACM 2015/201 E. 2017/46 karar sayılı kararı ile\"Mülkiyeti ... AŞ isimli tüzel kişiliğe ait olup, ... AŞ adlı başka bir şirkete kiralanan ve suç tarihi olan 09/12/2013 günü şirket çalışanı özel şoför ...'ın kullanımında olan ... plaka sayılı 2012 model ... marka kiralık otomobilin ... adlı şoför tarafından olay tarihinde gün içerisinde Eyüp / Göktürk Polis Merkezi Amirliği mıntıkasında yer alan, mağdur tanık ve diğer tanıkların çalışanı olduğu Selanik Bulvarı üzerindeki ... adlı işyerine yıkatmak için bırakıldığı, evvelinde de birlikte çok sayıda otomobil hırsızlığı gerçekleştiren ve deyim yerindeyse bizatihi ...'in beyanına nazaran profesyonel oto hırsızları olan sanıklar ... ve ...'ın yanlarında ... isimli açık kimlik bilgileri tam olarak tespit edilemeyen 3.bir şahıs olduğu halde ... marka başka bir araç ile araç yıkatma bahanesi ile oto yıkamacıya geldikleri, ... isimli kimliği meçhul failin araç içerisinden inmediği, yıkamacı çalışanları ... ve ...'ın başka işlerle ilgilenmesi sırasında bu boşluktan faydalanan sanık ...'in anahtarlık yerinde asılı halde bulunan suça konu aracın kontak anahtarını fark ettirmeden bulunduğu yerden aldığı, diğer sanık ...'ın ise direksiyon tarafına geçtiği, aracın kilitli kapılarını açıp çalıştırıp hareket ettirerek birlikte hızla olay yerinden ayrıldıkları, daha sonra çaldıkları aracı 12.500 - 15.000-TL bir bedel ile ... isimli çalıntı araç parçaları satın alan bir şahsa sattıkları, olayın oluş ve meydana geliş biçiminin bu şekilde cereyan ettiği vicdani sonuç ve kanısına varılmakla...\" gerekçesi ile dava dışı üçüncü kişiler ... ve ... hakkında hırsızlık suçundan cezalandırılmalarına karar verildiği kararın kesinleştiği görülmüştür. Yargıtay’ın yerleşik uygulamasına ve öğretideki genel kabule göre, maddi olgunun tespitine ilişkin ceza mahkemesi kararı hukuk hakimini bağlar. Ceza mahkemesinde bir maddi olayın varlığı ya da yokluğu konusundaki kesinleşmiş kabule rağmen, aynı konunun hukuk mahkemesinde yeniden tartışılması olanaklı değildir (HGK'nun 11.10.1989 gün ve E:1989/11-373, K:472, HGK'nun 27.04.2011 gün ve E:2011/17-50, K:2011/231 sayılı ilamları). Türk Borçlar Kanunu'nun 74. maddesi gereğince, hukuk hakimi ceza hakiminin tespit ettiği kusurla bağlı değil ise de Ceza Mahkemesince tespit edilen fiilin hukuka aykırılığı ve illiyet bağını saptayan maddi vakalar yönünden Ceza Mahkemesi kararı ile bağlıdır. Bu kapsamda ceza mahkemesince maddi vaka değerlendirilirken olayın oluşunun belirtildiği, bu kararın kesinleşmiş olması durumunda bu maddi olgu artık hukuk mahkemesi için de bağlayıcı niteliktedir. Bu hususa değinen istinaf talebi yerinde değildir. Ancak ceza dosyası kapsamında davaya konu olay kapsamında davalının kusuru bulunup bulunmadığı yönünden bir değerlendirme yapılmadığı görülmüştür. Bu nedenle mahkemece İstanbul 14. ACM 2015/201 E. 2017/46 karar sayılı dosya aslının celbi sağlanarak, olay yeri kayıtların, iş yerinin çalışma şekli, müşteri araçlarının anahtarlarının tutulduğu yer ve bu yerin nasıl korunduğu, anahtarların nasıl muhafaza edildiği tespit edilerek davalı oto yıkama işletmecisinin kusuru tespit edilmeden ve TBK'nın 579/2 maddesinde belirtilen şartlar değerlendirilmeden karar verilmesi eksik incelemeye dayalı olmuştur. Trafik kazaları, nitelikleri itibariyle haksız fiillerdendir. Haksız fiillerde temerrüt tarihi, haksız fiilin meydana geldiği tarih olup, zarar sorumlusunun ayrıca ihbar ve ihtar edilmesine gerek yoktur. Sigorta ettirenin dava hakkı tazmin ettiği bedel nispetinde sigortacıya intikal eder. Ödeme tarihi aynı zamanda 3. şahsa rücu edebilme tarihidir. Bu nedenle işleten ve sürücünün faizden sorumluluğunun başlangıcının halefiyet başlangıcı olan ödeme tarihi olarak kabulü gerekir. Bu hale göre sigorta şirketinin sigortalısına ödeme tarihinden takip tarihine kadar işlemiş faizin hesaplanarak hüküm altına alınması gerekirken yazılı şekilde karar verilmiş olması isabetli olmamıştır (Yargıtay 17. Hukuk Dairesinin 2013/21198 E. ve 2014/1568 K.sayılı kararı). Açıklanan nedenlerle, davacı vekili ile davalı vekilinin istinaf başvurusunun kabulü ile HMK'nın 353/1-a/6. maddesi uyarınca İlk Derece Mahkemesi kararının kaldırılmasına, dosyanın yukarıda belirtilen şekilde işlem yapılmak üzere mahkemesine gönderilmesine karar verilmiştir."]}, {"source_sentence": "Davacı vekili dava dilekçesinde özetle; davalı- borçlu ile müvekkili arasında, davalı- borçlu tarafından işletilen \"...\" isimli işletmesinde müvekkil şirkete ait mamullerin satışı ile ilgili olarak 28/01/2019 tarihli Satış Noktası Sözleşmesinin imzalandığını, müvekkili olduğunu şirketin sözleşmede kararlaştırılan bütün edimlerini eksiksiz olarak yerine getirdiğini, kendisinden talep edilen ürün teslimlerini zamanında yaptığını, ürünlerin müşterilerine sağlıklı bir şekilde sunulabilmesi için soğutucuların teslim edildiğini, sözleşmede kararlaştırılan iskontoların uyguladığını, yine sözleşmenin Ek Özel Şartının 5. Maddesi gereğince yükümlendiği nakit yardımı- kdv dahil 23.600,00-TL 'yi davalıya verdiğini, fakat davalı- borçlunun şirket sözleşmesinde kararlaştırılan yükümlülüklerini yerine getirmediğini cari hesap borcunu vadesinde ödemediğini, sözleşmenin özen borcundan belirtilen aylık olarak en az 84 kasa koli ürün kotasını doldurmadığını, sözleşmede kararlaştırılan 2000 kasa koli ürün kotasını doldurmadan ürün alımını kestiğini, davalı- borçlunun sözleşmeye aykırı davranışı nedeniyle nakit yardımının iadesi ve cari hesap borcu için İzmir ... İcra müdürlüğünün .../... esas sayılı dosyasıyla ilamsız icra takibini yaptığını davalı- borçlunun söz konusu takibe itiraz etmesi üzerine takibin durdurulmasına karar verildiğini, yukarıda açıklanan nedenler ile davalı- borçlunun haksız ve kötüniyetli olarak takibi sürüncemede bırakmak kastıyla borca ve tüm ferilerine itiraz ettiğini ve takibin durdurulmasına neden olduğunu, bu nedenle davalı- borçlular aleyhine %20'den az olmamak üzere icra inkar tazminatına hükmedilmesini, yargılama giderleri ile vekalet ücretini davalı tarafa yükletilmesini talep etmiştir. ", "sentences": ["Pay sahiplerinin çağrı\nveya gündeme madde konulmasına ilişkin istemleri yönetim kurulu tarafından reddedildiği\nveya isteme yedi iş günü içinde olumlu cevap verilmediği takdirde, aynı pay sahiplerinin\nbaşvurusu üzerine, genel kurulun toplantıya çağrılmasına şirket merkezinin bulunduğu\nyerdeki asliye ticaret mahkemesi karar verebilir. Mahkeme toplantıya gerek görürse,\ngündemi düzenlemek ve Kanun hükümleri uyarınca çağrıyı yapmak üzere bir kayyım atar.\nKararında, kayyımın, görevlerini ve toplantı için gerekli belgeleri hazırlamaya\nilişkin yetkilerini gösterir. Zorunluluk olmadıkça mahkeme dosya üzerinde inceleme\nyaparak karar verir. Karar kesindir.", " Alıcı, devraldığı satılanın durumunu işlerin olağan\nakışına göre imkân bulunur bulunmaz gözden geçirmek ve satılanda satıcının\nsorumluluğunu gerektiren bir ayıp görürse, bunu uygun bir süre içinde ona\nbildirmek zorundadır.\nAlıcı gözden geçirmeyi ve bildirimde\nbulunmayı ihmal ederse, satılanı kabul etmiş sayılır. Ancak, satılanda olağan\nbir gözden geçirmeyle ortaya çıkarılamayacak bir ayıp bulunması hâlinde, bu\nhüküm uygulanmaz. Bu tür bir ayıbın bulunduğu sonradan anlaşılırsa, hemen\nsatıcıya bildirilmelidir; bildirilmezse satılan bu ayıpla birlikte kabul edilmiş\nsayılır.", "Davacı vekili dava dilekçesinde özetle; Davacı vekilinin 15.01.2021 harç ikmal tarihli dava dilekçesinde özetle; müvekkil aleyhine ... İcra Müdürlüğünün 11.01.2021 tarih ... E Sayılı dosyası üzerinden başlatılan haksız takibe konu çeke ilişkin müvekkilin borçlu olmadığının tespitine, müvekkil aleyhine başlatılan haksız icra takibinin müvekkil şirketin yetkili hamil olması ve yetkisiz olan davalıya diğer borçlular bakımından ödeme yapılması durumunda müvekkil alacağını tahsil imkanı tehlikeye gireceğinden (... Kon. Tekstil Ltd Şti hariç) tüm borçlular bakımından durdurulması yönünden ihtiyati tedbir kararı verilmesi , takibe konu çekin müvekkil şirkete iade edilmesi talebinde bulunma gereği hasıl olduğu, müvekkil şirketin faaliyet gösterdiği ... İş ... Sn Tic. J Blok No 12-13 .../İstanbul adresinde henüz kimliği bilinmeyen kişiler tarafından Hırsızlık hadisesi meydana geldiği, hırsızlık olayıyla hamili lehtarı müvekkil şirket olan çekler çalındığı, ... Polis Merkezi Amirlğine şüpheliler şikayet edildiği, olaya ilişkin ... C. Başsavcılığının ... Soruşturma dosyası üzerinden devam edildiği, ayrıca ... 1 ATM ... E Sayılı dosyasından Çek zayi nedeniyle çek iptali davası açıldığı, davaya konu çeklere ilişkin toplam 52.515.76 TL teminat yatırıldığı, dosyaya konu çeklere ilişkin ödemeden men yasağı kararı verildiği, karar ilgili bankalara müzekkere ile bildirildiği, ... tarafından düzenlenen ... Bankası /... İstanbul Şb. ... Iban nolu hesaba ait 31.12.2020 keşide tarihli ... nolu 5.000 TL bedelli çek de nu davaya konu çeklerden biri olduğu, çeke ilişkin ödeme yasağı konulduğu, İcra takibine dayanak olan çek üzerinde de belirtildiği, konulan kayıtta “çekin karşılığı yoktur TC ... 1 ATM 11.09.2020 tarih ... E Sayılı yasağı gereğince çek hakkında her hangi bir işlem yapılmayarak iade edilmiştir” yazılı olduğu, müvekkilin hamili/lehtarı olduğu çekler ticari ilişkisi olduğu diğer firmalara verilmek üzere cirolu ve imzalı bir şekilde kasasında muhafaza edilmekte iken kimliği belirsiz kişilerce çalındığı, dolayısıyla çek üzerindeki yer alan imza müvekkile ait olduğundan icra Hukuk Mahkemesine başvurulmadığı, Zira İcra Hukuk Mahkemesi dar yetkili olup sadece şekli inceleme yapma yetkisi mevcut olduğundan davalı aleyhine huzurdaki dava ikame edildiği, hırsızlık suçuna ilişkin çeklerden bazıları bankalar ile Faktoring kuruluşlarına ibraz edildiğinde bankalar ve faktöring kuruluşlarınca bilgi verildiği, çek iptaline konu çeklerin henüz davalıya geçmediği bir zaman diliminde ciro zincirinde davalının üstünde yer alan ... Kon Tekstil Ltd Şti’ce bankalara ve faktöring firmalarına ibraz edilmeye çalışıldığı, bunun öğrenilmesi ile ... C. Başsavcılığına ... Sayılı dosyası talepte bulunulduğu, 31.11.2020 tarihinde Savcılık şirketin eski ortağı ... dinlenilmesi için müzekkere yazıldığı, ancak bu kişi henüz dinelemediği, müvekkil ile ... Kon Ltd Şti arasında her hangi bir ticari ilişki bulunmadığı, müvekkil davaya konu çeki ... Kon Ltd Şti’ne ciro edip vermediği, bu nedenle müvekkilden sonra sonra çek üzerindeki ciro silsilesi bozulduğu, davalı şirkette çek bakımından yetkili hamil sıfatına haiz olmadığı, müvekkil aleyhine başlatılan haksız icra takibi öncesinde çek iptali davasına teminat yatırılmış olması nedeniyle teminatsız olarak ve halihazırda icra takibine konu çekin iptaline ilişkin davanın derdest oluşu ile diğer borçlularca borcun ödenmesi ihtimaline de müvekkilin alacağını tahsil imkanının tehlike altına girmesi ihtimaline binaen ... Kon Ltd Şti hariç tüm borçlular bakımından durdurulması gereği hasıl olduğu, ayrıca davalı tarafından başlatılan icra takibinde borçlu olan şirketlere yönelik ihtiyati haciz kararı talep edilmiş ve henüz şirketler aleyhine ihtiyati haciz kararı verilmemişse de müvekkil şirketin haksız ve mesnetsiz şekilde haciz tehdidi altında olduğu, Davalı tarafça ... ATM ... D.İş sayılı dosyasına henüz teminat yatırılmamış olup söz konusu teminatın yatırılması halinde davalıya iade edilmesine muvafakat edilmediği,TTK.792 m. Gereğince çeki kötü niyetli elde bulunduranın çek, geri vermekle yükümlü olduğu, arz ve izah edilen nedenlerle; müvekkilin çalıntı çeke dayalı yetkisiz hamil tarafından haksız yere başlatılan İcra takibi nedeniyle zarara uğramasını önlemek amacıyla ... 1. ATM ... E Sayılı dosyasına teminat yatırılmış olunması sebebiyle ... İcra Md ... E Sayılı dosyasından başlatılan takibin yargılama sonuna kadar teminatsız olarak takibin tedbiren durdurulmasına, aksi kanaate olunur ise; Uygun teminat karşılığında takibin tedbiren durdurulmasına, müvekkilin çekten kaynaklanan alacağının tahsil imkanının tehlike altına girmesi ihtimali kuvvetle muhtemel olması nedeniyle durdurma kararının ... Kon Ltd Şti hariç tüm borçlular adına verilmesini, TTK.792 gereğince müvekkilin yetkili hamil olduğu çekin iadesine, yargılama giderleri, vekalet ücretinin davalıya yüklenmesine, davalı aleyhine %20 tazminata hükmedilmesine karar verilmesi talep ve dava etmiştir."]}, {"source_sentence": "answers-forums", "sentences": ["1017", "main-forums", "1.80", "Pek çok çocuk, ödülle motive olmak yerine, kontrol altında olmaktan motive olur.", "Bir olasılık, ev işleri için ödül (ler) i belirleme amacını taşıyan bir aile toplantısı yapmaktır.", "2015"]}], "model-index": [{"name": "SentenceTransformer based on Supabase/gte-small", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.42703730702392106, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.434696021205193, "name": "Spearman Cosine"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | 46,767 |
tablane/distilbert-base-uncased.finetuned-emotion
|
tablane
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-27T19:21:18Z |
2024-01-27T19:31:11+00:00
| 3 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased.finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.924
name: Accuracy
- type: f1
value: 0.9240046085344084
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased.finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2149
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.814 | 1.0 | 250 | 0.3135 | 0.903 | 0.9013 |
| 0.2487 | 2.0 | 500 | 0.2149 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased.finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2149
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.814 | 1.0 | 250 | 0.3135 | 0.903 | 0.9013 |
| 0.2487 | 2.0 | 500 | 0.2149 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased.finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.924, "name": "Accuracy"}, {"type": "f1", "value": 0.9240046085344084, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,768 |
nbogdan/flant5-xl-2ex-paraphrasing-3epochs
|
nbogdan
| null |
[
"adapter-transformers",
"t5",
"adapterhub:self-explanations",
"dataset:self-explanations",
"region:us"
] | 2023-09-05T17:13:25Z |
2023-09-05T17:13:47+00:00
| 0 | 0 |
---
datasets:
- self-explanations
tags:
- adapter-transformers
- t5
- adapterhub:self-explanations
---
# Adapter `nbogdan/flant5-xl-2ex-paraphrasing-3epochs` for google/flan-t5-xl
An [adapter](https://adapterhub.ml) for the `google/flan-t5-xl` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-xl")
adapter_name = model.load_adapter("nbogdan/flant5-xl-2ex-paraphrasing-3epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
| null |
Non_BioNLP
|
# Adapter `nbogdan/flant5-xl-2ex-paraphrasing-3epochs` for google/flan-t5-xl
An [adapter](https://adapterhub.ml) for the `google/flan-t5-xl` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-xl")
adapter_name = model.load_adapter("nbogdan/flant5-xl-2ex-paraphrasing-3epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"datasets": ["self-explanations"], "tags": ["adapter-transformers", "t5", "adapterhub:self-explanations"]}
|
task
|
[
"PARAPHRASING"
] | 46,769 |
Gopal2002/setfit_zeon
|
Gopal2002
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"model-index",
"region:us"
] | 2024-01-02T11:35:28Z |
2024-01-16T06:58:07+00:00
| 4 | 0 |
---
base_model: BAAI/bge-small-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: <s_cord-v2><s_menu><s_nm> HINALCO INDUSTRIES LTB. HIRAKUR</s_nm><s_unitprice>
1344</s_unitprice><s_cnt> 1</s_cnt><s_price> 4,436</s_price><sep/><s_nm> ASTRICA
BRIOC</s_nm><s_unitprice> 12.082</s_unitprice><s_cnt> 1</s_cnt><s_discountprice>
12.027</s_discountprice><s_price> SUSPICY TEMPURA HIRAKUR</s_nm><s_unitprice>
12.027.00.0020</s_discountprice><s_price> PAK SUSHI HIRAKURURUR</s_nm><s_unitprice>
12.027.00.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price>
4,436</s_price><sep/><s_nm> SUSHI SALT CALLOCALI</s_nm><s_unitprice> 12.027.0020</s_unitprice><s_cnt>
1</s_cnt><s_discountprice> 1,003</s_discountprice><s_price> 1,00</s_price></s_menu><s_sub_total><s_subtotal_price>
3,003</s_subtotal_price><s_discount_price> 3,003<sep/> 0.00</s_discount_price></s_sub_total><s_total><s_total_price>
3,00</s_total_price><s_cashprice> 3,00</s_cashprice><s_changeprice> 1,00</s_changeprice></s_total>
- text: <s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LIIMITED</s_nm><s_discountprice>
-*OQU<sep/><s_nm> PYCHE DESIGNCE PURCHASE ORDER</s_nm><sep/><s_nm> WHOCO SUSHINGGA
CHOCO SUSHINGGA CHOCO SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG
SUSHINGGANG SUSHINGGANG SUSHINGGANGHONG SUSHINGGANG SUSHINGGANGHONG SUSHINGGANGHONG
SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG
SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONGHONG
POWER</s_nm><s_price> SUSHINGGANGHONGHONGHONG POWER</s_nm><s_price> SUSHINGGANGHONGHONG
POWER</s_nm><s_price> SUSHINGGANGGANGGANGGANGGANGGANGGANGGANGGA SUSHINGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGA
- text: <s_cord-v2><s_menu><s_nm> TAX INVOLICE</s_nm><s_unitprice> 2310</s_unitprice><s_cnt>
2</s_cnt><s_price> A</s_price><sep/><s_nm> BLOOM Combustion India Putu</s_nm><s_unitprice>
150,000</s_unitprice><s_cnt> 2</s_cnt><s_discountprice> 1,040<sep/><s_nm> A.C.B.C.B.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C
- text: <s_cord-v2><s_menu><s_nm> HINA DLCO INDUSTRIES LIMITED</s_nm><s_price> SUSHIZE</s_price><sep/><s_nm>
PONE CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO
CHOCOCO CHOCO CHOCOCO CHOCO CHOCOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
- text: '<s_cord-v2><s_menu><s_nm> HNDALCO INDUSTRIES LID. HIRAKUND POWER</s_nm><s_num>
ASH WITCH BRIOGE</s_nm><s_num> HPOM: 01-Hou DATE: 0001-social<sep/><s_nm> SAH</s_nm><s_num>
DAGE NUMBER : 1</s_etc><sep/><s_nm> SINO TAKING ODAYS OATE INTINE TAKE CROSS Wc
OLOAD SLOOPPERATOR</s_nm><s_num> JGGC</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JERCEA</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER<s_num>
JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num>
JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_num><s_price> 0.00</s_price><sep/><s_nm> ORANGA</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_total>'
inference: true
model-index:
- name: SetFit with BAAI/bge-small-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 1.0
name: Accuracy
---
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'<s_cord-v2><s_menu><s_nm> M/S. JOSEPH SUNA</s_nm><s_num> DATABI<sep/><s_nm> Tankipaa.MIRAKU SAMBALPUR.768Q16</s_nm><s_num> DMB Nb N.9861345883<sep/><s_nm> Deals Which :Altypes of ChickenGNALOR REGUMING)</s_nm><s_num> DISALI<sep/><s_nm> WINNALIZED</s_nm><s_num> CHOCO SUSPECIALIZE</s_nm><s_num> TWICENCHE<sep/><s_nm> SHRANGKANG POWER</s_nm><s_num> LATHOCO TWICENKO:</s_nm><s_num> JERYUNG CHOCO TWICENKO:</s_nm><s_num> JERYUNG HZYGANGKAN<sep/><s_nm> DIFF-SAWALAPUKU SAMBALPUR.76801GHOLIZEG DATE</s_nm><s_num> DATE</s_nm><s_num> DATE:</s_nm><s_num> 01/01/01/01/01/01/01/01/01/01/01/01/01/01/01/01/01<sep/><s_nm> PAN No.:</s_nm><s_num> PPODATE</s_nm><s_num> 01/01/01/01/01/01/01/01/01/01/01<sep/><s_nm> DATE OPSE<sep/><s_nm> HANDUPPOWER</s_nm><s_num> 30.12221</s_num><s_price> 1,945.00</s_price><sep/><s_nm> SUSPENGGANGURG.GUSTAGUR GUSTAGANGKANGURGUSTAGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTG'</li><li>'<s_cord-v2><s_menu><s_nm> GST INVOLICE</s_nm><s_price> ORIGINAL FOR KEGINGLI</s_nm><s_price> WOUCE BREGRAMING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHI'</li><li>'<s_cord-v2><s_menu><s_nm> TAX INVOICE</s_nm><s_price> ORIGINAL FOR AQUALIZE</s_nm><s_price> SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO '</li></ul> |
| 1 | <ul><li>'<s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LTB.</s_nm><s_unitprice> HIRAKUD POWER</s_nm><sep/><s_nm> ASH WPTCH BRIOGE</s_nm><s_unitprice> TIMOL CATE BRIOUS DATE</s_nm><s_unitprice> SUSCEE</s_nm><s_unitprice> SUSCE</s_unitprice><s_cnt> 1</s_cnt><s_price> SUSCE</s_price><sep/><s_nm> MSCED</s_nm><s_unitprice> SUSCEE</s_nm><s_unitprice> SUSCE</s_unitprice><s_cnt> 1</s_cnt><s_price> SUSCE</s_price><sep/><s_nm> MICHI CHOCO KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KE'</li><li>'<s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LTB.</s_nm><s_unitprice> HIR A KD POWER</s_nm></s_sub><sep/><s_nm> ASH WEICH BRIOGE</s_nm><s_unitprice> 16.36.36m2</s_unitprice><s_cnt> AGE IMPL CAST SUSIC :RING LETS SUSIC SUSIC SUSIC SUSIC SUSIC SUSIC SUSCCE</s_nm></s_sub><sep/><s_nm> MSCHO</s_nm><s_unitprice> 13.45</s_unitprice><s_cnt> 1.36.36</s_cnt><s_price> 6.36</s_price><sep/><s_nm> SUSPICY TEMPLE</s_nm><s_unitprice> 14.50.13.502</s_unitprice><s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREAT TRIPSE TO WBLE</s_nm><s_unitprice> 13.35.5cs</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50</s_unitprice><s_cnt> 1.00.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYA TEMPLE</s_nm><s_unitprice> 13.50</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYA TEMPLE ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYANG TEMPLE ITEMBLE<s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYANG TEMPLE ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTYPE 3.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATSUPER</s_nm><s_unitprice> 13.35.5cs</s_unitprice><s_cnt> 1.00</s_cnt><s_price> 5.940</s_price><sep/><s_nm> 0.00</s_price><sep/><s_nm> BRETYPETROPICPICPICPICYE</s_nm><s_unitprice> 13.50</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTYPE 3.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATPICYEPIC ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATSUPER</s_nm><s_unitprice> 13.50</s_cnt><s_price> 5.940.00</s_price></s_menu><s_sub_total><s_subtotal_price> 0.00</s_subtotal_price><s_tax_price> 13.50</s_tax_price></s_sub_total><s_total><s_total_price> 31.00</s_cnt><s_price> BK.00</s_total_price></s_total>'</li><li>'<s_cord-v2><s_menu><s_nm> ORI ZHDLE TOMI O JAPAN SUSHIKA JERYA CHARGE</s_nm><s_unitprice> @SAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKAStakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakat'</li></ul> |
| 0 | <ul><li>'<s_cord-v2><s_menu><s_nm> HANDALCO 이미지ES LIMITED</s_nm><s_price> SUNDAYGHOCO SUSHIZEH CINCEHANGKAGHOCO SUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKANG PURCHASE ORDER</s_nm><sep/><s_nm> WANTE CHOCO CAKE CONSULATANCE PYI LOTHO NUMPIC UPICK CHOCO CHOCO CHOCOCO SUSHIZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHER</s_nm><s_discountprice>Nt.Minitie HGHOCEHINE</s_nm><s_discountprice>N.Minitie HGUMAGHO</s_nm><s_discountprice>N</s_nm><s_discountprice>N.Minitie HUMAGHO</s_nm><s_discountprice>N</s_nm><s_discountprice>N</s_discountprice><s_price> 436.0</s_price><sep/><s_nm> OxMini WHEN HUMAGHUNG</s_nm><s_discountprice> SUSHIZEHITEGHOUSHILIZEHENCE COTTING THOGEHGHOCO SUSHIZEHITEGHTGHOLIZEHGHOLIZEHGHOLIZEHGHOLIZEHGPICYGLIZEHGHTG SOUTING SUSHIZEHITEGHTGHOLIZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEH'</li><li>'<s_cord-v2><s_menu><s_nm> WINGllaco Industries Limited</s_nm><s_unitprice> LIKING PICCE CHOCOLOGY VICE</s_nm><s_unitprice> LIKING SUSHIBILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILI'</li><li>'<s_cord-v2><s_menu><s_nm> HINDALCO INDUSTRIES LIMITED</s_nm><s_price> GSTING&NAACHI201</s_price><sep/><s_nm> WBABUPOWER HEROGUSTAMPURGANGKANCE CHOCOLOGALINGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGA'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Gopal2002/setfit_zeon")
# Run inference
preds = model("<s_cord-v2><s_menu><s_nm> HINALCO INDUSTRIES LTB. HIRAKUR</s_nm><s_unitprice> 1344</s_unitprice><s_cnt> 1</s_cnt><s_price> 4,436</s_price><sep/><s_nm> ASTRICA BRIOC</s_nm><s_unitprice> 12.082</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price> SUSPICY TEMPURA HIRAKUR</s_nm><s_unitprice> 12.027.00.0020</s_discountprice><s_price> PAK SUSHI HIRAKURURUR</s_nm><s_unitprice> 12.027.00.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price> 4,436</s_price><sep/><s_nm> SUSHI SALT CALLOCALI</s_nm><s_unitprice> 12.027.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 1,003</s_discountprice><s_price> 1,00</s_price></s_menu><s_sub_total><s_subtotal_price> 3,003</s_subtotal_price><s_discount_price> 3,003<sep/> 0.00</s_discount_price></s_sub_total><s_total><s_total_price> 3,00</s_total_price><s_cashprice> 3,00</s_cashprice><s_changeprice> 1,00</s_changeprice></s_total>")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 5 | 107.8041 | 763 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 47 |
| 1 | 51 |
| 2 | 50 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0022 | 1 | 0.3004 | - |
| 0.1094 | 50 | 0.2457 | - |
| 0.2188 | 100 | 0.1464 | - |
| 0.3282 | 150 | 0.0079 | - |
| 0.4376 | 200 | 0.0028 | - |
| 0.5470 | 250 | 0.0027 | - |
| 0.6565 | 300 | 0.0017 | - |
| 0.7659 | 350 | 0.0014 | - |
| 0.8753 | 400 | 0.0015 | - |
| 0.9847 | 450 | 0.0011 | - |
| 1.0941 | 500 | 0.001 | - |
| 1.2035 | 550 | 0.0011 | - |
| 1.3129 | 600 | 0.001 | - |
| 1.4223 | 650 | 0.0011 | - |
| 1.5317 | 700 | 0.0011 | - |
| 1.6411 | 750 | 0.0009 | - |
| 1.7505 | 800 | 0.0008 | - |
| 1.8600 | 850 | 0.001 | - |
| 1.9694 | 900 | 0.0009 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.2
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'<s_cord-v2><s_menu><s_nm> M/S. JOSEPH SUNA</s_nm><s_num> DATABI<sep/><s_nm> Tankipaa.MIRAKU SAMBALPUR.768Q16</s_nm><s_num> DMB Nb N.9861345883<sep/><s_nm> Deals Which :Altypes of ChickenGNALOR REGUMING)</s_nm><s_num> DISALI<sep/><s_nm> WINNALIZED</s_nm><s_num> CHOCO SUSPECIALIZE</s_nm><s_num> TWICENCHE<sep/><s_nm> SHRANGKANG POWER</s_nm><s_num> LATHOCO TWICENKO:</s_nm><s_num> JERYUNG CHOCO TWICENKO:</s_nm><s_num> JERYUNG HZYGANGKAN<sep/><s_nm> DIFF-SAWALAPUKU SAMBALPUR.76801GHOLIZEG DATE</s_nm><s_num> DATE</s_nm><s_num> DATE:</s_nm><s_num> 01/01/01/01/01/01/01/01/01/01/01/01/01/01/01/01/01<sep/><s_nm> PAN No.:</s_nm><s_num> PPODATE</s_nm><s_num> 01/01/01/01/01/01/01/01/01/01/01<sep/><s_nm> DATE OPSE<sep/><s_nm> HANDUPPOWER</s_nm><s_num> 30.12221</s_num><s_price> 1,945.00</s_price><sep/><s_nm> SUSPENGGANGURG.GUSTAGUR GUSTAGANGKANGURGUSTAGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTG'</li><li>'<s_cord-v2><s_menu><s_nm> GST INVOLICE</s_nm><s_price> ORIGINAL FOR KEGINGLI</s_nm><s_price> WOUCE BREGRAMING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHI'</li><li>'<s_cord-v2><s_menu><s_nm> TAX INVOICE</s_nm><s_price> ORIGINAL FOR AQUALIZE</s_nm><s_price> SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO '</li></ul> |
| 1 | <ul><li>'<s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LTB.</s_nm><s_unitprice> HIRAKUD POWER</s_nm><sep/><s_nm> ASH WPTCH BRIOGE</s_nm><s_unitprice> TIMOL CATE BRIOUS DATE</s_nm><s_unitprice> SUSCEE</s_nm><s_unitprice> SUSCE</s_unitprice><s_cnt> 1</s_cnt><s_price> SUSCE</s_price><sep/><s_nm> MSCED</s_nm><s_unitprice> SUSCEE</s_nm><s_unitprice> SUSCE</s_unitprice><s_cnt> 1</s_cnt><s_price> SUSCE</s_price><sep/><s_nm> MICHI CHOCO KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KE'</li><li>'<s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LTB.</s_nm><s_unitprice> HIR A KD POWER</s_nm></s_sub><sep/><s_nm> ASH WEICH BRIOGE</s_nm><s_unitprice> 16.36.36m2</s_unitprice><s_cnt> AGE IMPL CAST SUSIC :RING LETS SUSIC SUSIC SUSIC SUSIC SUSIC SUSIC SUSCCE</s_nm></s_sub><sep/><s_nm> MSCHO</s_nm><s_unitprice> 13.45</s_unitprice><s_cnt> 1.36.36</s_cnt><s_price> 6.36</s_price><sep/><s_nm> SUSPICY TEMPLE</s_nm><s_unitprice> 14.50.13.502</s_unitprice><s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREAT TRIPSE TO WBLE</s_nm><s_unitprice> 13.35.5cs</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50</s_unitprice><s_cnt> 1.00.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYA TEMPLE</s_nm><s_unitprice> 13.50</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYA TEMPLE ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYANG TEMPLE ITEMBLE<s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYANG TEMPLE ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTYPE 3.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATSUPER</s_nm><s_unitprice> 13.35.5cs</s_unitprice><s_cnt> 1.00</s_cnt><s_price> 5.940</s_price><sep/><s_nm> 0.00</s_price><sep/><s_nm> BRETYPETROPICPICPICPICYE</s_nm><s_unitprice> 13.50</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTYPE 3.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATPICYEPIC ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATSUPER</s_nm><s_unitprice> 13.50</s_cnt><s_price> 5.940.00</s_price></s_menu><s_sub_total><s_subtotal_price> 0.00</s_subtotal_price><s_tax_price> 13.50</s_tax_price></s_sub_total><s_total><s_total_price> 31.00</s_cnt><s_price> BK.00</s_total_price></s_total>'</li><li>'<s_cord-v2><s_menu><s_nm> ORI ZHDLE TOMI O JAPAN SUSHIKA JERYA CHARGE</s_nm><s_unitprice> @SAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKAStakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakat'</li></ul> |
| 0 | <ul><li>'<s_cord-v2><s_menu><s_nm> HANDALCO 이미지ES LIMITED</s_nm><s_price> SUNDAYGHOCO SUSHIZEH CINCEHANGKAGHOCO SUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKANG PURCHASE ORDER</s_nm><sep/><s_nm> WANTE CHOCO CAKE CONSULATANCE PYI LOTHO NUMPIC UPICK CHOCO CHOCO CHOCOCO SUSHIZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHER</s_nm><s_discountprice>Nt.Minitie HGHOCEHINE</s_nm><s_discountprice>N.Minitie HGUMAGHO</s_nm><s_discountprice>N</s_nm><s_discountprice>N.Minitie HUMAGHO</s_nm><s_discountprice>N</s_nm><s_discountprice>N</s_discountprice><s_price> 436.0</s_price><sep/><s_nm> OxMini WHEN HUMAGHUNG</s_nm><s_discountprice> SUSHIZEHITEGHOUSHILIZEHENCE COTTING THOGEHGHOCO SUSHIZEHITEGHTGHOLIZEHGHOLIZEHGHOLIZEHGHOLIZEHGPICYGLIZEHGHTG SOUTING SUSHIZEHITEGHTGHOLIZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEH'</li><li>'<s_cord-v2><s_menu><s_nm> WINGllaco Industries Limited</s_nm><s_unitprice> LIKING PICCE CHOCOLOGY VICE</s_nm><s_unitprice> LIKING SUSHIBILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILI'</li><li>'<s_cord-v2><s_menu><s_nm> HINDALCO INDUSTRIES LIMITED</s_nm><s_price> GSTING&NAACHI201</s_price><sep/><s_nm> WBABUPOWER HEROGUSTAMPURGANGKANCE CHOCOLOGALINGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGA'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Gopal2002/setfit_zeon")
# Run inference
preds = model("<s_cord-v2><s_menu><s_nm> HINALCO INDUSTRIES LTB. HIRAKUR</s_nm><s_unitprice> 1344</s_unitprice><s_cnt> 1</s_cnt><s_price> 4,436</s_price><sep/><s_nm> ASTRICA BRIOC</s_nm><s_unitprice> 12.082</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price> SUSPICY TEMPURA HIRAKUR</s_nm><s_unitprice> 12.027.00.0020</s_discountprice><s_price> PAK SUSHI HIRAKURURUR</s_nm><s_unitprice> 12.027.00.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price> 4,436</s_price><sep/><s_nm> SUSHI SALT CALLOCALI</s_nm><s_unitprice> 12.027.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 1,003</s_discountprice><s_price> 1,00</s_price></s_menu><s_sub_total><s_subtotal_price> 3,003</s_subtotal_price><s_discount_price> 3,003<sep/> 0.00</s_discount_price></s_sub_total><s_total><s_total_price> 3,00</s_total_price><s_cashprice> 3,00</s_cashprice><s_changeprice> 1,00</s_changeprice></s_total>")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 5 | 107.8041 | 763 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 47 |
| 1 | 51 |
| 2 | 50 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0022 | 1 | 0.3004 | - |
| 0.1094 | 50 | 0.2457 | - |
| 0.2188 | 100 | 0.1464 | - |
| 0.3282 | 150 | 0.0079 | - |
| 0.4376 | 200 | 0.0028 | - |
| 0.5470 | 250 | 0.0027 | - |
| 0.6565 | 300 | 0.0017 | - |
| 0.7659 | 350 | 0.0014 | - |
| 0.8753 | 400 | 0.0015 | - |
| 0.9847 | 450 | 0.0011 | - |
| 1.0941 | 500 | 0.001 | - |
| 1.2035 | 550 | 0.0011 | - |
| 1.3129 | 600 | 0.001 | - |
| 1.4223 | 650 | 0.0011 | - |
| 1.5317 | 700 | 0.0011 | - |
| 1.6411 | 750 | 0.0009 | - |
| 1.7505 | 800 | 0.0008 | - |
| 1.8600 | 850 | 0.001 | - |
| 1.9694 | 900 | 0.0009 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.2
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-small-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "<s_cord-v2><s_menu><s_nm> HINALCO INDUSTRIES LTB. HIRAKUR</s_nm><s_unitprice> 1344</s_unitprice><s_cnt> 1</s_cnt><s_price> 4,436</s_price><sep/><s_nm> ASTRICA BRIOC</s_nm><s_unitprice> 12.082</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price> SUSPICY TEMPURA HIRAKUR</s_nm><s_unitprice> 12.027.00.0020</s_discountprice><s_price> PAK SUSHI HIRAKURURUR</s_nm><s_unitprice> 12.027.00.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price> 4,436</s_price><sep/><s_nm> SUSHI SALT CALLOCALI</s_nm><s_unitprice> 12.027.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 1,003</s_discountprice><s_price> 1,00</s_price></s_menu><s_sub_total><s_subtotal_price> 3,003</s_subtotal_price><s_discount_price> 3,003<sep/> 0.00</s_discount_price></s_sub_total><s_total><s_total_price> 3,00</s_total_price><s_cashprice> 3,00</s_cashprice><s_changeprice> 1,00</s_changeprice></s_total>"}, {"text": "<s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LIIMITED</s_nm><s_discountprice> -*OQU<sep/><s_nm> PYCHE DESIGNCE PURCHASE ORDER</s_nm><sep/><s_nm> WHOCO SUSHINGGA CHOCO SUSHINGGA CHOCO SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANGHONG SUSHINGGANG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONGHONG POWER</s_nm><s_price> SUSHINGGANGHONGHONGHONG POWER</s_nm><s_price> SUSHINGGANGHONGHONG POWER</s_nm><s_price> SUSHINGGANGGANGGANGGANGGANGGANGGANGGANGGA SUSHINGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGA"}, {"text": "<s_cord-v2><s_menu><s_nm> TAX INVOLICE</s_nm><s_unitprice> 2310</s_unitprice><s_cnt> 2</s_cnt><s_price> A</s_price><sep/><s_nm> BLOOM Combustion India Putu</s_nm><s_unitprice> 150,000</s_unitprice><s_cnt> 2</s_cnt><s_discountprice> 1,040<sep/><s_nm> A.C.B.C.B.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C"}, {"text": "<s_cord-v2><s_menu><s_nm> HINA DLCO INDUSTRIES LIMITED</s_nm><s_price> SUSHIZE</s_price><sep/><s_nm> PONE CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO CHOCO CHOCOCO CHOCO CHOCOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO"}, {"text": "<s_cord-v2><s_menu><s_nm> HNDALCO INDUSTRIES LID. HIRAKUND POWER</s_nm><s_num> ASH WITCH BRIOGE</s_nm><s_num> HPOM: 01-Hou DATE: 0001-social<sep/><s_nm> SAH</s_nm><s_num> DAGE NUMBER : 1</s_etc><sep/><s_nm> SINO TAKING ODAYS OATE INTINE TAKE CROSS Wc OLOAD SLOOPPERATOR</s_nm><s_num> JGGC</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JERCEA</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_num><s_price> 0.00</s_price><sep/><s_nm> ORANGA</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_total>"}], "inference": true, "model-index": [{"name": "SetFit with BAAI/bge-small-en-v1.5", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,770 |
jwhong2006/wikisum
|
jwhong2006
|
summarization
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"en",
"dataset:d0rj/wikisum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-06T16:09:26Z |
2024-06-06T16:30:30+00:00
| 28 | 0 |
---
base_model: t5-small
datasets:
- d0rj/wikisum
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- rouge
pipeline_tag: summarization
tags:
- generated_from_trainer
widget:
- text: 'Do not shuck or wash your oysters. Oysters taste best when you shuck them
immediately before eating them. In addition, keeping oysters in their shells makes
them easier to store and reduces the chance that they''ll go bad. If your oysters
came pre-shucked in a plastic container, store them in the freezer until you''re
ready to use them. Leave the grit and dirt on the oysters. This will keep them
moist and will help to insulate the meat. Pour ice into a small bowl or other
open-top container. Grab a bowl, small cooler, or similar container that you can
place inside your fridge. Make sure this container has an open top or removable
lid. Then, pour a layer of ice into the bottom of the container. Do not keep your
oysters in a sealed or closed-top container. Doing so will suffocate them. You
may need to change your ice during the refrigeration process, so do not pour any
into the container if you won''t be able to check your oysters regularly. Place
your oysters on top of the ice bed deep side down. Just like seafood merchants,
you''ll be storing your oysters on ice to keep them as chilled and fresh as possible.
Make sure to turn each of your oysters so that the deeper side faces down, a technique
that will help them better retain their juices. Dampen a towel with cold water
and place it on top of the oysters. Dip a thin, clean kitchen towel in cold water
and ring out the excess liquid. Then, gently lay the towel on top of the oysters.
This will keep the oysters from drying out while preventing fresh water poisoning.
If you''d prefer, you can cover the oysters with damp paper towels or newspaper
instead. Oysters are salt water creatures, so submerging them in fresh water will
essentially poison them and lead to their death. Place your container in a refrigerator.
If possible, set your refrigerator to a temperature between 35 and 40 °F (2 and
4 °C). Make sure to store your oysters above any raw meat so the juices don''t
drip down onto your shellfish. If possible, check on your oysters at least once
a day while they''re in the fridge. If the towel dries out, dampen it again. If
the ice in your container melts, pour it out and replace it with new ice. Keep
your oysters in the fridge for up to 2 days. For safety, remove and consume your
oysters within about 2 days of initially storing them. Though some oysters may
last for a week or longer, eating them that late puts you at greater risk of food
poisoning and other unwanted ailments. If your oysters came with an expiration
date, use that as your guide for maximum storage time. Freeze your oysters if
you need to store them for more than 2 days. Shuck the oysters when you’re ready
to eat them. Once you finish storing the oysters, run them under cool water and
open their shells. Then, run a knife under the flat side of the oyster and pop
the shell off. Before eating, carefully separate the oyster from the rest of the
shell using a knife. Before eating an oyster, inspect it to make sure it is still
good. If the shell appears to be damaged, if the oyster smells foul, or if the
meat is a cloudy shade of grey, brown, black, or pink, throw the oyster away.
Keep the oysters in their shells and rinse them off. Storing your oysters inside
their shells will make them less likely to go bad and, in some cases, better preserve
their taste. Unlike refrigerating oysters, rinsing the shells under cold water
to clean them off prevents any bacteria from living on the oysters. If you don''t
have enough room in your freezer to keep full-shelled oysters, you can shuck them
before storage. If you do so, save the internal liquor for later use. Place your
oysters in a freezer-safe container. To keep your oysters safe, place them inside
a moisture-resistant, freezer-safe bag. If you''re storing shucked oysters, you
can use a firm plastic container instead. To prevent freezer burns, leave no more
than 0.5 in (1.3 cm) of head space in the container. Pour oyster liquor into the
container if you’re freezing shucked oysters. To help your shucked oysters retain
their juiciness, pour the liquor you removed during the shucking process into
your freezer-safe container. Keep pouring until you''ve completely submerged the
oysters inside the liquid. If you don''t have enough liquor to fill the container,
pour in water as well. Seal the container. If you''re using a resealable bag,
press any excess air out of it using your fingers. Then, seal your container right
before you put it into the freezer. Unlike with refrigerated oysters, closing
the container will help better preserve your shellfish during long-term storage.
If you''re using a solid plastic container, make sure the lid you seal it with
is air-tight. Make sure to write the initial storage date on your container. Keep
your oysters in the freezer for up to 3 months. When frozen properly, fresh oysters
should last for between 2 and 3 months. To make sure your oysters aren''t going
bad, look over them regularly and remove any that have cracked shells or cloudy
meat that is a pink, black, brown, or grey color. While your oysters may remain
safe to eat during this time, the taste will degrade gradually. Thaw your oysters
in the fridge before consuming. Carefully take your oyster container out of the
freezer and place it in a clear, open part of your refrigerator. Depending on
the exact temperature of your appliances, the thawing process could take up to
20 hours to complete. Thawing your oysters using this method gives them a slightly
longer shelf life, meaning you don''t have to use them immediately after they
thaw. If you''d like, you can thaw your oysters by submerging their container
in cold water. However, you''ll have to consume them immediately after they thaw,
otherwise they''ll go bad. '
model-index:
- name: wikisum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wikisum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an [wikisum](https://huggingface.co/datasets/d0rj/wikisum) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2922
- Rouge1: 0.1811
- Rouge2: 0.0673
- Rougel: 0.147
- Rougelsum: 0.147
- Gen Len: 19.0
## Model description
t5-small model fine-tuned on wikisum dataset.
## Intended uses & limitations
Intended use: sumamrization of informatic articles.
Limitations : may generate misleading information.
## Training and evaluation data
check out [wikisum](https://huggingface.co/datasets/d0rj/wikisum) dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.5807 | 0.2236 | 500 | 2.3647 | 0.1813 | 0.0635 | 0.1452 | 0.1453 | 19.0 |
| 2.5059 | 0.4472 | 1000 | 2.3190 | 0.1823 | 0.0663 | 0.1473 | 0.1473 | 19.0 |
| 2.4945 | 0.6708 | 1500 | 2.3003 | 0.1808 | 0.0666 | 0.1468 | 0.1467 | 19.0 |
| 2.4963 | 0.8945 | 2000 | 2.2922 | 0.1811 | 0.0673 | 0.147 | 0.147 | 19.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wikisum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an [wikisum](https://huggingface.co/datasets/d0rj/wikisum) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2922
- Rouge1: 0.1811
- Rouge2: 0.0673
- Rougel: 0.147
- Rougelsum: 0.147
- Gen Len: 19.0
## Model description
t5-small model fine-tuned on wikisum dataset.
## Intended uses & limitations
Intended use: sumamrization of informatic articles.
Limitations : may generate misleading information.
## Training and evaluation data
check out [wikisum](https://huggingface.co/datasets/d0rj/wikisum) dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.5807 | 0.2236 | 500 | 2.3647 | 0.1813 | 0.0635 | 0.1452 | 0.1453 | 19.0 |
| 2.5059 | 0.4472 | 1000 | 2.3190 | 0.1823 | 0.0663 | 0.1473 | 0.1473 | 19.0 |
| 2.4945 | 0.6708 | 1500 | 2.3003 | 0.1808 | 0.0666 | 0.1468 | 0.1467 | 19.0 |
| 2.4963 | 0.8945 | 2000 | 2.2922 | 0.1811 | 0.0673 | 0.147 | 0.147 | 19.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
{"base_model": "t5-small", "datasets": ["d0rj/wikisum"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "pipeline_tag": "summarization", "tags": ["generated_from_trainer"], "widget": [{"text": "Do not shuck or wash your oysters. Oysters taste best when you shuck them immediately before eating them. In addition, keeping oysters in their shells makes them easier to store and reduces the chance that they'll go bad. If your oysters came pre-shucked in a plastic container, store them in the freezer until you're ready to use them. Leave the grit and dirt on the oysters. This will keep them moist and will help to insulate the meat. Pour ice into a small bowl or other open-top container. Grab a bowl, small cooler, or similar container that you can place inside your fridge. Make sure this container has an open top or removable lid. Then, pour a layer of ice into the bottom of the container. Do not keep your oysters in a sealed or closed-top container. Doing so will suffocate them. You may need to change your ice during the refrigeration process, so do not pour any into the container if you won't be able to check your oysters regularly. Place your oysters on top of the ice bed deep side down. Just like seafood merchants, you'll be storing your oysters on ice to keep them as chilled and fresh as possible. Make sure to turn each of your oysters so that the deeper side faces down, a technique that will help them better retain their juices. Dampen a towel with cold water and place it on top of the oysters. Dip a thin, clean kitchen towel in cold water and ring out the excess liquid. Then, gently lay the towel on top of the oysters. This will keep the oysters from drying out while preventing fresh water poisoning. If you'd prefer, you can cover the oysters with damp paper towels or newspaper instead. Oysters are salt water creatures, so submerging them in fresh water will essentially poison them and lead to their death. Place your container in a refrigerator. If possible, set your refrigerator to a temperature between 35 and 40 °F (2 and 4 °C). Make sure to store your oysters above any raw meat so the juices don't drip down onto your shellfish. If possible, check on your oysters at least once a day while they're in the fridge. If the towel dries out, dampen it again. If the ice in your container melts, pour it out and replace it with new ice. Keep your oysters in the fridge for up to 2 days. For safety, remove and consume your oysters within about 2 days of initially storing them. Though some oysters may last for a week or longer, eating them that late puts you at greater risk of food poisoning and other unwanted ailments. If your oysters came with an expiration date, use that as your guide for maximum storage time. Freeze your oysters if you need to store them for more than 2 days. Shuck the oysters when you’re ready to eat them. Once you finish storing the oysters, run them under cool water and open their shells. Then, run a knife under the flat side of the oyster and pop the shell off. Before eating, carefully separate the oyster from the rest of the shell using a knife. Before eating an oyster, inspect it to make sure it is still good. If the shell appears to be damaged, if the oyster smells foul, or if the meat is a cloudy shade of grey, brown, black, or pink, throw the oyster away. Keep the oysters in their shells and rinse them off. Storing your oysters inside their shells will make them less likely to go bad and, in some cases, better preserve their taste. Unlike refrigerating oysters, rinsing the shells under cold water to clean them off prevents any bacteria from living on the oysters. If you don't have enough room in your freezer to keep full-shelled oysters, you can shuck them before storage. If you do so, save the internal liquor for later use. Place your oysters in a freezer-safe container. To keep your oysters safe, place them inside a moisture-resistant, freezer-safe bag. If you're storing shucked oysters, you can use a firm plastic container instead. To prevent freezer burns, leave no more than 0.5 in (1.3 cm) of head space in the container. Pour oyster liquor into the container if you’re freezing shucked oysters. To help your shucked oysters retain their juiciness, pour the liquor you removed during the shucking process into your freezer-safe container. Keep pouring until you've completely submerged the oysters inside the liquid. If you don't have enough liquor to fill the container, pour in water as well. Seal the container. If you're using a resealable bag, press any excess air out of it using your fingers. Then, seal your container right before you put it into the freezer. Unlike with refrigerated oysters, closing the container will help better preserve your shellfish during long-term storage. If you're using a solid plastic container, make sure the lid you seal it with is air-tight. Make sure to write the initial storage date on your container. Keep your oysters in the freezer for up to 3 months. When frozen properly, fresh oysters should last for between 2 and 3 months. To make sure your oysters aren't going bad, look over them regularly and remove any that have cracked shells or cloudy meat that is a pink, black, brown, or grey color. While your oysters may remain safe to eat during this time, the taste will degrade gradually. Thaw your oysters in the fridge before consuming. Carefully take your oyster container out of the freezer and place it in a clear, open part of your refrigerator. Depending on the exact temperature of your appliances, the thawing process could take up to 20 hours to complete. Thawing your oysters using this method gives them a slightly longer shelf life, meaning you don't have to use them immediately after they thaw. If you'd like, you can thaw your oysters by submerging their container in cold water. However, you'll have to consume them immediately after they thaw, otherwise they'll go bad. "}], "model-index": [{"name": "wikisum", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 46,771 |
SZTAKI-HLT/Bert2Bert-HunSum-1
|
SZTAKI-HLT
|
text2text-generation
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"hubert",
"bert",
"summarization",
"hu",
"dataset:SZTAKI-HLT/HunSum-1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-07T10:35:58Z |
2023-01-24T16:21:16+00:00
| 137 | 2 |
---
datasets:
- SZTAKI-HLT/HunSum-1
language:
- hu
metrics:
- rouge
pipeline_tag: text2text-generation
tags:
- hubert
- bert
- summarization
inference:
parameters:
num_beams: 5
length_penalty: 2
max_length: 128
no_repeat_ngram_size: 3
early_stopping: true
---
# Model Card for Bert2Bert-HunSum-1
The Bert2Bert-HunSum-1 is a Hungarian abstractive summarization model, which was trained on the [SZTAKI-HLT/HunSum-1 dataset](https://huggingface.co/datasets/SZTAKI-HLT/HunSum-1).
The model is based on [SZTAKI-HLT/hubert-base-cc](https://huggingface.co/SZTAKI-HLT/hubert-base-cc).
## Intended uses & limitations
- **Model type:** Text Summarization
- **Language(s) (NLP):** Hungarian
- **Resource(s) for more information:**
- [GitHub Repo](https://github.com/dorinapetra/summarization)
## Parameters
- **Batch Size:** 13
- **Learning Rate:** 5e-5
- **Weight Decay:** 0.01
- **Warmup Steps:** 16000
- **Epochs:** 15
- **no_repeat_ngram_size:** 3
- **num_beams:** 5
- **early_stopping:** True
## Results
| Metric | Value |
| :------------ | :------------------------------------------ |
| ROUGE-1 | 28.52 |
| ROUGE-2 | 10.35 |
| ROUGE-L | 20.07 |
## Citation
If you use our model, please cite the following paper:
```
@inproceedings {HunSum-1,
title = {{HunSum-1: an Abstractive Summarization Dataset for Hungarian}},
booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
year = {2023},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {Barta, Botond and Lakatos, Dorina and Nagy, Attila and Nyist, Mil{\'{a}}n Konor and {\'{A}}cs, Judit},
pages = {231--243}
}
```
| null |
Non_BioNLP
|
# Model Card for Bert2Bert-HunSum-1
The Bert2Bert-HunSum-1 is a Hungarian abstractive summarization model, which was trained on the [SZTAKI-HLT/HunSum-1 dataset](https://huggingface.co/datasets/SZTAKI-HLT/HunSum-1).
The model is based on [SZTAKI-HLT/hubert-base-cc](https://huggingface.co/SZTAKI-HLT/hubert-base-cc).
## Intended uses & limitations
- **Model type:** Text Summarization
- **Language(s) (NLP):** Hungarian
- **Resource(s) for more information:**
- [GitHub Repo](https://github.com/dorinapetra/summarization)
## Parameters
- **Batch Size:** 13
- **Learning Rate:** 5e-5
- **Weight Decay:** 0.01
- **Warmup Steps:** 16000
- **Epochs:** 15
- **no_repeat_ngram_size:** 3
- **num_beams:** 5
- **early_stopping:** True
## Results
| Metric | Value |
| :------------ | :------------------------------------------ |
| ROUGE-1 | 28.52 |
| ROUGE-2 | 10.35 |
| ROUGE-L | 20.07 |
## Citation
If you use our model, please cite the following paper:
```
@inproceedings {HunSum-1,
title = {{HunSum-1: an Abstractive Summarization Dataset for Hungarian}},
booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
year = {2023},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {Barta, Botond and Lakatos, Dorina and Nagy, Attila and Nyist, Mil{\'{a}}n Konor and {\'{A}}cs, Judit},
pages = {231--243}
}
```
|
{"datasets": ["SZTAKI-HLT/HunSum-1"], "language": ["hu"], "metrics": ["rouge"], "pipeline_tag": "text2text-generation", "tags": ["hubert", "bert", "summarization"], "inference": {"parameters": {"num_beams": 5, "length_penalty": 2, "max_length": 128, "no_repeat_ngram_size": 3, "early_stopping": true}}}
|
task
|
[
"SUMMARIZATION"
] | 46,772 |
iryneko571/mt5-translation-ja_zh-game-small
|
iryneko571
|
translation
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"translation",
"ja",
"zh",
"dataset:ayymen/Pontoon-Translations",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-02-05T15:32:42Z |
2024-07-04T10:41:17+00:00
| 23 | 0 |
---
datasets:
- ayymen/Pontoon-Translations
language:
- ja
- zh
license: mit
pipeline_tag: translation
widget:
- text: <-ja2zh-> フェルディナント・ラッサール \n は、プロイセンの政治学者、哲学者、法学者、社会主義者、労働運動指導者。ドイツ社会民主党の母体となる全ドイツ労働者同盟の創設者である。社会主義共和政の統一ドイツを目指しつつも、……
inference:
parameters:
repetition_penalty: 1.4
---
# new model:
iryneko571/mt5-small-translation-ja_zh<br>
better in most aspects, more like a base model with pure data<br>
数值上更好,是用更纯的数据跑的<br>
includes colab notebook<br>
已经配置了colab的notebook,可以直接测试翻译,不需要安装<br>
# Release Notes
* this model is finetuned from mt5-small
* will use about 1.5G vram, fp16 will be less than 1G(if batch size is small), cpu inference speed is ok anyway
* used a trimmed piece of pontoon dataset that features ja to zh translate part
* also scrambled bunch of the translation from mt5-translation-ja_zh-game-v0.1, which is a large amount of junk for training
* reason for making this model<br>
Testing the ideas of using pontoon dataset<br>
Constructing a flexible translation evaluation standard, need a poor performance model to compare
# 模型公开声明
* 这个模型由 mt5-translation-ja_zh 继续训练得来
* 使用大于1.5g的显存,fp16载入会小于1G显存(batch拉高会大于1G),使用cpu运作速度也还可以
* 制作这个模型的原因<br>
尝试使用现有的模型精调,小模型训练速度奇快<br>
* 本模型缺陷<br>
本身就是用来做测试的,虽然使用的显存很低,但翻译能力奇差<br>
# 简单的后端应用
还没稳定调试,慎用
* https://github.com/IryNeko/RabbitCafe
# A more precise example using it
# 使用指南
```python
from transformers import pipeline
model_name="iryneko571/mt5-translation-ja_zh-game-small"
#pipe = pipeline("translation",model=model_name,tokenizer=model_name,repetition_penalty=1.4,batch_size=1,max_length=256)
pipe = pipeline("translation",
model=model_name,
repetition_penalty=1.4,
batch_size=1,
max_length=256
)
def translate_batch(batch, language='<-ja2zh->'): # batch is an array of string
i=0 # quickly format the list
while i<len(batch):
batch[i]=f'{language} {batch[i]}'
i+=1
translated=pipe(batch)
result=[]
i=0
while i<len(translated):
result.append(translated[i]['translation_text'])
i+=1
return result
inputs=[]
print(translate_batch(inputs))
```
# Roadmap
* Scamble more translation results from gpt4o, gpt3.5, claude, mt5 and other sources to make a more messy input
* increase translation accuracy
* apply lora on it and apply int8 inference to further decrease hardware requirements
* create onnx and ncnn model
# how to find me
# 找到作者
Discord Server:<br>
https://discord.gg/JmjPmJjA<br>
If you need any help, a test server or just want to chat<br>
如果需要帮助,需要试试最新的版本,或者只是为了看下我是啥,可以进channel看看(这边允许发布这个吗?)<br>
| null |
Non_BioNLP
|
# new model:
iryneko571/mt5-small-translation-ja_zh<br>
better in most aspects, more like a base model with pure data<br>
数值上更好,是用更纯的数据跑的<br>
includes colab notebook<br>
已经配置了colab的notebook,可以直接测试翻译,不需要安装<br>
# Release Notes
* this model is finetuned from mt5-small
* will use about 1.5G vram, fp16 will be less than 1G(if batch size is small), cpu inference speed is ok anyway
* used a trimmed piece of pontoon dataset that features ja to zh translate part
* also scrambled bunch of the translation from mt5-translation-ja_zh-game-v0.1, which is a large amount of junk for training
* reason for making this model<br>
Testing the ideas of using pontoon dataset<br>
Constructing a flexible translation evaluation standard, need a poor performance model to compare
# 模型公开声明
* 这个模型由 mt5-translation-ja_zh 继续训练得来
* 使用大于1.5g的显存,fp16载入会小于1G显存(batch拉高会大于1G),使用cpu运作速度也还可以
* 制作这个模型的原因<br>
尝试使用现有的模型精调,小模型训练速度奇快<br>
* 本模型缺陷<br>
本身就是用来做测试的,虽然使用的显存很低,但翻译能力奇差<br>
# 简单的后端应用
还没稳定调试,慎用
* https://github.com/IryNeko/RabbitCafe
# A more precise example using it
# 使用指南
```python
from transformers import pipeline
model_name="iryneko571/mt5-translation-ja_zh-game-small"
#pipe = pipeline("translation",model=model_name,tokenizer=model_name,repetition_penalty=1.4,batch_size=1,max_length=256)
pipe = pipeline("translation",
model=model_name,
repetition_penalty=1.4,
batch_size=1,
max_length=256
)
def translate_batch(batch, language='<-ja2zh->'): # batch is an array of string
i=0 # quickly format the list
while i<len(batch):
batch[i]=f'{language} {batch[i]}'
i+=1
translated=pipe(batch)
result=[]
i=0
while i<len(translated):
result.append(translated[i]['translation_text'])
i+=1
return result
inputs=[]
print(translate_batch(inputs))
```
# Roadmap
* Scamble more translation results from gpt4o, gpt3.5, claude, mt5 and other sources to make a more messy input
* increase translation accuracy
* apply lora on it and apply int8 inference to further decrease hardware requirements
* create onnx and ncnn model
# how to find me
# 找到作者
Discord Server:<br>
https://discord.gg/JmjPmJjA<br>
If you need any help, a test server or just want to chat<br>
如果需要帮助,需要试试最新的版本,或者只是为了看下我是啥,可以进channel看看(这边允许发布这个吗?)<br>
|
{"datasets": ["ayymen/Pontoon-Translations"], "language": ["ja", "zh"], "license": "mit", "pipeline_tag": "translation", "widget": [{"text": "<-ja2zh-> フェルディナント・ラッサール \\n は、プロイセンの政治学者、哲学者、法学者、社会主義者、労働運動指導者。ドイツ社会民主党の母体となる全ドイツ労働者同盟の創設者である。社会主義共和政の統一ドイツを目指しつつも、……"}], "inference": {"parameters": {"repetition_penalty": 1.4}}}
|
task
|
[
"TRANSLATION"
] | 46,773 |
aandyluna/mt5-small-finetuned-amazon-en-es
|
aandyluna
|
summarization
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-04T04:07:34Z |
2024-12-04T05:55:17+00:00
| 42 | 0 |
---
base_model: google/mt5-small
library_name: transformers
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0193
- Rouge1: 17.0896
- Rouge2: 8.362
- Rougel: 16.735
- Rougelsum: 16.8131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.6768 | 1.0 | 1209 | 3.2182 | 17.7059 | 9.3629 | 17.1633 | 17.2774 |
| 3.6447 | 2.0 | 2418 | 3.1029 | 17.4241 | 8.8479 | 16.9706 | 16.9578 |
| 3.4304 | 3.0 | 3627 | 3.0759 | 15.8371 | 7.5702 | 15.2312 | 15.3302 |
| 3.3128 | 4.0 | 4836 | 3.0706 | 16.9745 | 8.7666 | 16.559 | 16.6638 |
| 3.2203 | 5.0 | 6045 | 3.0339 | 16.3788 | 7.769 | 15.9624 | 16.027 |
| 3.1651 | 6.0 | 7254 | 3.0283 | 16.4083 | 8.0507 | 15.9778 | 16.1114 |
| 3.1387 | 7.0 | 8463 | 3.0188 | 16.6289 | 8.2229 | 16.3528 | 16.3952 |
| 3.1139 | 8.0 | 9672 | 3.0193 | 17.0896 | 8.362 | 16.735 | 16.8131 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0193
- Rouge1: 17.0896
- Rouge2: 8.362
- Rougel: 16.735
- Rougelsum: 16.8131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.6768 | 1.0 | 1209 | 3.2182 | 17.7059 | 9.3629 | 17.1633 | 17.2774 |
| 3.6447 | 2.0 | 2418 | 3.1029 | 17.4241 | 8.8479 | 16.9706 | 16.9578 |
| 3.4304 | 3.0 | 3627 | 3.0759 | 15.8371 | 7.5702 | 15.2312 | 15.3302 |
| 3.3128 | 4.0 | 4836 | 3.0706 | 16.9745 | 8.7666 | 16.559 | 16.6638 |
| 3.2203 | 5.0 | 6045 | 3.0339 | 16.3788 | 7.769 | 15.9624 | 16.027 |
| 3.1651 | 6.0 | 7254 | 3.0283 | 16.4083 | 8.0507 | 15.9778 | 16.1114 |
| 3.1387 | 7.0 | 8463 | 3.0188 | 16.6289 | 8.2229 | 16.3528 | 16.3952 |
| 3.1139 | 8.0 | 9672 | 3.0193 | 17.0896 | 8.362 | 16.735 | 16.8131 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
{"base_model": "google/mt5-small", "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 46,774 |
ocm/distilbert-base-uncased-finetuned-emotion
|
ocm
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-10-29T11:15:47Z |
2022-11-05T17:45:19+00:00
| 10 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.935
name: Accuracy
- type: f1
value: 0.9351083637430424
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Accuracy: 0.935
- F1: 0.9351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7703 | 1.0 | 250 | 0.2588 | 0.918 | 0.9165 |
| 0.2031 | 2.0 | 500 | 0.1773 | 0.928 | 0.9282 |
| 0.1385 | 3.0 | 750 | 0.1593 | 0.934 | 0.9342 |
| 0.1101 | 4.0 | 1000 | 0.1582 | 0.935 | 0.9351 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Accuracy: 0.935
- F1: 0.9351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7703 | 1.0 | 250 | 0.2588 | 0.918 | 0.9165 |
| 0.2031 | 2.0 | 500 | 0.1773 | 0.928 | 0.9282 |
| 0.1385 | 3.0 | 750 | 0.1593 | 0.934 | 0.9342 |
| 0.1101 | 4.0 | 1000 | 0.1582 | 0.935 | 0.9351 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.935, "name": "Accuracy"}, {"type": "f1", "value": 0.9351083637430424, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,775 |
Netta1994/setfit_baai_wix_qa_gpt-4o_improved-cot_chat_few_shot_remove_final_evaluation_e1_one_o
|
Netta1994
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"model-index",
"region:us"
] | 2024-09-23T13:02:44Z |
2024-09-23T13:03:20+00:00
| 7 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Reasoning:
The answer is detailed, specific, and accurately reflects the information provided
in the document. It directly addresses the steps necessary to change the reservation
reference from the service page to the booking calendar.
Evaluation:'
- text: 'Reasoning:
The provided answer describes the process of blocking off time in the calendar
to prevent customers from booking slots during those times. However, the question
specifically asks about removing the time from showing on the booking button,
not just blocking off time. The answer does not address the correct query and
misinterprets the request.
Evaluation:'
- text: 'Reasoning:
The provided answer is broadly accurate but lacks the direct mention of the error
message "You do not have access to Email," which is present in the document. It
also misses the context provided in the document to directly address the user''s
question, and didn''t include verifying the domain as part of enabling the calendar
scheduling and recording.
Evaluation:'
- text: 'Reasoning:
The provided answer is clear and instructive, reflecting the instructions in the
document precisely. It includes all necessary steps, matches the information from
the document, and even addresses prerequisites like having a premium plan and
a connected domain.
Evaluation:'
- text: 'Reasoning:
The answer provided here is accurate and aligns well with the details found in
the document. It outlines all the necessary steps and prerequisites, such as upgrading
to a business & ecommerce premium plan, and correctly explains the process within
the context of Editor X. It also offers relevant additional information about
the visibility of service list pages and member pages, ensuringa comprehensive
response.
Final Evaluation:'
inference: true
model-index:
- name: SetFit with BAAI/bge-base-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.7291666666666666
name: Accuracy
---
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'Reasoning:\nThe answer directly contradicts the correct storing methods detailed in the document. The answer contains advice that could damage jewelry, such as storing it in high humidity areas and keeping diamonds together.\n\nEvaluation:'</li><li>'Reasoning:\nContradiction - The document clearly states that Chopin met Felix Mendelssohn at the music festival in 1834, not Ludwig van Beethoven.\n\nEvaluation:'</li><li>'Reasoning:\nincomplete - The answer is not relevant to what is being asked, it provides information unrelated to the Angel & Faith Season Ten comic book series.\nEvaluation:'</li></ul> |
| 1 | <ul><li>'Reasoning:\nThe answer efficiently captures the main character from the book "Chase In Shadow (Johnnies #1)" and accurately describes the dual aspects of his life, with information directly supported by the document.\nEvaluation:'</li><li>'Reasoning:\nfactual error - The answer includes a factual error that directly contradicts the information available in the document.\nEvaluation:'</li><li>'Reasoning:\nThe answer correctly identifies the main statement of the Equal Rights Amendment and aligns with the content provided in the document.\n\nEvaluation:'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7292 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_wix_qa_gpt-4o_improved-cot_chat_few_shot_remove_final_evaluation_e1_one_o")
# Run inference
preds = model("Reasoning:
The answer is detailed, specific, and accurately reflects the information provided in the document. It directly addresses the steps necessary to change the reservation reference from the service page to the booking calendar.
Evaluation:")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 37.6205 | 156 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 304 |
| 1 | 339 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0006 | 1 | 0.2284 | - |
| 0.0311 | 50 | 0.2525 | - |
| 0.0622 | 100 | 0.2453 | - |
| 0.0933 | 150 | 0.2317 | - |
| 0.1244 | 200 | 0.2263 | - |
| 0.1555 | 250 | 0.2167 | - |
| 0.1866 | 300 | 0.1779 | - |
| 0.2177 | 350 | 0.1659 | - |
| 0.2488 | 400 | 0.1149 | - |
| 0.2799 | 450 | 0.0699 | - |
| 0.3109 | 500 | 0.0595 | - |
| 0.3420 | 550 | 0.0472 | - |
| 0.3731 | 600 | 0.0429 | - |
| 0.4042 | 650 | 0.0343 | - |
| 0.4353 | 700 | 0.0242 | - |
| 0.4664 | 750 | 0.0201 | - |
| 0.4975 | 800 | 0.0137 | - |
| 0.5286 | 850 | 0.0123 | - |
| 0.5597 | 900 | 0.0148 | - |
| 0.5908 | 950 | 0.0119 | - |
| 0.6219 | 1000 | 0.011 | - |
| 0.6530 | 1050 | 0.0129 | - |
| 0.6841 | 1100 | 0.0108 | - |
| 0.7152 | 1150 | 0.0082 | - |
| 0.7463 | 1200 | 0.0131 | - |
| 0.7774 | 1250 | 0.0105 | - |
| 0.8085 | 1300 | 0.0087 | - |
| 0.8396 | 1350 | 0.0097 | - |
| 0.8706 | 1400 | 0.011 | - |
| 0.9017 | 1450 | 0.0056 | - |
| 0.9328 | 1500 | 0.0109 | - |
| 0.9639 | 1550 | 0.0076 | - |
| 0.9950 | 1600 | 0.009 | - |
### Framework Versions
- Python: 3.10.14
- SetFit: 1.1.0
- Sentence Transformers: 3.1.1
- Transformers: 4.44.0
- PyTorch: 2.4.0+cu121
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'Reasoning:\nThe answer directly contradicts the correct storing methods detailed in the document. The answer contains advice that could damage jewelry, such as storing it in high humidity areas and keeping diamonds together.\n\nEvaluation:'</li><li>'Reasoning:\nContradiction - The document clearly states that Chopin met Felix Mendelssohn at the music festival in 1834, not Ludwig van Beethoven.\n\nEvaluation:'</li><li>'Reasoning:\nincomplete - The answer is not relevant to what is being asked, it provides information unrelated to the Angel & Faith Season Ten comic book series.\nEvaluation:'</li></ul> |
| 1 | <ul><li>'Reasoning:\nThe answer efficiently captures the main character from the book "Chase In Shadow (Johnnies #1)" and accurately describes the dual aspects of his life, with information directly supported by the document.\nEvaluation:'</li><li>'Reasoning:\nfactual error - The answer includes a factual error that directly contradicts the information available in the document.\nEvaluation:'</li><li>'Reasoning:\nThe answer correctly identifies the main statement of the Equal Rights Amendment and aligns with the content provided in the document.\n\nEvaluation:'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7292 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_wix_qa_gpt-4o_improved-cot_chat_few_shot_remove_final_evaluation_e1_one_o")
# Run inference
preds = model("Reasoning:
The answer is detailed, specific, and accurately reflects the information provided in the document. It directly addresses the steps necessary to change the reservation reference from the service page to the booking calendar.
Evaluation:")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 37.6205 | 156 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 304 |
| 1 | 339 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0006 | 1 | 0.2284 | - |
| 0.0311 | 50 | 0.2525 | - |
| 0.0622 | 100 | 0.2453 | - |
| 0.0933 | 150 | 0.2317 | - |
| 0.1244 | 200 | 0.2263 | - |
| 0.1555 | 250 | 0.2167 | - |
| 0.1866 | 300 | 0.1779 | - |
| 0.2177 | 350 | 0.1659 | - |
| 0.2488 | 400 | 0.1149 | - |
| 0.2799 | 450 | 0.0699 | - |
| 0.3109 | 500 | 0.0595 | - |
| 0.3420 | 550 | 0.0472 | - |
| 0.3731 | 600 | 0.0429 | - |
| 0.4042 | 650 | 0.0343 | - |
| 0.4353 | 700 | 0.0242 | - |
| 0.4664 | 750 | 0.0201 | - |
| 0.4975 | 800 | 0.0137 | - |
| 0.5286 | 850 | 0.0123 | - |
| 0.5597 | 900 | 0.0148 | - |
| 0.5908 | 950 | 0.0119 | - |
| 0.6219 | 1000 | 0.011 | - |
| 0.6530 | 1050 | 0.0129 | - |
| 0.6841 | 1100 | 0.0108 | - |
| 0.7152 | 1150 | 0.0082 | - |
| 0.7463 | 1200 | 0.0131 | - |
| 0.7774 | 1250 | 0.0105 | - |
| 0.8085 | 1300 | 0.0087 | - |
| 0.8396 | 1350 | 0.0097 | - |
| 0.8706 | 1400 | 0.011 | - |
| 0.9017 | 1450 | 0.0056 | - |
| 0.9328 | 1500 | 0.0109 | - |
| 0.9639 | 1550 | 0.0076 | - |
| 0.9950 | 1600 | 0.009 | - |
### Framework Versions
- Python: 3.10.14
- SetFit: 1.1.0
- Sentence Transformers: 3.1.1
- Transformers: 4.44.0
- PyTorch: 2.4.0+cu121
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-base-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Reasoning:\nThe answer is detailed, specific, and accurately reflects the information provided in the document. It directly addresses the steps necessary to change the reservation reference from the service page to the booking calendar.\n\nEvaluation:"}, {"text": "Reasoning:\nThe provided answer describes the process of blocking off time in the calendar to prevent customers from booking slots during those times. However, the question specifically asks about removing the time from showing on the booking button, not just blocking off time. The answer does not address the correct query and misinterprets the request.\n\nEvaluation:"}, {"text": "Reasoning:\nThe provided answer is broadly accurate but lacks the direct mention of the error message \"You do not have access to Email,\" which is present in the document. It also misses the context provided in the document to directly address the user's question, and didn't include verifying the domain as part of enabling the calendar scheduling and recording.\n\nEvaluation:"}, {"text": "Reasoning:\nThe provided answer is clear and instructive, reflecting the instructions in the document precisely. It includes all necessary steps, matches the information from the document, and even addresses prerequisites like having a premium plan and a connected domain.\n\nEvaluation:"}, {"text": "Reasoning:\nThe answer provided here is accurate and aligns well with the details found in the document. It outlines all the necessary steps and prerequisites, such as upgrading to a business & ecommerce premium plan, and correctly explains the process within the context of Editor X. It also offers relevant additional information about the visibility of service list pages and member pages, ensuringa comprehensive response.\n\nFinal Evaluation:"}], "inference": true, "model-index": [{"name": "SetFit with BAAI/bge-base-en-v1.5", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.7291666666666666, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,776 |
tamarab/bert-emotion
|
tamarab
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-05-20T16:45:12Z |
2022-05-20T19:12:14+00:00
| 116 | 0 |
---
datasets:
- tweet_eval
license: apache-2.0
metrics:
- precision
- recall
tags:
- generated_from_trainer
model-index:
- name: bert-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- type: precision
value: 0.7462955517135084
name: Precision
- type: recall
value: 0.7095634380533169
name: Recall
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1347
- Precision: 0.7463
- Recall: 0.7096
- Fscore: 0.7209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8385 | 1.0 | 815 | 0.8366 | 0.7865 | 0.5968 | 0.6014 |
| 0.5451 | 2.0 | 1630 | 0.9301 | 0.7301 | 0.6826 | 0.6947 |
| 0.2447 | 3.0 | 2445 | 1.1347 | 0.7463 | 0.7096 | 0.7209 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1347
- Precision: 0.7463
- Recall: 0.7096
- Fscore: 0.7209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8385 | 1.0 | 815 | 0.8366 | 0.7865 | 0.5968 | 0.6014 |
| 0.5451 | 2.0 | 1630 | 0.9301 | 0.7301 | 0.6826 | 0.6947 |
| 0.2447 | 3.0 | 2445 | 1.1347 | 0.7463 | 0.7096 | 0.7209 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
{"datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["precision", "recall"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "precision", "value": 0.7462955517135084, "name": "Precision"}, {"type": "recall", "value": 0.7095634380533169, "name": "Recall"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,777 |
Netta1994/setfit_baai_gpt-4o_cot-few_shot_remove_final_evaluation_e1_one_big_model_1727080822.0
|
Netta1994
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"region:us"
] | 2024-09-23T08:40:22Z |
2024-09-23T08:40:53+00:00
| 7 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'The provided answer is overall accurate, complete, and relevant to the query
about performing a male manicure. The steps, including soaking hands, scrubbing
nails, clipping nails, applying cuticle remover, pushing back cuticles, smoothing
edges with a file, and moisturizing, are all appropriately mentioned with detailed
instructions. The answer aligns well with the informationprovided in the document.
Final evaluation:'
- text: 'The answer provided discusses Kieron Freeman and his time with Notts County,
specifically mentioning that Martin Allen signed him when he went on loan there.
However, the question is about Aaron Pryor''s manager during his boxing career,
which is completely unrelated to the context provided in the answer and the document.
Final evaluation:'
- text: 'The provided answer states that "The concern regarding the usage of online
casinos is the risk of user data being compromised." However, this response is
irrelevant to the question asking about the concern of the husband of the person
who wrote the message on July 10, 2011, which completely mismatches the context
provided in the document.
Considering that the evaluation focuses on the accuracy and relevance of the provided
answer based on the provided question and document:
The final evaluation:'
- text: 'Evaluation:
The answer provided is completely unrelated to the question asked about painting
countertops. The answer discusses how to meet a crush for the first time, which
is not relevant to painting countertops.
Final evaluation:'
- text: 'The answer provided accurately states that Allan Cox''s First Class Delivery
was launched on a H128-10W for his Level 1 certification flight. This information
is directly retrieved from the document.
The final evaluation:'
inference: true
---
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>"The answer incorporates several elements not mentioned in the provided document, specifically the references to a virtual reality training technique and its impact on player decision-making. These aspects are not mentioned in the document, rendering the information inaccurate.\n\nIn the actual document, the offensive outburst of the Nuggets is attributed to coach Brian Shaw's strategy of encouraging players to take the first available shot in the rhythm of the offense and push the ball after makes and misses. The comfort and effectiveness in these strategies coming together are cited as reasons for the increased scoring.\n\nTherefore, the provided answer is flawed due to the inclusionof fabricated details.\n\nThe final evaluation:"</li><li>'The answer provided contains several inaccuracies and fabrications that do not align with the content of the document.\n\n1. **Film Under-Exposure Statement**: The answer erroneously states that "film under-exposes better than a digital sensor," whereas the document clearly mentions that "film over-exposes better than a digital sensor."\n\n2. **Color Compression Errors**: The answer claims film compresses exposure range into the "bottom end" and colors saturate to black, but the document specifies it compresses into the "top end" and colors desaturate to white.\n\n3. **Sensor Details**: The answer inaccurately mentions that digital sensors capture all three colors at each point when in reality it is stated that "Film also captures all three colors at every point. Digital sensors (all but Fovian, anyway) capture only one color at each point and then interpolate between them."\n\n4. **Megapixel Comparison**: The claim that the author finds "5MP digital sensors of today to be about comparable to high-end, professional film" is incorrect. The document actually compares "10MP digital sensors of today" to common, non-professional film for resolution.\n\nGiven these significant discrepancies and inaccuracies, the answer provided is unreliable and does not accurately reflect the document\'s content.\n\nThe final evaluation:'</li><li>'The provided answer addresses an entirely different topic—providing details about fighters and outcomes from a mixed martial arts event rather than discussing the main conflict in the third book of the Arcana Chronicles by Kresley Cole. The answer did not address the question at all. \n\nFinal evaluation:'</li></ul> |
| 1 | <ul><li>"The answer provided addresses the key elements that align with the best practices outlined in the document:\n\n1. **Getting to Know the Client**: The answer mentions understanding the client's needs, wants, and goals before starting the web design process, which is directly echoed in the document.\n\n2. **Signing a Contract**: The answer highlights the importance of having a detailed contract that outlines the scope of the project, costs, and how future revisions will be managed. This ensures that there are clear parameters and a point of reference if excessive requests arise.\n\n3. **Honesty and Diplomacy**: The answer advises showcasing a sense of honesty and diplomacy, particularly when extra charges are necessary or when certain requests are unfeasible. This aligns with the document's advice on effective communication and managing client expectations diplomatically.\n\nOverall, the answer aligns well with the recommendations provided in the document.\n\nThe final evaluation:"</li><li>"The answer provided is accurate and aligns well with the content of the document. The document discusses the importance of drawing on an author's own emotional experiences, particularly pain and emotion, to create genuine and relatable characters. This approach helps forge a connection between the reader and the characters.\n\nFinal evaluation:"</li><li>'The answer is directly substantiated by the document. It clearly mentions that Mauro Rubin, the CEO of JoinPad, was present at the event at Talent Garden Calabiana, Milan. The answer is concise and provides the exact information asked in the question without any extraneous details. \n\nFinal evaluation:'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_gpt-4o_cot-few_shot_remove_final_evaluation_e1_one_big_model_1727080822.0")
# Run inference
preds = model("The answer provided accurately states that Allan Cox's First Class Delivery was launched on a H128-10W for his Level 1 certification flight. This information is directly retrieved from the document.
The final evaluation:")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 12 | 75.0147 | 301 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 199 |
| 1 | 209 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0010 | 1 | 0.2249 | - |
| 0.0490 | 50 | 0.2456 | - |
| 0.0980 | 100 | 0.1748 | - |
| 0.1471 | 150 | 0.0861 | - |
| 0.1961 | 200 | 0.051 | - |
| 0.2451 | 250 | 0.0613 | - |
| 0.2941 | 300 | 0.0325 | - |
| 0.3431 | 350 | 0.0128 | - |
| 0.3922 | 400 | 0.0075 | - |
| 0.4412 | 450 | 0.007 | - |
| 0.4902 | 500 | 0.004 | - |
| 0.5392 | 550 | 0.0027 | - |
| 0.5882 | 600 | 0.0023 | - |
| 0.6373 | 650 | 0.0019 | - |
| 0.6863 | 700 | 0.0018 | - |
| 0.7353 | 750 | 0.0017 | - |
| 0.7843 | 800 | 0.0017 | - |
| 0.8333 | 850 | 0.0016 | - |
| 0.8824 | 900 | 0.0016 | - |
| 0.9314 | 950 | 0.0015 | - |
| 0.9804 | 1000 | 0.0014 | - |
### Framework Versions
- Python: 3.10.14
- SetFit: 1.1.0
- Sentence Transformers: 3.1.1
- Transformers: 4.44.0
- PyTorch: 2.4.0+cu121
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>"The answer incorporates several elements not mentioned in the provided document, specifically the references to a virtual reality training technique and its impact on player decision-making. These aspects are not mentioned in the document, rendering the information inaccurate.\n\nIn the actual document, the offensive outburst of the Nuggets is attributed to coach Brian Shaw's strategy of encouraging players to take the first available shot in the rhythm of the offense and push the ball after makes and misses. The comfort and effectiveness in these strategies coming together are cited as reasons for the increased scoring.\n\nTherefore, the provided answer is flawed due to the inclusionof fabricated details.\n\nThe final evaluation:"</li><li>'The answer provided contains several inaccuracies and fabrications that do not align with the content of the document.\n\n1. **Film Under-Exposure Statement**: The answer erroneously states that "film under-exposes better than a digital sensor," whereas the document clearly mentions that "film over-exposes better than a digital sensor."\n\n2. **Color Compression Errors**: The answer claims film compresses exposure range into the "bottom end" and colors saturate to black, but the document specifies it compresses into the "top end" and colors desaturate to white.\n\n3. **Sensor Details**: The answer inaccurately mentions that digital sensors capture all three colors at each point when in reality it is stated that "Film also captures all three colors at every point. Digital sensors (all but Fovian, anyway) capture only one color at each point and then interpolate between them."\n\n4. **Megapixel Comparison**: The claim that the author finds "5MP digital sensors of today to be about comparable to high-end, professional film" is incorrect. The document actually compares "10MP digital sensors of today" to common, non-professional film for resolution.\n\nGiven these significant discrepancies and inaccuracies, the answer provided is unreliable and does not accurately reflect the document\'s content.\n\nThe final evaluation:'</li><li>'The provided answer addresses an entirely different topic—providing details about fighters and outcomes from a mixed martial arts event rather than discussing the main conflict in the third book of the Arcana Chronicles by Kresley Cole. The answer did not address the question at all. \n\nFinal evaluation:'</li></ul> |
| 1 | <ul><li>"The answer provided addresses the key elements that align with the best practices outlined in the document:\n\n1. **Getting to Know the Client**: The answer mentions understanding the client's needs, wants, and goals before starting the web design process, which is directly echoed in the document.\n\n2. **Signing a Contract**: The answer highlights the importance of having a detailed contract that outlines the scope of the project, costs, and how future revisions will be managed. This ensures that there are clear parameters and a point of reference if excessive requests arise.\n\n3. **Honesty and Diplomacy**: The answer advises showcasing a sense of honesty and diplomacy, particularly when extra charges are necessary or when certain requests are unfeasible. This aligns with the document's advice on effective communication and managing client expectations diplomatically.\n\nOverall, the answer aligns well with the recommendations provided in the document.\n\nThe final evaluation:"</li><li>"The answer provided is accurate and aligns well with the content of the document. The document discusses the importance of drawing on an author's own emotional experiences, particularly pain and emotion, to create genuine and relatable characters. This approach helps forge a connection between the reader and the characters.\n\nFinal evaluation:"</li><li>'The answer is directly substantiated by the document. It clearly mentions that Mauro Rubin, the CEO of JoinPad, was present at the event at Talent Garden Calabiana, Milan. The answer is concise and provides the exact information asked in the question without any extraneous details. \n\nFinal evaluation:'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_gpt-4o_cot-few_shot_remove_final_evaluation_e1_one_big_model_1727080822.0")
# Run inference
preds = model("The answer provided accurately states that Allan Cox's First Class Delivery was launched on a H128-10W for his Level 1 certification flight. This information is directly retrieved from the document.
The final evaluation:")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 12 | 75.0147 | 301 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 199 |
| 1 | 209 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0010 | 1 | 0.2249 | - |
| 0.0490 | 50 | 0.2456 | - |
| 0.0980 | 100 | 0.1748 | - |
| 0.1471 | 150 | 0.0861 | - |
| 0.1961 | 200 | 0.051 | - |
| 0.2451 | 250 | 0.0613 | - |
| 0.2941 | 300 | 0.0325 | - |
| 0.3431 | 350 | 0.0128 | - |
| 0.3922 | 400 | 0.0075 | - |
| 0.4412 | 450 | 0.007 | - |
| 0.4902 | 500 | 0.004 | - |
| 0.5392 | 550 | 0.0027 | - |
| 0.5882 | 600 | 0.0023 | - |
| 0.6373 | 650 | 0.0019 | - |
| 0.6863 | 700 | 0.0018 | - |
| 0.7353 | 750 | 0.0017 | - |
| 0.7843 | 800 | 0.0017 | - |
| 0.8333 | 850 | 0.0016 | - |
| 0.8824 | 900 | 0.0016 | - |
| 0.9314 | 950 | 0.0015 | - |
| 0.9804 | 1000 | 0.0014 | - |
### Framework Versions
- Python: 3.10.14
- SetFit: 1.1.0
- Sentence Transformers: 3.1.1
- Transformers: 4.44.0
- PyTorch: 2.4.0+cu121
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-base-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "The provided answer is overall accurate, complete, and relevant to the query about performing a male manicure. The steps, including soaking hands, scrubbing nails, clipping nails, applying cuticle remover, pushing back cuticles, smoothing edges with a file, and moisturizing, are all appropriately mentioned with detailed instructions. The answer aligns well with the informationprovided in the document.\n\nFinal evaluation:"}, {"text": "The answer provided discusses Kieron Freeman and his time with Notts County, specifically mentioning that Martin Allen signed him when he went on loan there. However, the question is about Aaron Pryor's manager during his boxing career, which is completely unrelated to the context provided in the answer and the document.\n\nFinal evaluation:"}, {"text": "The provided answer states that \"The concern regarding the usage of online casinos is the risk of user data being compromised.\" However, this response is irrelevant to the question asking about the concern of the husband of the person who wrote the message on July 10, 2011, which completely mismatches the context provided in the document.\n\nConsidering that the evaluation focuses on the accuracy and relevance of the provided answer based on the provided question and document:\nThe final evaluation:"}, {"text": "Evaluation:\nThe answer provided is completely unrelated to the question asked about painting countertops. The answer discusses how to meet a crush for the first time, which is not relevant to painting countertops.\n\nFinal evaluation:"}, {"text": "The answer provided accurately states that Allan Cox's First Class Delivery was launched on a H128-10W for his Level 1 certification flight. This information is directly retrieved from the document.\n\nThe final evaluation:"}], "inference": true}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,778 |
pkaustubh4/QnA_BERT
|
pkaustubh4
|
question-answering
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-08-16T13:38:04Z |
2023-08-16T20:43:31+00:00
| 10 | 0 |
---
datasets:
- squad
language:
- en
license: mit
---
# Question Answering with DistilBERT README
This repository contains code to train a Question Answering model using the DistilBERT architecture on the SQuAD (Stanford Question Answering Dataset) dataset. The model is trained to answer questions based on a given context paragraph. The training process utilizes PyTorch, the Hugging Face transformers library, and the datasets library.
## Prerequisites
Before running the code, make sure you have the following installed:
NVIDIA GPU (for faster training, optional but recommended)
NVIDIA CUDA Toolkit (if using GPU)
Python 3.x
Jupyter Notebook or another Python environment
## Installation
You can set up your environment by running the following commands:
```bash
!nvidia-smi # Check GPU availability
!pip install -q transformers datasets torch tqdm
```
## Usage
- Loading and Preprocessing Data: The code loads the SQuAD dataset and selects a subset for training. You can adjust the subset_size variable to control the size of the subset.
- Tokenization and Dataset Creation: The QADataset class is defined to preprocess and tokenize the data for training. It converts question and context pairs into tokenized format suitable for DistilBERT input. It also prepares the start and end positions for the answers in the context.
- Model Configuration: The model is based on the DistilBERT architecture, specifically the "distilbert-base-cased" version.
- Training Loop: The code sets up a training loop for a specified number of epochs. It trains the model to predict the start and end positions of the answer span in the context paragraph.
- Saving the Model: The final trained model is saved to a specified directory in Google Drive. You can adjust the final_model_output_dir variable to change the save location.
## Training
To train the model, follow these steps:
- Run the provided code cells in a Jupyter Notebook or Python environment.
- The code will load the dataset, tokenize it, and set up the training loop.
- The model's training progress will be displayed using a progress bar.
- After training completes, the final trained model will be saved to the specified directory in Google Drive.
## Notes
- This code assumes you are using Google Colab to access the Google Drive API for saving the model. If you're using a different environment, you might need to adjust the saving mechanism.
- Make sure you have sufficient space in your Google Drive to save the model.
- You can modify hyperparameters such as batch size, learning rate, and the number of epochs to experiment with different training settings.
## Credits
- The code in this repository is based on the Hugging Face Transformers library and the SQuAD dataset.
- [DistilBERT](https://huggingface.co/transformers/model_doc/distilbert.html)
- [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/)
## License
This code is provided under the MIT License. Feel free to modify and use it as needed.
| null |
Non_BioNLP
|
# Question Answering with DistilBERT README
This repository contains code to train a Question Answering model using the DistilBERT architecture on the SQuAD (Stanford Question Answering Dataset) dataset. The model is trained to answer questions based on a given context paragraph. The training process utilizes PyTorch, the Hugging Face transformers library, and the datasets library.
## Prerequisites
Before running the code, make sure you have the following installed:
NVIDIA GPU (for faster training, optional but recommended)
NVIDIA CUDA Toolkit (if using GPU)
Python 3.x
Jupyter Notebook or another Python environment
## Installation
You can set up your environment by running the following commands:
```bash
!nvidia-smi # Check GPU availability
!pip install -q transformers datasets torch tqdm
```
## Usage
- Loading and Preprocessing Data: The code loads the SQuAD dataset and selects a subset for training. You can adjust the subset_size variable to control the size of the subset.
- Tokenization and Dataset Creation: The QADataset class is defined to preprocess and tokenize the data for training. It converts question and context pairs into tokenized format suitable for DistilBERT input. It also prepares the start and end positions for the answers in the context.
- Model Configuration: The model is based on the DistilBERT architecture, specifically the "distilbert-base-cased" version.
- Training Loop: The code sets up a training loop for a specified number of epochs. It trains the model to predict the start and end positions of the answer span in the context paragraph.
- Saving the Model: The final trained model is saved to a specified directory in Google Drive. You can adjust the final_model_output_dir variable to change the save location.
## Training
To train the model, follow these steps:
- Run the provided code cells in a Jupyter Notebook or Python environment.
- The code will load the dataset, tokenize it, and set up the training loop.
- The model's training progress will be displayed using a progress bar.
- After training completes, the final trained model will be saved to the specified directory in Google Drive.
## Notes
- This code assumes you are using Google Colab to access the Google Drive API for saving the model. If you're using a different environment, you might need to adjust the saving mechanism.
- Make sure you have sufficient space in your Google Drive to save the model.
- You can modify hyperparameters such as batch size, learning rate, and the number of epochs to experiment with different training settings.
## Credits
- The code in this repository is based on the Hugging Face Transformers library and the SQuAD dataset.
- [DistilBERT](https://huggingface.co/transformers/model_doc/distilbert.html)
- [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/)
## License
This code is provided under the MIT License. Feel free to modify and use it as needed.
|
{"datasets": ["squad"], "language": ["en"], "license": "mit"}
|
task
|
[
"QUESTION_ANSWERING"
] | 46,779 |
fcogidi/pegasus-arxiv
|
fcogidi
|
summarization
|
[
"transformers.js",
"onnx",
"pegasus",
"text2text-generation",
"summarization",
"en",
"region:us"
] | 2024-11-30T22:51:19Z |
2024-12-01T00:20:43+00:00
| 18 | 0 |
---
language:
- en
library_name: transformers.js
pipeline_tag: summarization
---
https://huggingface.co/google/pegasus-arxiv with ONNX weights compatible with Transformers.js.
**NOTE**: As of 2024-11-30 Transformers.js does not support `PegasusTokenizer`.
| null |
Non_BioNLP
| ERROR: type should be string, got "\nhttps://huggingface.co/google/pegasus-arxiv with ONNX weights compatible with Transformers.js.\n\n**NOTE**: As of 2024-11-30 Transformers.js does not support `PegasusTokenizer`." |
{"language": ["en"], "library_name": "transformers.js", "pipeline_tag": "summarization"}
|
task
|
[
"SUMMARIZATION"
] | 46,780 |
wildgrape14/distilbert-base-uncased-finetuned-emotion
|
wildgrape14
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-10T11:57:40Z |
2023-08-10T11:57:57+00:00
| 8 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.925
name: Accuracy
- type: f1
value: 0.9249069634242804
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2187
- Accuracy: 0.925
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8142 | 1.0 | 250 | 0.3171 | 0.9095 | 0.9082 |
| 0.2524 | 2.0 | 500 | 0.2187 | 0.925 | 0.9249 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2187
- Accuracy: 0.925
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8142 | 1.0 | 250 | 0.3171 | 0.9095 | 0.9082 |
| 0.2524 | 2.0 | 500 | 0.2187 | 0.925 | 0.9249 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.925, "name": "Accuracy"}, {"type": "f1", "value": 0.9249069634242804, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,781 |
cvapict/yhi-message-type-all-MiniLM-L6-v2
|
cvapict
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-08-30T11:48:06Z |
2023-08-30T11:48:43+00:00
| 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# cvapict/yhi-message-type-all-MiniLM-L6-v2
{'accuracy': 0.8048780487804879}
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("cvapict/yhi-message-type-all-MiniLM-L6-v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# cvapict/yhi-message-type-all-MiniLM-L6-v2
{'accuracy': 0.8048780487804879}
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("cvapict/yhi-message-type-all-MiniLM-L6-v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,782 |
IMISLab/GreekT5-umt5-base-greeksum
|
IMISLab
|
summarization
|
[
"transformers",
"pytorch",
"umt5",
"text2text-generation",
"summarization",
"el",
"arxiv:2311.07767",
"arxiv:2304.00869",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-12T12:08:04Z |
2024-08-02T09:14:45+00:00
| 41 | 1 |
---
language:
- el
license: apache-2.0
metrics:
- bertscore
- rouge
pipeline_tag: summarization
widget:
- text: 'Να πάρει ""ξεκάθαρη"" θέση σε σχέση με τον κίνδυνο μετάδοσης του κορονοϊού
από τη Θεία Κοινωνία καλεί την κυβέρνηση και τον Πρωθυπουργό με ανακοίνωσή
του τη Δευτέρα ο ΣΥΡΙΖΑ. ""Την ώρα που κλείνουν προληπτικά και ορθώς σχολεία,
πανεπιστήμια, γήπεδα και λαμβάνονται ειδικά μέτρα ακόμη και για την ορκωμοσία
της νέας Προέδρου της Δημοκρατίας, η Ιερά Σύνοδος της Εκκλησίας της Ελλάδος
επιμένει ότι το μυστήριο της Θείας Κοινωνίας δεν εγκυμονεί κινδύνους μετάδοσης
του κορονοϊού, καλώντας όμως τις ευπαθείς ομάδες να μείνουν σπίτι τους"",
αναφέρει η αξιωματική αντιπολίτευση και συνεχίζει: ""Ωστόσο το πρόβλημα
δεν είναι τι λέει η Ιερά Σύνοδος, αλλά τι λέει η Πολιτεία και συγκεκριμένα
ο ΕΟΔΥ και το Υπουργείο Υγείας, που έχουν και την αποκλειστική κοινωνική
ευθύνη για τη μη εξάπλωση του ιού και την προστασία των πολιτών"". ""Σε άλλες
ευρωπαϊκές χώρες με εξίσου μεγάλο σεβασμό στη Χριστιανική πίστη και στο
θρησκευτικό συναίσθημα, τα μυστήρια της Εκκλησίας είτε αναστέλλονται είτε
τροποποιούν το τελετουργικό τους. Μόνο στη χώρα μας έχουμε το θλιβερό προνόμιο
μιας πολιτείας που δεν τολμά να πει το αυτονόητο"", προσθέτει, τονίζοντας
ότι ""η κυβέρνηση λοιπόν και το Υπουργείο Υγείας οφείλουν να πάρουν δημόσια
μια ξεκάθαρη θέση και να μην θυσιάζουν τη δημόσια Υγεία στο βωμό του πολιτικού
κόστους"". ""Συμφωνούν ότι η Θεία Κοινωνία δεν εγκυμονεί κινδύνους μετάδοσης
του κορονοϊού; Δεν είναι θέμα ευσέβειας αλλά κοινωνικής ευθύνης. Και με
τη Δημόσια υγεία δεν μπορούμε να παίζουμε"", καταλήγει η ανακοίνωση του
γραφείου Τύπου του ΣΥΡΙΖΑ. *ΠΩΣ ΜΕΤΑΔΙΔΕΤΑΙ. Χρήσιμος οδηγός για να προστατευθείτε
από τον κορονοϊό *ΤΑ ΝΟΣΟΚΟΜΕΙΑ ΑΝΑΦΟΡΑΣ. Ποια θα υποδέχονται τα κρούσματα
κορονοϊού στην Ελλάδα. *ΤΑΞΙΔΙΑ. Κορονοϊός και αεροδρόμια: Τι να προσέξετε.
*Η ΕΠΙΔΗΜΙΑ ΣΤΟΝ ΠΛΑΝΗΤΗ. Δείτε LIVE χάρτη με την εξέλιξη του κορονοϊού.'
example_title: Politics
- text: 'Με άρθρο της με τίτλο ""Επιστρέψτε στη θεά Ίριδα το σώμα της"", η εφημερίδα
Washington Post τάσσεται υπέρ της επιστροφής των γλυπτών του Παρθενώνα, στην
Αθήνα, στην κοιτίδα του δυτικού πολιτισμού, τώρα που οι συνθήκες έχουν
αλλάξει για την πάλαι ποτέ αυτοκρατορία της Αγγλίας. Αναφερόμενη στις διαφορετικές
απόψεις Ελλήνων και Βρετανών για τα γλυπτά, η συντάκτρια του άρθρου, τονίζει
ότι το αίτημα επιστροφής έχει αποκτήσει μεγαλύτερο βάρος τώρα που το Ηνωμένο
Βασίλειο εγκαταλείπει την Ευρωπαϊκή Ένωση. «Όταν ο Τόμας Μπρους, έβδομος
κόμης του Έλγιν, και 11ος κόμης του Κινκαρντίν, ταξίδεψε στην Ακρόπολη στις
αρχές της δεκαετίας του 1800, ως Βρετανός πρέσβης στην Οθωμανική Αυτοκρατορία,
ο Σουλτάνος λέγεται ότι του έδωσε την άδεια να ""αφαιρέσει μερικά τμήματα
λίθων με παλιές επιγραφές και μορφές"". Ο λόρδος το εξέλαβε ως άδεια να
αφαιρέσει, περίπου, 17 αγάλματα από τα αετώματα, 15 μετώπες, και 247 πόδια
(περίπου 75 μέτρα) της ζωφόρου από τον Παρθενώνα για να τα φέρει στην καλή
μας Αγγλία» αναφέρει στο άρθρο της η Washington Post. Και συνεχίζει λέγοντας
ότι «οι καιροί όμως άλλαξαν και αυτό που θεωρούνταν πιο δικαιολογημένο
τότε, σήμερα θεωρείται ευρέως ως μια ασυνείδητη πράξη». Σε μία έμμεση
αναφορά στο Brexit, και υπεραμυνόμενη της επιστροφής των γλυπτών στην Ελλάδα,
η συντάκτρια του άρθρου της Washington Post, διερωτάται: «Γιατί να παραμείνουν
τα μάρμαρα στη φύλαξη της χώρας που επιμένει ότι ανήκει μόνο στον εαυτό
της;» και σημειώνει: «Η Ελλάδα τιμάται σήμερα ως λίκνο του δυτικού πολιτισμού,
και ποιοί παρά οι Έλληνες θα μπορούσαν να στεγάσουν τον πολιτισμό αυτό;».'
example_title: Culture
- text: Το Διεθνές Νομισματικό Ταμείο (ΔΝΤ) προβλέπει ένα χρέος ρεκόρ των πλούσιων
χωρών το 2014 και κρίνει ""πιθανό"" να υπάρξει επιπλέον συμβολή των πιο
εύπορων προσώπων και των πολυεθνικών επιχειρήσεων σε μια μείωση των ελλειμμάτων,
σύμφωνα με έκθεσή του η οποία δόθηκε σήμερα στη δημοσιότητα. ""Φαίνεται
ότι υπάρχει ένα επαρκές περιθώριο σε πολλές ανεπτυγμένες χώρες για να
αντληθούν επιπλέον έσοδα από τα πιο υψηλά εισοδήματα"", υπογραμμίζει το
ΔΝΤ στην έκθεσή του για την δημοσιονομική επιτήρηση. Κατά μέσον όρο, το
δημόσιο χρέος των ανεπτυγμένων χωρών αναμένεται να φτάσει το ""ιστορικό
υψηλό"" του 110% του ΑΕΠ τους το 2014, δηλαδή θα βρίσκεται 35 μονάδες πιο
πάνω από το ποσοστό του 2007, επισημαίνει το ΔΝΤ στην έκθεσή του. Με μια
αναλογία χρέους/ΑΕΠ της τάξης του 242,3% που προβλέπεται να έχει το 2014,
η Ιαπωνία αναμένεται να βρίσκεται πρώτη στον κατάλογο των υπερχρεωμένων
ανεπτυγμένων χωρών, ακολουθούμενη από την Ελλάδα (174%), την Ιταλία (133,1%)
και την Πορτογαλία (125,3%). Οι ΗΠΑ, οι οποίες έχουν παραλύσει από ένα δημοσιονομικό
αδιέξοδο και απειλούνται από μια πιθανή στάση πληρωμών, θα δουν το χρέος
τους να ανεβαίνει στο 107,3% του ΑΕΠ τους το 2014, δηλαδή θα βρίσκονται πολύ
πιο μπροστά από την Γαλλία και το 94,8% στο οποίο αναμένεται ότι θα ανέρχεται
την ερχόμενη χρονιά το χρέος της. Η δεύτερη οικονομική δύναμη του κόσμου,
η Κίνα δίνει την εικόνα του καλού μαθητή με μια αναλογία χρέους/ΑΕΠ μόνον
20,9% την ερχόμενη χρονιά, σύμφωνα με το ΔΝΤ. ""Παρά τις προόδους στη μείωση
των ελλειμμάτων, οι δημοσιονομικές αδυναμίες παραμένουν βαθιές στις ανεπτυγμένες
χώρες"", επισημαίνεται στην έκθεση. Απέναντι σε αυτές τις ανισορροπίες,
το ΔΝΤ εκφράζει την ανησυχία του καθώς βλέπει ""ένα φορολογικό σύστημα
υπό πίεση"", το οποίο ευνοεί τον ανταγωνισμό μεταξύ των κρατών και επιτρέπει
στους εύπορους φορολογούμενους και στις πολυεθνικές να ελαφρύνουν τους φόρους
τους. Μόνον στις ΗΠΑ, το ΔΝΤ υπολογίζει σε 60 δισεκατομμύρια δολάρια τα έσοδα
που φέρεται ότι χάνονται λόγω τεχνικών βελτιστοποίησης της φορολογίας των
πολυεθνικών. Το ΔΝΤ επισημαίνει ότι οι τελευταίες δεκαετίες έχουν σηματοδοτηθεί
από μια ""θεαματική άνοδο"" του πλούτου του ""1%"" των πιο πλούσιων, κυρίως
στον αγγλοσαξονικό κόσμο, χωρίς ωστόσο η φορολογία να έχει προσαρμοστεί
σε αυτήν την εξέλιξη. ""Σε πολλές χώρες θα ήταν πιθανό να επιβληθούν επιπλέον
φόροι σε αυτούς που διαθέτουν τα πιο υψηλά εισοδήματα"", υπογραμμίζει το
ΔΝΤ, το οποίο κρίνει εξάλλου ""συνετό"" τον υπολογισμό σε 4.500 δισεκατομμύρια
δολάρια των διαθεσίμων που αποκρύπτονται από ιδιώτες σε φορολογικούς παραδείσους.
Οι χώρες της Ομάδας των Είκοσι (G20), οι υπουργοί Οικονομικών των οποίων
συναντώνται αυτήν την εβδομάδα στην Ουάσινγκτον, ξεκίνησαν πρόσφατα πρωτοβουλίες
για την πάταξη της φοροδιαφυγής.
example_title: Economics
model-index:
- name: IMISLab/GreekT5-umt5-base-greeksum
results:
- task:
type: summarization
name: Summarization
dataset:
name: GreekSUM
type: greeksum
config: default
split: test
metrics:
- type: rouge
value: 26.67
name: ROUGE-1
verified: true
- type: rouge
value: 13.0
name: ROUGE-2
verified: true
- type: rouge
value: 22.42
name: ROUGE-L
verified: true
- type: bertscore
value: 73.41
name: BERTScore
verified: true
---
# GreekT5 (umt5-base-greeksum)
A Greek news summarization model trained on [GreekSum](https://github.com/iakovosevdaimon/GreekSUM).
This model is part of a series of models trained as part of our research paper:
[Giarelis, N., Mastrokostas, C., & Karacapilidis, N. (2024) GreekT5: Sequence-to-Sequence Models for Greek News Summarization](https://link.springer.com/chapter/10.1007/978-3-031-63215-0_5) [\[arxiv\]](https://arxiv.org/abs/2311.07767)
The proposed models were trained and evaluated on the same dataset against [GreekBART](https://arxiv.org/abs/2304.00869).
For more information see the evaluation section below.
## Training dataset
The training dataset of `GreekT5-umt5-base-greeksum` is [GreekSum](https://github.com/iakovosevdaimon/GreekSUM/), which is the first news summarization dataset for the Greek Language.
This dataset contains ~151,000 news articles collected from [News24/7](https://www.news247.gr/), belonging to various topics (i.e., society, politics, economy, culture or world news).
For more information see: [https://arxiv.org/abs/2304.00869](https://arxiv.org/abs/2304.00869)
## Training configuration
We trained `google/umt5-base` [580 million parameters (~2.37 GB)] on the GreekSUM train split using the following parameters:
* GPU batch size = 1
* Total training epochs = 10
* AdamW optimizer (e = 1e−8, β1 = 0.9 and β2 = 0.0999)
* Learning rate = 3e−4
* No warmup steps
* 32-bit floating precision
* Tokenization
* maximum input token length = 1024
* maximum output token length = 128
* padding = ‘max_length’
* truncation = True
**Note:** T5-based models use a multi-task architecture, the prefix *‘summarize: ’* was prepended in each training sample.
## Evaluation
**Approach**|**ROUGE-1**|**ROUGE-2**|**ROUGE-L**|**BERTScore**
------------|-----------|-----------|-----------|-------------
TextRank|18.10|5.76|13.84|68.39
GreekT5 (mt5-small)|14.84|1.68|12.39|72.96
GreekT5 (umt5-small)|25.49|12.03|21.32|72.86
**GreekT5 (umt5-base)**|**26.67**|**13.00**|**22.42**|73.41
GreekBART|17.43|2.44|15.08|**75.89**
### Example code
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
model_name = 'IMISLab/GreekT5-umt5-base-greeksum'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
summarizer = pipeline(
'summarization',
device = 'cpu',
model = model,
tokenizer = tokenizer,
max_new_tokens = 128,
truncation = True
)
text = 'Να πάρει ""ξεκάθαρη"" θέση σε σχέση με τον κίνδυνο μετάδοσης του κορονοϊού από τη Θεία Κοινωνία καλεί την κυβέρνηση και τον Πρωθυπουργό με ανακοίνωσή του τη Δευτέρα ο ΣΥΡΙΖΑ. ""Την ώρα που κλείνουν προληπτικά και ορθώς σχολεία, πανεπιστήμια, γήπεδα και λαμβάνονται ειδικά μέτρα ακόμη και για την ορκωμοσία της νέας Προέδρου της Δημοκρατίας, η Ιερά Σύνοδος της Εκκλησίας της Ελλάδος επιμένει ότι το μυστήριο της Θείας Κοινωνίας δεν εγκυμονεί κινδύνους μετάδοσης του κορονοϊού, καλώντας όμως τις ευπαθείς ομάδες να μείνουν σπίτι τους"", αναφέρει η αξιωματική αντιπολίτευση και συνεχίζει: ""Ωστόσο το πρόβλημα δεν είναι τι λέει η Ιερά Σύνοδος, αλλά τι λέει η Πολιτεία και συγκεκριμένα ο ΕΟΔΥ και το Υπουργείο Υγείας, που έχουν και την αποκλειστική κοινωνική ευθύνη για τη μη εξάπλωση του ιού και την προστασία των πολιτών"". ""Σε άλλες ευρωπαϊκές χώρες με εξίσου μεγάλο σεβασμό στη Χριστιανική πίστη και στο θρησκευτικό συναίσθημα, τα μυστήρια της Εκκλησίας είτε αναστέλλονται είτε τροποποιούν το τελετουργικό τους. Μόνο στη χώρα μας έχουμε το θλιβερό προνόμιο μιας πολιτείας που δεν τολμά να πει το αυτονόητο"", προσθέτει, τονίζοντας ότι ""η κυβέρνηση λοιπόν και το Υπουργείο Υγείας οφείλουν να πάρουν δημόσια μια ξεκάθαρη θέση και να μην θυσιάζουν τη δημόσια Υγεία στο βωμό του πολιτικού κόστους"". ""Συμφωνούν ότι η Θεία Κοινωνία δεν εγκυμονεί κινδύνους μετάδοσης του κορονοϊού; Δεν είναι θέμα ευσέβειας αλλά κοινωνικής ευθύνης. Και με τη Δημόσια υγεία δεν μπορούμε να παίζουμε"", καταλήγει η ανακοίνωση του γραφείου Τύπου του ΣΥΡΙΖΑ. *ΠΩΣ ΜΕΤΑΔΙΔΕΤΑΙ. Χρήσιμος οδηγός για να προστατευθείτε από τον κορονοϊό *ΤΑ ΝΟΣΟΚΟΜΕΙΑ ΑΝΑΦΟΡΑΣ. Ποια θα υποδέχονται τα κρούσματα κορονοϊού στην Ελλάδα. *ΤΑΞΙΔΙΑ. Κορονοϊός και αεροδρόμια: Τι να προσέξετε. *Η ΕΠΙΔΗΜΙΑ ΣΤΟΝ ΠΛΑΝΗΤΗ. Δείτε LIVE χάρτη με την εξέλιξη του κορονοϊού.'
output = summarizer('summarize: ' + text)
print(output[0]['summary_text'])
```
## Contact
If you have any questions/feedback about the model please e-mail one of the following authors:
```
[email protected]
[email protected]
[email protected]
```
## Citation
The model has been officially released with the article: [GreekT5: A Series of Greek Sequence-to-Sequence Models for News Summarization](https://arxiv.org/).
If you use the model, please cite the following:
```
@inproceedings{giarelis2024greekt5,
title={GreekT5: Sequence-to-Sequence Models for Greek News Summarization},
author={Giarelis, Nikolaos and Mastrokostas, Charalampos and Karacapilidis, Nikos},
booktitle={IFIP International Conference on Artificial Intelligence Applications and Innovations},
pages={60--73},
year={2024},
organization={Springer}
}
```
| null |
Non_BioNLP
|
# GreekT5 (umt5-base-greeksum)
A Greek news summarization model trained on [GreekSum](https://github.com/iakovosevdaimon/GreekSUM).
This model is part of a series of models trained as part of our research paper:
[Giarelis, N., Mastrokostas, C., & Karacapilidis, N. (2024) GreekT5: Sequence-to-Sequence Models for Greek News Summarization](https://link.springer.com/chapter/10.1007/978-3-031-63215-0_5) [\[arxiv\]](https://arxiv.org/abs/2311.07767)
The proposed models were trained and evaluated on the same dataset against [GreekBART](https://arxiv.org/abs/2304.00869).
For more information see the evaluation section below.
## Training dataset
The training dataset of `GreekT5-umt5-base-greeksum` is [GreekSum](https://github.com/iakovosevdaimon/GreekSUM/), which is the first news summarization dataset for the Greek Language.
This dataset contains ~151,000 news articles collected from [News24/7](https://www.news247.gr/), belonging to various topics (i.e., society, politics, economy, culture or world news).
For more information see: [https://arxiv.org/abs/2304.00869](https://arxiv.org/abs/2304.00869)
## Training configuration
We trained `google/umt5-base` [580 million parameters (~2.37 GB)] on the GreekSUM train split using the following parameters:
* GPU batch size = 1
* Total training epochs = 10
* AdamW optimizer (e = 1e−8, β1 = 0.9 and β2 = 0.0999)
* Learning rate = 3e−4
* No warmup steps
* 32-bit floating precision
* Tokenization
* maximum input token length = 1024
* maximum output token length = 128
* padding = ‘max_length’
* truncation = True
**Note:** T5-based models use a multi-task architecture, the prefix *‘summarize: ’* was prepended in each training sample.
## Evaluation
**Approach**|**ROUGE-1**|**ROUGE-2**|**ROUGE-L**|**BERTScore**
------------|-----------|-----------|-----------|-------------
TextRank|18.10|5.76|13.84|68.39
GreekT5 (mt5-small)|14.84|1.68|12.39|72.96
GreekT5 (umt5-small)|25.49|12.03|21.32|72.86
**GreekT5 (umt5-base)**|**26.67**|**13.00**|**22.42**|73.41
GreekBART|17.43|2.44|15.08|**75.89**
### Example code
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
model_name = 'IMISLab/GreekT5-umt5-base-greeksum'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
summarizer = pipeline(
'summarization',
device = 'cpu',
model = model,
tokenizer = tokenizer,
max_new_tokens = 128,
truncation = True
)
text = 'Να πάρει ""ξεκάθαρη"" θέση σε σχέση με τον κίνδυνο μετάδοσης του κορονοϊού από τη Θεία Κοινωνία καλεί την κυβέρνηση και τον Πρωθυπουργό με ανακοίνωσή του τη Δευτέρα ο ΣΥΡΙΖΑ. ""Την ώρα που κλείνουν προληπτικά και ορθώς σχολεία, πανεπιστήμια, γήπεδα και λαμβάνονται ειδικά μέτρα ακόμη και για την ορκωμοσία της νέας Προέδρου της Δημοκρατίας, η Ιερά Σύνοδος της Εκκλησίας της Ελλάδος επιμένει ότι το μυστήριο της Θείας Κοινωνίας δεν εγκυμονεί κινδύνους μετάδοσης του κορονοϊού, καλώντας όμως τις ευπαθείς ομάδες να μείνουν σπίτι τους"", αναφέρει η αξιωματική αντιπολίτευση και συνεχίζει: ""Ωστόσο το πρόβλημα δεν είναι τι λέει η Ιερά Σύνοδος, αλλά τι λέει η Πολιτεία και συγκεκριμένα ο ΕΟΔΥ και το Υπουργείο Υγείας, που έχουν και την αποκλειστική κοινωνική ευθύνη για τη μη εξάπλωση του ιού και την προστασία των πολιτών"". ""Σε άλλες ευρωπαϊκές χώρες με εξίσου μεγάλο σεβασμό στη Χριστιανική πίστη και στο θρησκευτικό συναίσθημα, τα μυστήρια της Εκκλησίας είτε αναστέλλονται είτε τροποποιούν το τελετουργικό τους. Μόνο στη χώρα μας έχουμε το θλιβερό προνόμιο μιας πολιτείας που δεν τολμά να πει το αυτονόητο"", προσθέτει, τονίζοντας ότι ""η κυβέρνηση λοιπόν και το Υπουργείο Υγείας οφείλουν να πάρουν δημόσια μια ξεκάθαρη θέση και να μην θυσιάζουν τη δημόσια Υγεία στο βωμό του πολιτικού κόστους"". ""Συμφωνούν ότι η Θεία Κοινωνία δεν εγκυμονεί κινδύνους μετάδοσης του κορονοϊού; Δεν είναι θέμα ευσέβειας αλλά κοινωνικής ευθύνης. Και με τη Δημόσια υγεία δεν μπορούμε να παίζουμε"", καταλήγει η ανακοίνωση του γραφείου Τύπου του ΣΥΡΙΖΑ. *ΠΩΣ ΜΕΤΑΔΙΔΕΤΑΙ. Χρήσιμος οδηγός για να προστατευθείτε από τον κορονοϊό *ΤΑ ΝΟΣΟΚΟΜΕΙΑ ΑΝΑΦΟΡΑΣ. Ποια θα υποδέχονται τα κρούσματα κορονοϊού στην Ελλάδα. *ΤΑΞΙΔΙΑ. Κορονοϊός και αεροδρόμια: Τι να προσέξετε. *Η ΕΠΙΔΗΜΙΑ ΣΤΟΝ ΠΛΑΝΗΤΗ. Δείτε LIVE χάρτη με την εξέλιξη του κορονοϊού.'
output = summarizer('summarize: ' + text)
print(output[0]['summary_text'])
```
## Contact
If you have any questions/feedback about the model please e-mail one of the following authors:
```
[email protected]
[email protected]
[email protected]
```
## Citation
The model has been officially released with the article: [GreekT5: A Series of Greek Sequence-to-Sequence Models for News Summarization](https://arxiv.org/).
If you use the model, please cite the following:
```
@inproceedings{giarelis2024greekt5,
title={GreekT5: Sequence-to-Sequence Models for Greek News Summarization},
author={Giarelis, Nikolaos and Mastrokostas, Charalampos and Karacapilidis, Nikos},
booktitle={IFIP International Conference on Artificial Intelligence Applications and Innovations},
pages={60--73},
year={2024},
organization={Springer}
}
```
|
{"language": ["el"], "license": "apache-2.0", "metrics": ["bertscore", "rouge"], "pipeline_tag": "summarization", "widget": [{"text": "Να πάρει \"\"ξεκάθαρη\"\" θέση σε σχέση με τον κίνδυνο μετάδοσης του κορονοϊού από τη Θεία Κοινωνία καλεί την κυβέρνηση και τον Πρωθυπουργό με ανακοίνωσή του τη Δευτέρα ο ΣΥΡΙΖΑ. \"\"Την ώρα που κλείνουν προληπτικά και ορθώς σχολεία, πανεπιστήμια, γήπεδα και λαμβάνονται ειδικά μέτρα ακόμη και για την ορκωμοσία της νέας Προέδρου της Δημοκρατίας, η Ιερά Σύνοδος της Εκκλησίας της Ελλάδος επιμένει ότι το μυστήριο της Θείας Κοινωνίας δεν εγκυμονεί κινδύνους μετάδοσης του κορονοϊού, καλώντας όμως τις ευπαθείς ομάδες να μείνουν σπίτι τους\"\", αναφέρει η αξιωματική αντιπολίτευση και συνεχίζει: \"\"Ωστόσο το πρόβλημα δεν είναι τι λέει η Ιερά Σύνοδος, αλλά τι λέει η Πολιτεία και συγκεκριμένα ο ΕΟΔΥ και το Υπουργείο Υγείας, που έχουν και την αποκλειστική κοινωνική ευθύνη για τη μη εξάπλωση του ιού και την προστασία των πολιτών\"\". \"\"Σε άλλες ευρωπαϊκές χώρες με εξίσου μεγάλο σεβασμό στη Χριστιανική πίστη και στο θρησκευτικό συναίσθημα, τα μυστήρια της Εκκλησίας είτε αναστέλλονται είτε τροποποιούν το τελετουργικό τους. Μόνο στη χώρα μας έχουμε το θλιβερό προνόμιο μιας πολιτείας που δεν τολμά να πει το αυτονόητο\"\", προσθέτει, τονίζοντας ότι \"\"η κυβέρνηση λοιπόν και το Υπουργείο Υγείας οφείλουν να πάρουν δημόσια μια ξεκάθαρη θέση και να μην θυσιάζουν τη δημόσια Υγεία στο βωμό του πολιτικού κόστους\"\". \"\"Συμφωνούν ότι η Θεία Κοινωνία δεν εγκυμονεί κινδύνους μετάδοσης του κορονοϊού; Δεν είναι θέμα ευσέβειας αλλά κοινωνικής ευθύνης. Και με τη Δημόσια υγεία δεν μπορούμε να παίζουμε\"\", καταλήγει η ανακοίνωση του γραφείου Τύπου του ΣΥΡΙΖΑ. *ΠΩΣ ΜΕΤΑΔΙΔΕΤΑΙ. Χρήσιμος οδηγός για να προστατευθείτε από τον κορονοϊό *ΤΑ ΝΟΣΟΚΟΜΕΙΑ ΑΝΑΦΟΡΑΣ. Ποια θα υποδέχονται τα κρούσματα κορονοϊού στην Ελλάδα. *ΤΑΞΙΔΙΑ. Κορονοϊός και αεροδρόμια: Τι να προσέξετε. *Η ΕΠΙΔΗΜΙΑ ΣΤΟΝ ΠΛΑΝΗΤΗ. Δείτε LIVE χάρτη με την εξέλιξη του κορονοϊού.", "example_title": "Politics"}, {"text": "Με άρθρο της με τίτλο \"\"Επιστρέψτε στη θεά Ίριδα το σώμα της\"\", η εφημερίδα Washington Post τάσσεται υπέρ της επιστροφής των γλυπτών του Παρθενώνα, στην Αθήνα, στην κοιτίδα του δυτικού πολιτισμού, τώρα που οι συνθήκες έχουν αλλάξει για την πάλαι ποτέ αυτοκρατορία της Αγγλίας. Αναφερόμενη στις διαφορετικές απόψεις Ελλήνων και Βρετανών για τα γλυπτά, η συντάκτρια του άρθρου, τονίζει ότι το αίτημα επιστροφής έχει αποκτήσει μεγαλύτερο βάρος τώρα που το Ηνωμένο Βασίλειο εγκαταλείπει την Ευρωπαϊκή Ένωση. «Όταν ο Τόμας Μπρους, έβδομος κόμης του Έλγιν, και 11ος κόμης του Κινκαρντίν, ταξίδεψε στην Ακρόπολη στις αρχές της δεκαετίας του 1800, ως Βρετανός πρέσβης στην Οθωμανική Αυτοκρατορία, ο Σουλτάνος λέγεται ότι του έδωσε την άδεια να \"\"αφαιρέσει μερικά τμήματα λίθων με παλιές επιγραφές και μορφές\"\". Ο λόρδος το εξέλαβε ως άδεια να αφαιρέσει, περίπου, 17 αγάλματα από τα αετώματα, 15 μετώπες, και 247 πόδια (περίπου 75 μέτρα) της ζωφόρου από τον Παρθενώνα για να τα φέρει στην καλή μας Αγγλία» αναφέρει στο άρθρο της η Washington Post. Και συνεχίζει λέγοντας ότι «οι καιροί όμως άλλαξαν και αυτό που θεωρούνταν πιο δικαιολογημένο τότε, σήμερα θεωρείται ευρέως ως μια ασυνείδητη πράξη». Σε μία έμμεση αναφορά στο Brexit, και υπεραμυνόμενη της επιστροφής των γλυπτών στην Ελλάδα, η συντάκτρια του άρθρου της Washington Post, διερωτάται: «Γιατί να παραμείνουν τα μάρμαρα στη φύλαξη της χώρας που επιμένει ότι ανήκει μόνο στον εαυτό της;» και σημειώνει: «Η Ελλάδα τιμάται σήμερα ως λίκνο του δυτικού πολιτισμού, και ποιοί παρά οι Έλληνες θα μπορούσαν να στεγάσουν τον πολιτισμό αυτό;».", "example_title": "Culture"}, {"text": "Το Διεθνές Νομισματικό Ταμείο (ΔΝΤ) προβλέπει ένα χρέος ρεκόρ των πλούσιων χωρών το 2014 και κρίνει \"\"πιθανό\"\" να υπάρξει επιπλέον συμβολή των πιο εύπορων προσώπων και των πολυεθνικών επιχειρήσεων σε μια μείωση των ελλειμμάτων, σύμφωνα με έκθεσή του η οποία δόθηκε σήμερα στη δημοσιότητα. \"\"Φαίνεται ότι υπάρχει ένα επαρκές περιθώριο σε πολλές ανεπτυγμένες χώρες για να αντληθούν επιπλέον έσοδα από τα πιο υψηλά εισοδήματα\"\", υπογραμμίζει το ΔΝΤ στην έκθεσή του για την δημοσιονομική επιτήρηση. Κατά μέσον όρο, το δημόσιο χρέος των ανεπτυγμένων χωρών αναμένεται να φτάσει το \"\"ιστορικό υψηλό\"\" του 110% του ΑΕΠ τους το 2014, δηλαδή θα βρίσκεται 35 μονάδες πιο πάνω από το ποσοστό του 2007, επισημαίνει το ΔΝΤ στην έκθεσή του. Με μια αναλογία χρέους/ΑΕΠ της τάξης του 242,3% που προβλέπεται να έχει το 2014, η Ιαπωνία αναμένεται να βρίσκεται πρώτη στον κατάλογο των υπερχρεωμένων ανεπτυγμένων χωρών, ακολουθούμενη από την Ελλάδα (174%), την Ιταλία (133,1%) και την Πορτογαλία (125,3%). Οι ΗΠΑ, οι οποίες έχουν παραλύσει από ένα δημοσιονομικό αδιέξοδο και απειλούνται από μια πιθανή στάση πληρωμών, θα δουν το χρέος τους να ανεβαίνει στο 107,3% του ΑΕΠ τους το 2014, δηλαδή θα βρίσκονται πολύ πιο μπροστά από την Γαλλία και το 94,8% στο οποίο αναμένεται ότι θα ανέρχεται την ερχόμενη χρονιά το χρέος της. Η δεύτερη οικονομική δύναμη του κόσμου, η Κίνα δίνει την εικόνα του καλού μαθητή με μια αναλογία χρέους/ΑΕΠ μόνον 20,9% την ερχόμενη χρονιά, σύμφωνα με το ΔΝΤ. \"\"Παρά τις προόδους στη μείωση των ελλειμμάτων, οι δημοσιονομικές αδυναμίες παραμένουν βαθιές στις ανεπτυγμένες χώρες\"\", επισημαίνεται στην έκθεση. Απέναντι σε αυτές τις ανισορροπίες, το ΔΝΤ εκφράζει την ανησυχία του καθώς βλέπει \"\"ένα φορολογικό σύστημα υπό πίεση\"\", το οποίο ευνοεί τον ανταγωνισμό μεταξύ των κρατών και επιτρέπει στους εύπορους φορολογούμενους και στις πολυεθνικές να ελαφρύνουν τους φόρους τους. Μόνον στις ΗΠΑ, το ΔΝΤ υπολογίζει σε 60 δισεκατομμύρια δολάρια τα έσοδα που φέρεται ότι χάνονται λόγω τεχνικών βελτιστοποίησης της φορολογίας των πολυεθνικών. Το ΔΝΤ επισημαίνει ότι οι τελευταίες δεκαετίες έχουν σηματοδοτηθεί από μια \"\"θεαματική άνοδο\"\" του πλούτου του \"\"1%\"\" των πιο πλούσιων, κυρίως στον αγγλοσαξονικό κόσμο, χωρίς ωστόσο η φορολογία να έχει προσαρμοστεί σε αυτήν την εξέλιξη. \"\"Σε πολλές χώρες θα ήταν πιθανό να επιβληθούν επιπλέον φόροι σε αυτούς που διαθέτουν τα πιο υψηλά εισοδήματα\"\", υπογραμμίζει το ΔΝΤ, το οποίο κρίνει εξάλλου \"\"συνετό\"\" τον υπολογισμό σε 4.500 δισεκατομμύρια δολάρια των διαθεσίμων που αποκρύπτονται από ιδιώτες σε φορολογικούς παραδείσους. Οι χώρες της Ομάδας των Είκοσι (G20), οι υπουργοί Οικονομικών των οποίων συναντώνται αυτήν την εβδομάδα στην Ουάσινγκτον, ξεκίνησαν πρόσφατα πρωτοβουλίες για την πάταξη της φοροδιαφυγής.", "example_title": "Economics"}], "model-index": [{"name": "IMISLab/GreekT5-umt5-base-greeksum", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "GreekSUM", "type": "greeksum", "config": "default", "split": "test"}, "metrics": [{"type": "rouge", "value": 26.67, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 13.0, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 22.42, "name": "ROUGE-L", "verified": true}, {"type": "bertscore", "value": 73.41, "name": "BERTScore", "verified": true}]}]}]}
|
task
|
[
"SUMMARIZATION"
] | 46,783 |
skywood/NHNDQ-nllb-finetuned-en2ko-ct2-float16
|
skywood
|
translation
|
[
"transformers",
"translation",
"en",
"ko",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | 2024-04-07T07:12:12Z |
2024-04-08T11:50:57+00:00
| 79 | 1 |
---
language:
- en
- ko
license: cc-by-4.0
tags:
- translation
---
I only did ctranslate2 convert to the original.
cmd> ct2-transformers-converter --model NHNDQ/nllb-finetuned-en2ko --quantization float16 --output_dir NHNDQ-nllb-finetuned-en2ko-ct2
All copyrights belong to the original authors and the CT model may be deleted upon request. Below is the original model information.
Original URL : https://huggingface.co/NHNDQ/nllb-finetuned-en2ko
## Model Details
* Model Description: Fine-tuned facebook/nllb-200-distilled-600M ct2 model
* Developed by: DanielHeo
*
## Original Model Details
* Model Description: Fine-tuned facebook/nllb-200-distilled-600M model
* Developed by: Jisu Kim, Juhwan Lee, TakSung Heo, and Minsu Jeong
* Model Type: Translation
* Language(s):
* Source Language: English
* Target Language: Korean
* License: CC-BY-4.0
## Dataset
* [AI-hub dataset](https://www.aihub.or.kr/)
## BLEU Score
* Deepl translation: 22.83
* Fine-tune nllb: 33.66
## Uses
This model can be used for translation and text-to-text generation
## Data Augmentation with backtranslation task
You can exercise korean data augmentation task with python package [KoTAN](https://github.com/KoJLabs/KoTAN/tree/main)
| null |
Non_BioNLP
|
I only did ctranslate2 convert to the original.
cmd> ct2-transformers-converter --model NHNDQ/nllb-finetuned-en2ko --quantization float16 --output_dir NHNDQ-nllb-finetuned-en2ko-ct2
All copyrights belong to the original authors and the CT model may be deleted upon request. Below is the original model information.
Original URL : https://huggingface.co/NHNDQ/nllb-finetuned-en2ko
## Model Details
* Model Description: Fine-tuned facebook/nllb-200-distilled-600M ct2 model
* Developed by: DanielHeo
*
## Original Model Details
* Model Description: Fine-tuned facebook/nllb-200-distilled-600M model
* Developed by: Jisu Kim, Juhwan Lee, TakSung Heo, and Minsu Jeong
* Model Type: Translation
* Language(s):
* Source Language: English
* Target Language: Korean
* License: CC-BY-4.0
## Dataset
* [AI-hub dataset](https://www.aihub.or.kr/)
## BLEU Score
* Deepl translation: 22.83
* Fine-tune nllb: 33.66
## Uses
This model can be used for translation and text-to-text generation
## Data Augmentation with backtranslation task
You can exercise korean data augmentation task with python package [KoTAN](https://github.com/KoJLabs/KoTAN/tree/main)
|
{"language": ["en", "ko"], "license": "cc-by-4.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 46,784 |
XSY/t5-small-finetuned-xsum
|
XSY
|
text2text-generation
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-11-09T13:40:46+00:00
| 123 | 0 |
---
{}
---
这个模型是根据这个一步一步完成的,如果想自己微调,请参考https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb
This model is completed step by step according to this, if you want to fine-tune yourself, please refer to https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.6901
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4500
- Rouge1: 28.6901
- Rouge2: 8.0102
- Rougel: 22.6087
- Rougelsum: 22.6105
- Gen Len: 18.824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6799 | 1.0 | 25506 | 2.4500 | 28.6901 | 8.0102 | 22.6087 | 22.6105 | 18.824 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
这个模型是根据这个一步一步完成的,如果想自己微调,请参考https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb
This model is completed step by step according to this, if you want to fine-tune yourself, please refer to https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.6901
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4500
- Rouge1: 28.6901
- Rouge2: 8.0102
- Rougel: 22.6087
- Rougelsum: 22.6105
- Gen Len: 18.824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6799 | 1.0 | 25506 | 2.4500 | 28.6901 | 8.0102 | 22.6087 | 22.6105 | 18.824 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{}
|
task
|
[
"SUMMARIZATION"
] | 46,785 |
tamilnlpSLIIT/whisper-ta
|
tamilnlpSLIIT
|
automatic-speech-recognition
|
[
"transformers",
"pytorch",
"jax",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"ta",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2024-05-19T16:46:12Z |
2024-05-19T16:46:12+00:00
| 7 | 0 |
---
language:
- ta
license: apache-2.0
metrics:
- wer
tags:
- whisper-event
model-index:
- name: Whisper Tamil Medium - Vasista Sai Lodagala
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: ta_in
split: test
metrics:
- type: wer
value: 6.97
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: ta
split: test
metrics:
- type: wer
value: 6.5
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tamil Medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Tamil data available from multiple publicly available ASR corpuses.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
In order to evaluate this model on an entire dataset, the evaluation codes available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides the scripts for faster inference using whisper-jax.
In order to infer a single audio file using this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-tamil-medium", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> import jax.numpy as jnp
>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-tamil-medium", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
## Training and evaluation data
Training Data:
- [IISc-MILE Tamil ASR Corpus](https://www.openslr.org/127/)
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#tamil-labelled--total-duration-is-116024-hours)
- [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
- [Microsoft Speech Corpus (Indian Languages)](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
- Babel ASR Corpus
Evaluation Data:
- [Microsoft Speech Corpus (Indian Languages) Test Set](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
- [IISc-MILE Test Set](https://www.openslr.org/127/)
- Babel Test Set
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 17500
- training_steps: 33892 (Initially set to 84730 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).
The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tamil Medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Tamil data available from multiple publicly available ASR corpuses.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
In order to evaluate this model on an entire dataset, the evaluation codes available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides the scripts for faster inference using whisper-jax.
In order to infer a single audio file using this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-tamil-medium", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> import jax.numpy as jnp
>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-tamil-medium", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
## Training and evaluation data
Training Data:
- [IISc-MILE Tamil ASR Corpus](https://www.openslr.org/127/)
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#tamil-labelled--total-duration-is-116024-hours)
- [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
- [Microsoft Speech Corpus (Indian Languages)](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
- Babel ASR Corpus
Evaluation Data:
- [Microsoft Speech Corpus (Indian Languages) Test Set](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
- [IISc-MILE Test Set](https://www.openslr.org/127/)
- Babel Test Set
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 17500
- training_steps: 33892 (Initially set to 84730 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).
The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
|
{"language": ["ta"], "license": "apache-2.0", "metrics": ["wer"], "tags": ["whisper-event"], "model-index": [{"name": "Whisper Tamil Medium - Vasista Sai Lodagala", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "google/fleurs", "type": "google/fleurs", "config": "ta_in", "split": "test"}, "metrics": [{"type": "wer", "value": 6.97, "name": "WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "mozilla-foundation/common_voice_11_0", "type": "mozilla-foundation/common_voice_11_0", "config": "ta", "split": "test"}, "metrics": [{"type": "wer", "value": 6.5, "name": "WER"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 46,786 |
fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp
|
fine-tuned
|
feature-extraction
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Query",
"Document",
"Retrieval",
"Description",
"JSON",
"custom_code",
"en",
"dataset:fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-16T13:40:02Z |
2024-06-16T13:40:17+00:00
| 5 | 0 |
---
datasets:
- fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Query
- Document
- Retrieval
- Description
- JSON
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
general domain
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| null |
Non_BioNLP
|
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
general domain
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
{"datasets": ["fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Query", "Document", "Retrieval", "Description", "JSON"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,787 |
HooshvareLab/bert-fa-base-uncased-clf-persiannews
|
HooshvareLab
|
text-classification
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-05-18T20:51:07+00:00
| 2,153 | 8 |
---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Text Classification [DigiMag, Persian News]
The task target is labeling texts in a supervised manner in both existing datasets `DigiMag` and `Persian News`.
### Persian News
A dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes.
1. Economic
2. International
3. Political
4. Science Technology
5. Cultural Art
6. Sport
7. Medical
| Label | # |
|:------------------:|:----:|
| Social | 2170 |
| Economic | 1564 |
| International | 1975 |
| Political | 2269 |
| Science Technology | 2436 |
| Cultural Art | 2558 |
| Sport | 1381 |
| Medical | 2085 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1B6xotfXCcW9xS1mYSBQos7OCg0ratzKC)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT |
|:-----------------:|:-----------:|:-----------:|:-----:|
| Persian News | 97.44* | 97.19 | 95.79 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Text Classification | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
| null |
Non_BioNLP
|
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Text Classification [DigiMag, Persian News]
The task target is labeling texts in a supervised manner in both existing datasets `DigiMag` and `Persian News`.
### Persian News
A dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes.
1. Economic
2. International
3. Political
4. Science Technology
5. Cultural Art
6. Sport
7. Medical
| Label | # |
|:------------------:|:----:|
| Social | 2170 |
| Economic | 1564 |
| International | 1975 |
| Political | 2269 |
| Science Technology | 2436 |
| Cultural Art | 2558 |
| Sport | 1381 |
| Medical | 2085 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1B6xotfXCcW9xS1mYSBQos7OCg0ratzKC)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT |
|:-----------------:|:-----------:|:-----------:|:-----:|
| Persian News | 97.44* | 97.19 | 95.79 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Text Classification | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
|
{"language": "fa", "license": "apache-2.0"}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,788 |
Unbabel/wmt20-comet-qe-da-v2-marian
|
Unbabel
|
translation
|
[
"translation",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"license:apache-2.0",
"region:us"
] | 2024-05-28T10:18:50Z |
2024-05-28T10:45:42+00:00
| 0 | 0 |
---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: apache-2.0
pipeline_tag: translation
---
Marian version of [wmt20-comet-qe-da-v2](https://huggingface.co/Unbabel/wmt20-comet-qe-da-v2).
Credits to Microsoft Translate Team!
# Paper
TBA
# License
Apache-2.0
# Usage
TBA
# Intended uses
Our model is intented to be used for **MT evaluation**.
Given a a triplet with (source sentence, translation, reference translation) outputs a single score between 0 and 1 where 1 represents a perfect translation.
# Languages Covered:
This model builds on top of XLM-R which cover the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskri, Scottish, Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western, Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
| null |
Non_BioNLP
|
Marian version of [wmt20-comet-qe-da-v2](https://huggingface.co/Unbabel/wmt20-comet-qe-da-v2).
Credits to Microsoft Translate Team!
# Paper
TBA
# License
Apache-2.0
# Usage
TBA
# Intended uses
Our model is intented to be used for **MT evaluation**.
Given a a triplet with (source sentence, translation, reference translation) outputs a single score between 0 and 1 where 1 represents a perfect translation.
# Languages Covered:
This model builds on top of XLM-R which cover the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskri, Scottish, Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western, Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
|
{"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh"], "license": "apache-2.0", "pipeline_tag": "translation"}
|
task
|
[
"TRANSLATION"
] | 46,789 |
antonkurylo/t5-small-billsum
|
antonkurylo
|
summarization
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-10-22T19:02:06Z |
2024-10-23T20:28:36+00:00
| 75 | 0 |
---
base_model: t5-small
library_name: transformers
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: t5-small-billsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-billsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9564
- Rouge1: 50.3551
- Rouge2: 29.3717
- Rougel: 39.4102
- Rougelsum: 43.6247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.5468 | 1.0 | 1185 | 2.0937 | 48.625 | 27.492 | 37.671 | 41.4628 |
| 2.2867 | 2.0 | 2370 | 2.0155 | 49.2547 | 28.248 | 38.39 | 42.3374 |
| 2.2241 | 3.0 | 3555 | 1.9796 | 49.8802 | 28.8333 | 38.8829 | 43.027 |
| 2.1925 | 4.0 | 4740 | 1.9620 | 50.07 | 28.9961 | 39.1086 | 43.3251 |
| 2.1791 | 5.0 | 5925 | 1.9576 | 50.2626 | 29.1819 | 39.2415 | 43.4781 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-billsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9564
- Rouge1: 50.3551
- Rouge2: 29.3717
- Rougel: 39.4102
- Rougelsum: 43.6247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.5468 | 1.0 | 1185 | 2.0937 | 48.625 | 27.492 | 37.671 | 41.4628 |
| 2.2867 | 2.0 | 2370 | 2.0155 | 49.2547 | 28.248 | 38.39 | 42.3374 |
| 2.2241 | 3.0 | 3555 | 1.9796 | 49.8802 | 28.8333 | 38.8829 | 43.027 |
| 2.1925 | 4.0 | 4740 | 1.9620 | 50.07 | 28.9961 | 39.1086 | 43.3251 |
| 2.1791 | 5.0 | 5925 | 1.9576 | 50.2626 | 29.1819 | 39.2415 | 43.4781 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
{"base_model": "t5-small", "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "t5-small-billsum", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 46,790 |
4yo1/llama3-pre1-ds-lora1
|
4yo1
|
translation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3-ko",
"translation",
"en",
"ko",
"dataset:recipes",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-18T00:57:07Z |
2024-07-18T01:07:19+00:00
| 2,088 | 0 |
---
datasets:
- recipes
language:
- en
- ko
library_name: transformers
license: mit
pipeline_tag: translation
tags:
- llama-3-ko
---
### Model Card for Model ID
### Model Details
Model Card: llama3-pre1-ds-lora1 with Fine-Tuning
Model Overview
Model Name: llama3-pre1-ds-lora1
Model Type: Transformer-based Language Model
Model Size: 8 billion parameters
by: 4yo1
Languages: English and Korean
### Model Description
llama3-pre1-ds-lora1 is a language model pre-trained on a diverse corpus of English and Korean texts.
This fine-tuning approach allows the model to adapt to specific tasks or datasets with a minimal number of additional parameters, making it efficient and effective for specialized applications.
### how to use - sample code
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer
config = AutoConfig.from_pretrained("4yo1/llama3-pre1-ds-lora1")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-ds-lora1")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-ds-lora1")
```
datasets:
- recipes
license: mit
| null |
Non_BioNLP
|
### Model Card for Model ID
### Model Details
Model Card: llama3-pre1-ds-lora1 with Fine-Tuning
Model Overview
Model Name: llama3-pre1-ds-lora1
Model Type: Transformer-based Language Model
Model Size: 8 billion parameters
by: 4yo1
Languages: English and Korean
### Model Description
llama3-pre1-ds-lora1 is a language model pre-trained on a diverse corpus of English and Korean texts.
This fine-tuning approach allows the model to adapt to specific tasks or datasets with a minimal number of additional parameters, making it efficient and effective for specialized applications.
### how to use - sample code
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer
config = AutoConfig.from_pretrained("4yo1/llama3-pre1-ds-lora1")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-ds-lora1")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-ds-lora1")
```
datasets:
- recipes
license: mit
|
{"datasets": ["recipes"], "language": ["en", "ko"], "library_name": "transformers", "license": "mit", "pipeline_tag": "translation", "tags": ["llama-3-ko"]}
|
task
|
[
"TRANSLATION"
] | 46,791 |
Helsinki-NLP/opus-mt-vi-fr
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T12:08:36+00:00
| 111 | 0 |
---
language:
- vi
- fr
license: apache-2.0
tags:
- translation
---
### vie-fra
* source group: Vietnamese
* target group: French
* OPUS readme: [vie-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md)
* model: transformer-align
* source language(s): vie
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.fra | 34.2 | 0.544 |
### System Info:
- hf_name: vie-fra
- source_languages: vie
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'fr']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: fra
- short_pair: vi-fr
- chrF2_score: 0.544
- bleu: 34.2
- brevity_penalty: 0.955
- ref_len: 11519.0
- src_name: Vietnamese
- tgt_name: French
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: fr
- prefer_old: False
- long_pair: vie-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| null |
Non_BioNLP
|
### vie-fra
* source group: Vietnamese
* target group: French
* OPUS readme: [vie-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md)
* model: transformer-align
* source language(s): vie
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.fra | 34.2 | 0.544 |
### System Info:
- hf_name: vie-fra
- source_languages: vie
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'fr']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: fra
- short_pair: vi-fr
- chrF2_score: 0.544
- bleu: 34.2
- brevity_penalty: 0.955
- ref_len: 11519.0
- src_name: Vietnamese
- tgt_name: French
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: fr
- prefer_old: False
- long_pair: vie-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
{"language": ["vi", "fr"], "license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 46,792 |
ahearnlr/bert-emotion
|
ahearnlr
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-05-30T15:22:59Z |
2023-05-30T15:30:44+00:00
| 13 | 0 |
---
datasets:
- tweet_eval
license: apache-2.0
metrics:
- precision
- recall
tags:
- generated_from_trainer
model-index:
- name: bert-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: precision
value: 0.7505623807659564
name: Precision
- type: recall
value: 0.7243031825553111
name: Recall
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1413
- Precision: 0.7506
- Recall: 0.7243
- Fscore: 0.7340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 |
| 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 |
| 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1413
- Precision: 0.7506
- Recall: 0.7243
- Fscore: 0.7340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 |
| 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 |
| 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["precision", "recall"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "validation", "args": "emotion"}, "metrics": [{"type": "precision", "value": 0.7505623807659564, "name": "Precision"}, {"type": "recall", "value": 0.7243031825553111, "name": "Recall"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,793 |
yam3333/paraphrase-xlm-r-multilingual-v1-finetuned
|
yam3333
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:383",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-xlm-r-multilingual-v1",
"base_model:finetune:sentence-transformers/paraphrase-xlm-r-multilingual-v1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-17T15:55:40Z |
2024-11-17T15:56:43+00:00
| 7 | 0 |
---
base_model: sentence-transformers/paraphrase-xlm-r-multilingual-v1
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:383
- loss:CosineSimilarityLoss
widget:
- source_sentence: ब्यवसायसञ्चालन नभएको सिफारिस गर्न सेवा शुल्क तथा दस्तुर कति लाग्छ
sentences:
- <unk>
- <unk>
- <unk>
- source_sentence: स्वास्थ्य संस्था दर्ता गर्न लाग्ने सेवा शुल्क कति ह
sentences:
- <unk>
- <unk>
- <unk>
- source_sentence: अस्थायीबसोबास सिफारिस गर्नको लागी आवश्यक कागजातहरु के के चाहिन्छ
sentences:
- <unk>
- <unk>
- <unk>
- source_sentence: पहिलो पल्ट सम्पत्ति कर तिर्न आवश्यक कागजातहरु के के हुन्
sentences:
- <unk>
- निःशुल्क
- <unk>
- source_sentence: आर्थिक अवस्था बलियो वा सम्पन्नता प्रमाणित गर्न आवश्यक कागजातहरु
के के हुन्
sentences:
- <unk>
- <unk>
- <unk>
---
# SentenceTransformer based on sentence-transformers/paraphrase-xlm-r-multilingual-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-xlm-r-multilingual-v1](https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-xlm-r-multilingual-v1](https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1) <!-- at revision 000e995b707ecea1b901208915ff3533783ec13d -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yam3333/paraphrase-xlm-r-multilingual-v1-finetuned")
# Run inference
sentences = [
'आर्थिक अवस्था बलियो वा सम्पन्नता प्रमाणित गर्न आवश्यक कागजातहरु के के हुन्',
'<unk>',
'<unk>',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 383 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 383 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 9 tokens</li><li>mean: 17.3 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 3.0 tokens</li><li>max: 3 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------------------------------------------------|:-------------------|:-----------------|
| <code>विज्ञापन कर तिर्न लाग्ने समय कति हो</code> | <code><unk></code> | <code>1.0</code> |
| <code>संरक्षक सिफारिस (संस्थागत) गर्न कति समय लाग्छ</code> | <code><unk></code> | <code>1.0</code> |
| <code>विपन्नविद्यार्थी छात्रबृत्ति सिफारिस गर्नु परेमा सेवा शुल्क कति हो</code> | <code><unk></code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.1.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on sentence-transformers/paraphrase-xlm-r-multilingual-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-xlm-r-multilingual-v1](https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-xlm-r-multilingual-v1](https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1) <!-- at revision 000e995b707ecea1b901208915ff3533783ec13d -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yam3333/paraphrase-xlm-r-multilingual-v1-finetuned")
# Run inference
sentences = [
'आर्थिक अवस्था बलियो वा सम्पन्नता प्रमाणित गर्न आवश्यक कागजातहरु के के हुन्',
'<unk>',
'<unk>',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 383 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 383 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 9 tokens</li><li>mean: 17.3 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 3.0 tokens</li><li>max: 3 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------------------------------------------------|:-------------------|:-----------------|
| <code>विज्ञापन कर तिर्न लाग्ने समय कति हो</code> | <code><unk></code> | <code>1.0</code> |
| <code>संरक्षक सिफारिस (संस्थागत) गर्न कति समय लाग्छ</code> | <code><unk></code> | <code>1.0</code> |
| <code>विपन्नविद्यार्थी छात्रबृत्ति सिफारिस गर्नु परेमा सेवा शुल्क कति हो</code> | <code><unk></code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.1.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-xlm-r-multilingual-v1", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:383", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "ब्यवसायसञ्चालन नभएको सिफारिस गर्न सेवा शुल्क तथा दस्तुर कति लाग्छ", "sentences": ["<unk>", "<unk>", "<unk>"]}, {"source_sentence": "स्वास्थ्य संस्था दर्ता गर्न लाग्ने सेवा शुल्क कति ह", "sentences": ["<unk>", "<unk>", "<unk>"]}, {"source_sentence": "अस्थायीबसोबास सिफारिस गर्नको लागी आवश्यक कागजातहरु के के चाहिन्छ", "sentences": ["<unk>", "<unk>", "<unk>"]}, {"source_sentence": "पहिलो पल्ट सम्पत्ति कर तिर्न आवश्यक कागजातहरु के के हुन्", "sentences": ["<unk>", "निःशुल्क", "<unk>"]}, {"source_sentence": "आर्थिक अवस्था बलियो वा सम्पन्नता प्रमाणित गर्न आवश्यक कागजातहरु के के हुन्", "sentences": ["<unk>", "<unk>", "<unk>"]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,794 |
mahsaBa76/bge-base-custom-matryoshka
|
mahsaBa76
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:278",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-07T19:28:48Z |
2025-01-07T19:28:58+00:00
| 7 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:278
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: How does Bitcoin's P2P network prevent malicious nodes from flooding
the network with invalid blocks or transactions?
sentences:
- 'paper-title: The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments
\subsection*{8.4 Payment Routing}
It is theoretically possible to build a route map implicitly from observing 2
-of-2 multisigs on the blockchain to build a routing table. Note, however, this
is not feasible with pay-to-script-hash transaction outputs, which can be resolved
out-of-band from the bitcoin protocol via a third party routing service. Building
a routing table will become necessary for large operators (e.g. BGP, Cjdns). Eventually,
with optimizations, the network will look a lot like the correspondent banking
network, or Tier-1 ISPs. Similar to how packets still reach their destination
on your home network connection, not all participants need to have a full routing
table. The core Tier-1 routes can be online all the time - while nodes at the
edges, such as average users, would be connected intermittently.
Node discovery can occur along the edges by pre-selecting and offering partial
routes to well-known nodes.
\subsection*{8.5 Fees}
Lightning Network fees, which differ from blockchain fees, are paid directly between
participants within the channel. The fees pay for the time-value of money for
consuming the channel for a determined maximum period of time, and for counterparty
risk of non-communication.
Counterparty risk for fees only exist with one''s direct channel counterparty.
If a node two hops away decides to disconnect and their transaction gets broadcast
on the blockchain, one''s direct counterparties should not broadcast on the blockchain,
but continue to update via novation with a new Commitment Transaction. See the
Decrementing Timelocks entry in the HTLC section for more information about counterparty
risk.
The time-value of fees pays for consuming time (e.g. 3 days) and is conceptually
equivalent to a gold lease rate without custodial risk; it is the time-value for
using up the access to money for a very short duration. Since certain paths may
become very profitable in one direction, it is possible for fees to be negative
to encourage the channel to be available for those profitable paths.
\section*{9 Risks}
The primary risks relate to timelock expiration. Additionally, for core nodes
and possibly some merchants to be able to route funds, the keys must be held online
for lower latency. However, end-users and nodes are able to keep their private
keys firewalled off in cold storage.
\subsection*{9.1 Improper Timelocks}
Participants must choose timelocks with sufficient amounts of time. If insufficient
time is given, it is possible that timelocked transactions believed to be invalid
will become valid, enabling coin theft by the counterparty. There is a trade-off
between longer timelocks and the time-value of money. When writing wallet and
Lightning Network application software, it is necessary to ensure that sufficient
time is given and users are able to have their transactions enter into the blockchain
when interacting with non-cooperative or malicious channel counterparties.
\subsection*{9.2 Forced Expiration Spam}
Forced expiration of many transactions may be the greatest systemic risk when
using the Lightning Network. If a malicious participant creates many channels
and forces them all to expire at once, these may overwhelm block data capacity,
forcing expiration and broadcast to the blockchain. The result would be mass spam
on the bitcoin network. The spam may delay transactions to the point where other
locktimed transactions become valid.
This may be mitigated by permitting one transaction replacement on all pending
transactions. Anti-spam can be used by permitting only one transaction replacement
of a higher sequence number by the inverse of an even or odd number. For example,
if an odd sequence number was broadcast, permit a replacement to a higher even
number only once. Transactions would use the sequence number in an orderly way
to replace other transactions. This mitigates the risk assuming honest miners.
This attack is extremely high risk, as incorrect broadcast of Commitment Transactions
entail a full penalty of all funds in the channel.
Additionally, one may attempt to steal HTLC transactions by forcing a timeout
transaction to go through when it should not. This can be easily mitigated by
having each transfer inside the channel be lower than the total transaction fees
used. Since transactions are extremely cheap and do not hit the blockchain with
cooperative channel counterparties, large transfers of value can be split into
many small transfers. This attempt can only work if the blocks are completely
full for a long time. While it is possible to mitigate it using a longer HTLC
timeout duration, variable block sizes may become common, which may need mitigations.
If this type of transaction becomes the dominant form of transactions which are
included on the blockchain, it may become necessary to increase the block size
and run a variable blocksize structure and timestop flags as described in the
section below. This can create sufficient penalties and disincentives to be highly
unprofitable and unsuccessful for attackers, as attackers lose all their funds
from broadcasting the wrong transaction, to the point where it will never occur.'
- 'paper-title: OmniLedger: A Secure, Scale-Out, Decentralized Ledger via Sharding
Fig. 11: Bootstrap bandwidth consumption with state blocks.\\[0pt]
to create the UTXO state. For this experiment, we reconstructed Bitcoin''s blockchain
[5], [41] and created a parallel OmniLedger blockchain with weekly state blocks.
Figure 11 depicts the bandwidth overhead of a validator that did not follow the
state for the first 100 days. As we can see, the state block approach is better
if the validator is outdated for more than 19 days or 2736 Bitcoin blocks.
The benefit might not seem substantial for Bitcoin, but in OmniLedger, 2736 blocks
are created in less than 8 hours, meaning that for one day-long epochs, the state
block approach is significantly better. If a peak throughput is required and 16
MB blocks are deployed, we expect reduced bandwidth consumption close to two orders
of magnitude.
\section*{IX. Related Work}
The growing interests in scaling blockchains have produced a number of prominent
systems that we compare in Table IV. ByzCoin [32] is a first step to scalable
BFT consensus, but cannot scale-out. Elastico is the first open scale-out DL,
however, it suffers from performance and security challenges that we have already
discussed in Section II. RSCoin [16] proposes sharding as a scalable approach
for centrally banked cryptocurrencies. RSCoin relies on a trusted source of randomness
for sharding and auditing, making its usage problematic in trustless settings.
Furthermore, to validate transactions, each shard has to coordinate with the client
and instead of running BFT, RSCoin uses a simple two-phase commit, assuming that
safety is preserved if the majority of validators is honest. This
TABLE IV: Comparison of Distributed Ledger Systems
\begin{center}
\begin{tabular}{ccccccc}
\hline
System & Scale-Out & \begin{tabular}{c}
Cross-Shard \\
Transaction Atomicity \\
\end{tabular} & State Blocks & \begin{tabular}{c}
Measured Scalability \\
(\# of Validators) \\
\end{tabular} & \begin{tabular}{c}
Estimated \\
Time to Fail \\
\end{tabular} & \begin{tabular}{c}
Measured \\
Latency \\
\end{tabular} \\
\hline
RSCoin [16] & In Permissioned & Partial & No & 30 & N/A & 1 sec \\
Elastico [34] & In PoW & No & No & 1600 & 1 hour & 800 sec \\
ByzCoin [32] & No & N/A & No & 1008 & 19 years & 40 sec \\
Bitcoin-NG [21] & No & N/A & No & 1000 & N/A & 600 sec \\
PBFT [9], [11] & No & N/A & No & 16 & N/A & 1 sec \\
Nakamoto [36] & No & N/A & No & 4000 & N/A & 600 sec \\
OmniLedger & Yes & Yes & Yes & 2400 & 68.5 years & 1.5 sec \\
\hline
\end{tabular}
\end{center}
approach, however, does not protect from double spending attempts by a malicious
client colluding with a validator.
In short, prior solutions [16], [32], [34] achieve only two out of the three desired
properties; decentralization, long-term security, and scale-out, as illustrated
in Figure 1. OmniLedger overcomes this issue by scaling out, as far as throughput
is concerned, and by maintaining consistency to the level required for safety,
without imposing a total order.
Bitcoin-NG scales Bitcoin without changing the consensus algorithm by observing
that the PoW process does not have to be the same as the transaction validation
process; this results in two separate timelines: one slow for PoW and one fast
for transaction validation. Although Bitcoin-NG significantly increases the throughput
of Bitcoin, it is still susceptible to the same attacks as Bitcoin [24], [3].
Other efforts to scale blockchains include: Tendermint [9], a protocol similar
to PBFT for shard-level consensus that does not scale due to its similarities
to PBFT, and the Lightning Network [40], an off-chain payment protocol for Bitcoin
(also compatible to OmniLedger); it limits the amount of information committed
to the blockchain.'
- "Datatype: lecture_note, Title: Lecture 4: Peer to Peer Networking for Blockchains\n\
\nHow does broadcast take only $O(\\log N)$ steps? We first need to understand\
\ the gossip-flooding-based broadcast protocol. The flooding protocol mimics the\
\ spread of an epidemic. Once a node is ``infected\", it infects its peers and\
\ forever stay's infected. It is easy to see that the spread of information will\
\ happen exponentially; hence the information will take $O(\\log N)$ hops to spread\
\ to all nodes. To formally understand the spread, we note that $d$-regular graphs\
\ with $d\\geq 3$ are an \\textit{expander graph} for large sizes ($|V|$) with\
\ high probability. An expander graph is a connected but sparse graph ($|E|=O(|V|)$)\
\ with the following property: $|\\partial A| \\geq \\epsilon|A|$ for any connected\
\ sub-graph $A$ with $|A|<0.5|V|$. Here, $|\\partial A|$ refers to the number\
\ of vertices outside $A$ with at least one neighbor in $A$. A gossip message\
\ originates with $A(0)$ as the broadcasting node with $|A(0)|=1$, in the next\
\ hop, it will spread to $\\partial A(0)$ with $|A(1)|\\geq (1+\\epsilon)|A(0)|$.\
\ This recursion continues and we have $|A(k)|\\geq(1+\\epsilon)^kA(0)$. Thus,\
\ the number of steps to reach half the number of nodes is logarithmic in the\
\ number of nodes. It can be shown that the other half of the nodes can also be\
\ covered in $O(\\log N)$ time.\n\n\n%Engineering issues (peer discovery, bootstrap,\
\ churn). Implementation connections (to the lab experiment). Validation of tx,\
\ blocks. How does that impact networking? What about skipping validation and\
\ doing cut-through routing? Compact blocks. (RR)\n\n\\section*{Bitcoin P2P network:\
\ A systems view}\nIn Bitcoin, peers connect to each other and communicate using\
\ the TCP protocol. The codebase allows for eight outgoing connections and up\
\ to 117 incoming connections. The network has a high churn rate (rate at which\
\ users enter/leave the system); hence, the node must be ready to connect to new\
\ peers. Moreover, to ensure that the peers we are connecting to are chosen randomly,\
\ the node keeps a large list of nodes running Bitcoin in the form of their (IP,\
\ port) tuple and establishes a connection to one of them randomly when a slot\
\ opens up. \n\nHow does a node bootstrap its list of peers? This happens by\
\ connecting to a set of DNS seed nodes. The seed nodes are not heavily decentralized;\
\ hence completely relying on the peer list provided by them is not advisable.\
\ On connecting to the initial set of peers, a node asks its neighbors for their\
\ peer list using {\\tt getAddr} and {\\tt Addr} messages. The node keeps refreshing\
\ its peer list regularly by exchanging peer lists with its peers. \n\nTransmission\
\ of all block and transactions happen through the inventory message {\\tt inv},\
\ on receiving an {\\tt inv} message the node checks if it has the block or the\
\ transaction in its local storage. If not, it sends the {\\tt getData} message\
\ to fetch those blocks and transactions from the peer. Since block sizes are\
\ relatively large, block transmission can optionally happen in 2 stages. On receiving\
\ the {\\tt inv} message, the node may ask for headers first using {\\tt getHeaders}\
\ and ask for complete blocks only if a header chain is established. This header-first\
\ block transmission increases queries but can decrease the net bandwidth usage.\
\ It may also prevent nodes from accepting PoW invalid blocks since the node can\
\ check from the header whether PoW is valid. \n\nWe saw in the previous lecture\
\ that some nodes might be malicious. A question that may arise is: what stops\
\ malicious nodes from flooding the network with invalid blocks and transactions\
\ (i.e., with invalid PoW and/or signatures)? Such flooding will saturate the\
\ network and increase transmission delay to unacceptable levels. Such an attack\
\ is prevented by a simple design decision, forward message to peers only after\
\ validating the message; i.e., a node sends an {\\tt inv} block message to its\
\ peers only after validating the block. If the adversary creates an invalid block,\
\ the block will not be propagated beyond one honest node. Additionally, nodes\
\ maintain their peers' reputation using some predefined heuristics; if a peer\
\ misbehaves (say by sending a transaction with invalid signatures), its reputation\
\ is downgraded and after a certain lower threshold is disconnected."
- source_sentence: How does the blockchain protocol ensure that all honest players
converge on the same chain?
sentences:
- "paper-title: Blockchain CAP Theorem Allows User-Dependent Adaptivity and Finality\n\
\nDefinition 3 (Potential starting value for period $p$ ). A value $v$ that has\
\ been next-voted by $t+1$ honest nodes for period $p-1$.\n\nDefinition 4 (Committed\
\ value for period $p$ ). A value $v$ that has been cert-voted by $2 t+1$ nodes\
\ for period $p$.\n\nDefinition 5 (Potentially committed value for period $p$\
\ ). A value $v$ that has been cert-voted by $t+1$ honest nodes for period $p$.\n\
\nAlthough we slightly altered Algorand BA protocol (which is highlighted in red\
\ in Appendix A), we note that our modification does not break the safety of the\
\ protocol or cause any deadlock in Lemma 1 and Lemma 2, At a high level, the\
\ validity check only causes less soft-votes from honest nodes, which is indistinguishable\
\ with the case where the leader is malicious and no value receives at least $2\
\ t+1$ soft-votes in some period. Therefore, the safety and deadlock-free property\
\ remain.\n\nLemma 1 (Asynchronous Safety, CP0). Even when the network is partitioned,\
\ the protocol ensures safety of the system so that no two honest nodes will finish\
\ one iteration of the protocol with different outputs.\n\nProof. The following\
\ properties hold even during a network partition.\n\n\\begin{itemize}\n \\item\
\ By quorum intersection, as each honest node only soft-votes one value, then\
\ at most one value is committed or potentially committed for each period $p$\
\ in one iteration.\n \\item If a value $v$ is potentially committed for period\
\ $p$, then only $v$ can receive $2 t+1$ next-votes for period $p$. Thus, the\
\ unique potential starting value for period $p+1$ is $v$.\n \\item If a period\
\ $p$ has a unique potential starting value $v \\neq \\perp$, then only $v$ can\
\ be committed for period $p$. Moreover, honest nodes will only next-vote $v$\
\ for period $p$, so the unique potential starting value for period $p+1$ is also\
\ $v$. Inductively, any future periods $p^{\\prime}>p$ can only have $v$ as a\
\ potential starting value. Thus, once a value is potentially committed, it becomes\
\ the unique value that can be committed or potentially committed for any future\
\ period, and no two honest nodes will finish this iteration of the protocol with\
\ different outputs.\n\\end{itemize}\n\nLemma 2 (Asynchronous Deadlock-freedom).\
\ As long as messages will be delivered eventually, an honest node can always\
\ leave period p, either by entering a higher period or meeting the halting condition\
\ for the current iteration.\n\nProof. We first prove that there can never exist\
\ $2 t+1$ next-votes for two different non- $\\perp$ values from the same period\
\ $p$ by induction.\n\nStart with $p=1$. Note that every honest node sets $s t_{i}^{1}=\\\
perp$ and at most one value (say $v$ ) could receive more than $2 t+1$ soft-votes.\
\ Therefore only value $v$ and $\\perp$ could potentially receive more than $2\
\ t+1$ next-votes in period 1 . Note that it is possible that both $v$ and $\\\
perp$ receive more than $2 t+1$ next-votes: all the honest nodes could next-vote\
\ for $\\perp$ in Step 4 and then next-vote for $v$ in Step 5 after seeing the\
\ $2 t+1$ soft-votes for $v$.\n\nAssume that the claim holds for period $p-1(p\
\ \\geq 2)$ : there exist at most two values each of which has $2 t+1$ next-votes\
\ for period $p-1$, and one of them is necessarily $\\perp$. Then there are three\
\ possible cases:"
- 'paper-title: A Scalable Proof-of-Stake Blockchain in the Open Setting * \\ (or,
How to Mimic Nakamoto''s Design via Proof-of-Stake)
Common prefix. Our analysis is based on the common prefix analysis of core-chain.
The core-chain can achieve common prefix as we discussed. The opportunity for
malicious players to destroy common prefix probability is to generate different
blockchain for the same core-chain. For the malicious players can sign different
blocks for one block-core, this will allow him to fork the blockchain. So the
malicious players can fork the blockchain when they are chosen to generate block.
However, with the property of hash function, the malicious players can not generate
two blocks with same hash value. When an honest player is chosen to extend a block,
he will only support one blockchain. Then all of the honest players will converge
on one blockchain.\\
Corollary 6.4 (Common prefix). Consider the blockchain protocol $\Pi^{\text {main
}}$. Consider $\alpha^{\star}=\lambda \beta^{\star}$, $\lambda>1$, and $\delta>0$.
Consider two honest PoS-players, P in round $r$ and $\mathrm{P}^{\prime}$ in round
$r^{\prime}$, with the local best PoS blockchains $\tilde{\mathcal{C}}, \tilde{\mathcal{C}}^{\prime}$,
respectively, where $r^{\prime} \geq r$. Then we have $\operatorname{Pr}\left[\tilde{\mathcal{C}}[1,
\ell] \preceq \tilde{\mathcal{C}}^{\prime}\right] \geq 1-e^{-\Omega(\kappa)}$,
where $\ell=\operatorname{len}(\mathcal{C})-\Theta(\kappa)$.
Proof. As we discussed, $\tilde{\mathcal{C}}$ and $\tilde{\mathcal{C}}^{\prime}$
are associated with core-chains $\mathcal{C}$ and $\mathcal{C}^{\prime}$ respectively.
From Corollary 5.6 we know that $\operatorname{Pr}\left[\mathcal{C}[1, \ell] \preceq
\mathcal{C}^{\prime}\right] \geq 1-e^{-\Omega(\kappa)}$.
Based on the assumption that $\alpha^{\star}=\lambda \beta^{\star}$ and $\lambda>1$,
we can have that the malicious players are not able to generate more than $\Theta(\kappa)$
blocks before an honest player is chosen to generate block with high probability.
All of the honest players will converge on the same chain. Put them together,
we have $\operatorname{Pr}\left[\tilde{\mathcal{C}}[1, \ell] \preceq \tilde{\mathcal{C}}^{\prime}\right]
\geq 1-e^{-\Omega(\kappa)}$ where $\ell=\operatorname{len}(\mathcal{C})-\Theta(\kappa)$.
Chain soundness. A new player will accept a blockchain (in which the corresponding
corechain is included). The proof idea for achieving chain soundness property
of our blockchain protocol directly follows that for the core-chain protocol.
We have the following statement.\\
Corollary 6.5 (Chain soundness). Consider the blockchain protocol $\Pi^{\text
{main }}$. Consider for every round, $\alpha=\lambda \beta, \lambda>1$, and $\delta>0$.
There are two honest PoS-players, $\mathrm{P}^{\prime}$ and $\mathrm{P}^{\prime
\prime}$ in round $r$, with the local best PoS blockchains $\tilde{\mathcal{C}}^{\prime}$
and $\tilde{\mathcal{C}}^{\prime \prime}$, respectively. Let $\mathrm{P}^{\prime}$
be a new player and $\mathrm{P}^{\prime \prime}$ be an existing player in round
$r$. Then we have $\tilde{\mathcal{C}}^{\prime}[\neg \kappa] \preceq \tilde{\mathcal{C}}^{\prime
\prime}$ and $\tilde{\mathcal{C}}^{\prime \prime}[\neg \kappa] \preceq \tilde{\mathcal{C}}^{\prime}$.'
- "Datatype: lecture_note, Title: Lecture 9: Scaling Latency\n\n\\begin{figure}\n\
\\begin{center}\n\\includegraphics[width=\\textwidth]{figures/Prism_main.pdf}\n\
\\end{center}\n\n\\caption{Factorizing the blocks into three types of blocks:\
\ proposer blocks, transaction blocks and voter blocks.}\n\\label{fig:prism}\n\
\n\\end{figure}\n\nJust as in {\\sf Prism 1.0}, the \\textit{proposer} blocktree\
\ in {\\sf Prism} anchors the blockchain. Each proposer block contains a list\
\ of reference links to \\textit{transaction} blocks that contain transactions,\
\ as well as a single reference to a parent proposer block. Honest nodes mine\
\ proposer blocks following the longest chain rule in the proposer tree.\nWe define\
\ the *level* of a proposer block as its distance from the genesis proposer block,\
\ and the *height* of the proposer tree as the maximum level that contains any\
\ proposer blocks. To determine the ordering of proposer blocks (and thus transaction\
\ blocks and transactions), we elect one \\textit{leader} proposer block from\
\ each level. The sequence of leader blocks up to the height of the proposer tree\
\ is called the \\textit{leader sequence}, and is determined by the *voter* chains.\
\ Note that the leader blocks do not need to follow the chain structure of the\
\ proposer blocks because otherwise deadlock may occur if conflicting blocks (i.e.,\
\ two proposer blocks not on one chain) are determined as leader blocks. \n\n\
In {\\sf Prism}, there are $m$ voter chains, where $m \\gg 1$ is a fixed parameter\
\ chosen by the system designer. The larger the $m$, the more parallel the voting\
\ process and hence the shorter the latency of confirmation. In general $m$ is\
\ chosen as large as network bandwidth and memory management issues are manageable.\
\ For example, $m=1000$ is chosen in the \\href{https://arxiv.org/pdf/1909.11261.pdf}{full-stack\
\ implementation} of Prism. New voter blocks are mined on each voter chain according\
\ to the longest chain rule. A voter block votes for a proposer block by containing\
\ a reference link to that proposer block, with the requirements that: (1) a vote\
\ is valid only if the voter block is in the longest chain of its voter tree;\
\ (2) each voter chain votes for one and only one proposer block at each level;\
\ (3) each voter block votes for all the proposer levels that have not been voted\
\ by its parent. The leader block at each level is the one that has the largest\
\ number of votes among all the proposer blocks at the same level (ties can be\
\ broken by the hash of the proposer blocks). The elected leader blocks then provide\
\ a unique ordering of the transaction blocks to form the final ledger. \n\n{\\\
sf Prism} also uses cryptographic sortition to prevent the adversary from focusing\
\ its mining power on a specific type of blocks or on a specific voter chain.\
\ A miner first forms a ``superblock\" containing $m+2$ parts: a transaction block,\
\ a proposer block and a voter block on the $i$-th voter tree ($1\\leq i \\leq\
\ m$). We say a superblock is successfully mined if \n\\begin{equation}\n \
\ Hash({\\sf nonce}, {\\sf superblock}) < T_{\\rm tx} + T_{\\rm prop} + m T_{\\\
rm v}. \n\\label{eq:sortition}\n\\end{equation}\nFurther, every successfully mined\
\ superblock is identified as a transaction block, a proposer block or a voter\
\ block based on the hash output: \n\n\n* identify the superblock as a proposer\
\ block if the hash output is less than $T_{\\rm prop}$;\n* identify the superblock\
\ as a transaction block if the hash output is in the range $[T_{\\rm prop}, T_{\\\
rm tx} + T_{\\rm prop})$;\n* identify the superblock as a voter block on the\
\ $i$-th voter tree ($1\\leq i \\leq m$) if the hash output is in the range $[T_{\\\
rm tx} + T_{\\rm prop} + (i-1) T_{\\rm v}, T_{\\rm tx} + T_{\\rm prop} + i T_{\\\
rm v} )$;"
- source_sentence: What is the role of the 2/3-GHOST function in the GRANDPA finality
gadget?
sentences:
- 'paper-title: GRANDPA: a Byzantine Finality Gadget
\subsection*{2.3 Preliminaries}
Network model : We will be using the partially synchronous network model introduced
by 7] and in particular the gossip network variant used in [5]. We assume that
any message sent or received by an honest participant reaches all honest participants
within time $T$, but possibly only after some Global Synchronisation Time GST.
Concretely, any message sent or received by some honest participant at time $t$
is received by all honest participants by time GST $+T$ at the latest.
Voters: For each voting step, there is a set of $n$ voters. We will frequently
need to assume that for each such step, at most $f<n / 3$ voters are Byzantine.
We need $n-f$ of voters to agree on finality. Whether or not block producers ever
vote, they will need to be participants who track the state of the protocol.
Votes: A vote is a block hash, together with some metadata such as round number
and the type of vote, such as prevote or precommit, all signed with a voter''s
private key.
Rounds: Each participant has their own idea of what is the current round number.
Every prevote and precommit has an associated round number. Honest voters only
vote once (for each type of vote) in each round and do not vote in earlier rounds
after later ones. Participants need to keep track of which block they see as currently
being the latest finalised block and an estimate of which block could have been
finalised in the last round.
For block $B$, we write chain $(B)$ for the chain whose head is $B$. The block
number, $n(B)$ of a block $B$ is the length of chain $(B)$. For blocks $B^{\prime}$
and $B$, we say $B$ is later than $B^{\prime}$ if it has a higher block number.
We write $B>B^{\prime}$ or that $B$ is descendant of $B^{\prime}$ for $B, B^{\prime}$
appearing in the same blockchain with $B^{\prime}$ later i.e. $B^{\prime} \in$
chain $(B)$ with $n(B)>n\left(B^{\prime}\right) . B \geq B^{\prime}$ and $B \leq
B^{\prime}$ are similar except allowing $B=B^{\prime}$. We write $B \sim B^{\prime}$
or $B$ and $B^{\prime}$ are on the same chain if $B<B^{\prime}, B=B^{\prime}$
or $B>B^{\prime}$; and $B \nsim B^{\prime}$ or $B$ and $B^{\prime}$ are not on
the same chain if there is no such chain.
Blocks are ordered as a tree with the genesis block as root. So any two blocks
have a common ancestor but two blocks not on the same chain do not have a common
descendant. A vote $v$ for a block $B$ by a voter $V$ is a message signed by $V$
containing the blockhash of $B$ and meta-information like the round numbers and
the type of vote.
A voter equivocates in a set of votes $S$ if they have cast multiple different
votes in $S$. We call a set $S$ of votes safe if the number of voters who equivocate
in $S$ is at most $f$. We say that $S$ has a supermajority for a block $B$ if
the set of voters who either have a vote for blocks $\geq B$ or equivocate in
$S$ has size at least $(n+f+1) / 2$. We count equivocations as votes for everything
so that observing a vote is monotonic, meaning that if $S \subset T$ then if $S$
has a supermajority for $B$ so does $T$, while being able to ignore yet more equivocating
votes from an equivocating voter.
For our finality gadget (GRANDPA) we use the ghost [13] eventual consensus algorithm
as $F$. The 2/3-GHOST function $g(S)$ takes a set $S$ of votes and returns the
block $B$ with highest block number such that $S$ has a supermajority for $B$.
If there is no such block, then it returns ''nil''. Note that, if $S$ is safe,
then we can compute $g(S)$ by starting at the genesis block and iteratively looking
for a child of our current block with a supermajority, which must be unique if
it exists. Thus we have:
Lemma 2.5. Let $T$ be a safe set of votes. Then'
- 'paper-title: Zexe: Enabling Decentralized Private Computation
In sum, proofs of predicates'' satisfiability are produced via a SNARK over $E_{\text
{BLS }}$, and proofs for the NP relation $\mathcal{R}_{\mathrm{e}}$ are produced
via a zkSNARK over $E_{\mathrm{CP}}$. The matching fields between the two curves
ensure that the former proofs can be efficiently verified.
Problem 3: Cocks-Pinch curves are costly. While the curve $E_{\mathrm{CP}}$ was
chosen to facilitate efficient checking of proofs over $E_{\mathrm{BLS}}$, the
curve $E_{\mathrm{CP}}$ is at least $2 \times$ more expensive (in time and space)
than $E_{\mathrm{BLS}}$ simply because $E_{\mathrm{CP}}$ ''s base field has about
twice as many bits as $E_{\mathrm{BLS}}$ ''s base field. Checks in the NP relation
$\mathcal{R}_{\mathrm{e}}$\\
that are not directly related to proof checking are now unnecessarily carried
over a less efficient curve.\\
Solution 3: split relations across two curves. We split $\mathcal{R}_{\mathrm{e}}$
into two NP relations $\mathcal{R}_{\mathrm{BLS}}$ and $\mathcal{R}_{\mathrm{CP}}$
(see Fig. 14), with the latter containing just the proof check and the former
containing all other checks. We can then use a zkSNARK over the curve $E_{\text
{BLS }}$ (an efficient curve) to produce proofs for $\mathcal{R}_{\mathrm{BLS}}$,
and a zkSNARK over $E_{\mathrm{CP}}$ (the less efficient curve) to produce proofs
for $\mathcal{R}_{\mathrm{CP}}$. This approach significantly reduces the running
time of DPC.Execute (producing proofs for the checks in $\mathcal{R}_{\mathrm{BLS}}$
is more efficient over $E_{\mathrm{BLS}}$ than over $E_{\mathrm{CP}}$ ), at the
expense of a modest increase in transaction size (a transaction now includes a
zkSNARK proof over $E_{\mathrm{BLS}}$ in addition to a proof over $E_{\mathrm{CP}}$
). An important technicality that must be addressed is that the foregoing split
relies on certain secret information to be shared across the NP relations, namely,
the identities of relevant predicates and the local data. We can store this information
in suitable commitments that are part of the NP instances for the two NP relations
(doing this efficiently requires some care as we discuss below).'
- 'paper-title: Ouroboros Praos: An adaptively-secure, semi-synchronous proof-of-stake
blockchain
where $\alpha_{\mathcal{H}}$ denotes the total relative stake of the honest parties.
Note that this bound applies to all static adversaries $\mathcal{A}$ that corrupt
no more than a $1-\alpha_{\mathcal{H}}$ fraction of all stake. With this in mind,
we define the dominant distribution as follows.\\
Definition 13 (The dominant distribution $\mathcal{D}_{\alpha}^{f}$ ). For two
parameters $f$ and $\alpha$, define $\mathcal{D}_{\alpha}^{f}$ to be the distribution
on strings $w \in\{0,1, \perp\}^{R}$ that independently assigns each $w_{i}$ so
that
\begin{align*}
p_{\perp} \triangleq \operatorname{Pr}\left[w_{i}\right. & =\perp]=1-f, \\
p_{0} \triangleq \operatorname{Pr}\left[w_{i}\right. & =0]=\phi(\alpha) \cdot(1-f),
\quad \text { and } \tag{9}\\
p_{1} \triangleq \operatorname{Pr}\left[w_{i}\right. & =1]=1-p_{\perp}-p_{0} .
\end{align*}
The distribution $\mathcal{D}_{\alpha}^{f}$ "dominates" $\mathcal{D}_{\mathcal{Z},
\mathcal{A}}^{f}$ for any static adversary $\mathcal{A}$ that corrupts no more
than a relative $1-\alpha$ share of the total stake, in the sense that nonempty
slots are more likely to be tainted under $\mathcal{D}_{\alpha}^{f}$ than they
are under $\mathcal{D}_{\mathcal{Z}, \mathcal{A}}^{f}$.
To make this relationship precise, we introduce the partial order $\preceq$ on
the set $\{\perp, 0,1\}$ so that $x \preceq y$ if and only if $x=y$ or $y=1$.
We extend this partial order to $\{\perp, 0,1\}^{R}$ by declaring $x_{1} \ldots
x_{R} \preceq y_{1} \ldots y_{R}$ if and only if $x_{i} \preceq y_{i}$ for each
$i$. Intuitively, the relationship $x \prec y$ asserts that $y$ is "more adversarial
than" $x$; concretely, any legal fork for $x$ is also a legal fork for $y$. Finally,
we define a notion of stochastic dominance for distributions on characteristic
strings, and $\alpha$-dominated adversaries.
Definition 14 (Stochastic dominance). We say that a subset $E \subseteq\{\perp,
0,1\}^{R}$ is monotone if $x \in E$ and $x \preceq y$ implies that $y \in E$.
Let $\mathcal{D}$ and $\mathcal{D}^{\prime}$ be two distributions on the set of
characteristic strings $\{\perp, 0,1\}^{R}$. Then we say that $\mathcal{D}^{\prime}$
dominates $\mathcal{D}$, written $\mathcal{D} \preceq \mathcal{D}^{\prime}$, if
$\operatorname{Pr}{ }_{\mathcal{D}}[E] \leq \operatorname{Pr}_{\mathcal{D}^{\prime}}[E]$
for every monotone set $E$. An adversary $\mathcal{A}$ is called $\alpha$-dominated
if the distribution $\mathcal{D}_{\mathcal{Z}, \mathcal{A}}^{f}$ that it induces
on the set of characteristic strings satisfies $\mathcal{D}_{\mathcal{Z}, \mathcal{A}}^{f}
\preceq \mathcal{D}_{\alpha}^{f}$.
As noted above, this notion of stochastic dominance is consistent with the chain-theoretic
definitions of interest, in the sense that failures of the abstract chain properties
form monotone events. We record this in the lemma below.'
- source_sentence: What does the paper conclude about the relationship between latency
and security in the Nakamoto Consensus protocol?
sentences:
- 'paper-title: Close Latency-Security Trade-off for the Nakamoto Consensus
Evidently, if the infinite sums in (2) and (10) are replaced by partial sums for
numerical evaluation, the resulting (tighter) security level remains unachievable.
\subsection*{3.1 Remarks}
Theorems 3.5 and 3.6 assume the delay $\Delta>0$. The bounds therein still apply
if we set $\Delta=0$, but are slightly looser than the bounds in Theorems 3.3
and 3.4 for the zero-delay case.
It is important to include the time of interest $s$ in Definitions 3.1 and 3.2.
The "bad events" for security breach depend on $s$ as well as the latency $t$.
These well-defined events are concerned with block mining times, not how blocks
form blockchains. ${ }^{3}$
We note that a number of previous analyses on the Nakamoto consensus assume a
finite lifespan of the protocol [1, 10], that is, a maximum round number is defined,
at which round the protocol terminates. The probability of consistency depends
on the maximum round number. In contrast, this paper does not assume a finite
lifespan. Theorem 3.5 states that, barring a small probability event, confirmed
blocks remain permanently in all miners'' longest blockchains into the arbitrary
future.
Even though we provide the same security guarantee for every blockchain after
the confirmation latency $t$, no one can simultaneously guarantee the same for
all blocks that will ever be confirmed.
\footnotetext{${ }^{3}$ To be rigorous, we do not make claims such as "the blockchain/protocol/system
satisfies consistency or liveness properties with probability ..." because those
properties themselves are not events in the probability space defined here.
}
\includegraphics[max width=\textwidth, center]{2025_01_02_447c9a776bd74bcc1f99g-04}
Figure 1: Bitcoin''s latency-security trade-off with $\alpha+\beta=$ $1 / 600$
blocks per second and $\Delta=10$ seconds.
This is a simple consequence of Murphy''s Law: If an adversary keeps trying new
episodes of attacks, with probability 1 a bad event will eventually occur to revert
some confirmed honest blocks.
For technical convenience, we regard a block in a miner''s longest blockchain
to be confirmed after a certain amount of time elapses since the block is mined
or enters the miner''s view. Nakamoto [22] originally proposed confirming a block
after it is sufficiently deep in an honest miner''s longest blockchain. We believe
both confirmation rules are easy to use in practice. And the two confirmation
rules imply each other in probability (see Appendix A for further discussion).
\subsection*{3.2 Numerical Examples}
The latency-security trade-off under several different sets of parameters is plotted
in Figure 1. The mining rate is set to Bitcoin''s one block per 600 seconds, or
$\alpha+\beta=1 / 600$ blocks/second. The propagation delay bound is assumed to
be $\Delta=10$ seconds. The latency upper and lower bounds are computed using
Theorems 3.5 and 3.6, respectively. In Figure 1, all bounds appear to be exponential
for all but very small latency and high error probabilities. This implies the
exponential bound (7) is a good approximation of (5) in Theorem 3.5 for the typical
range of parameters of interest here.
It is instructive to examine concrete data points in Figure 1: If the adversarial
share of the total network mining rate is $10 \%$ $(\alpha: \beta=9: 1)$, then
a confirmation time of four hours is sufficient to achieve $10^{-3}$ security
level, and a ten-hour confirmation achieves $10^{-9}$ security level. These results
are about two hours away from the corresponding lower bounds. Also, for every
additional hour of latency, the security improves by a factor of approximately
20 . If the adversarial share of the mining rate increases to $25 \%(\alpha: \beta=3:
1)$, then 10 hours 40 minutes and 28 hours 45 minutes of confirmation times achieve
$10^{-3}$ and $10^{-9}$ security levels, respectively, and the gap between the
upper and lower bounds is between five and seven hours. In general, the gap is
proportionally insignificant at high security levels but can be otherwise at low
security levels. For given mining rates, the gaps are similar at different security
levels. This indicates the lower bound (10) is also approximately exponential
with a slightly steeper exponent than that of the upper bound.'
- "paper-title: Ledger Combiners for Fast Settlement\n\n$$\n\\begin{aligned}\n\\\
delta\\left(\\operatorname{PoW}_{p}^{m}(x), \\mathrm{IPoW}_{p / m}^{m}(x)\\right)\
\ & =\\frac{1}{2} \\sum_{s \\in\\{0,1\\}^{m}}\\left|\\operatorname{Pr}\\left[\\\
operatorname{PoW}_{p}^{m}(x)=s\\right]-\\operatorname{Pr}\\left[\\operatorname{IPoW}_{p\
\ / m}^{m}(x)=s\\right]\\right| \\\\\n& =\\sum_{\\substack{s \\in\\{0,1)^{m} \\\
\\\n\\mathrm{hw}(s)=1}}\\left(\\operatorname{Pr}\\left[\\operatorname{PoW}_{p}^{m}(x)=s\\\
right]-\\operatorname{Pr}\\left[\\operatorname{IPoW}_{p / m}^{m}(x)=s\\right]\\\
right) \\\\\n& \\leq m \\cdot\\left[\\frac{p}{m}-\\frac{p}{m}\\left(1-\\frac{p}{m}\\\
right)^{m-1}\\right] \\leq p[1-(1-p)]=p^{2}\n\\end{aligned}\n$$\n\nas desired,\
\ where the last inequality follows by Bernoulli inequality.\n\nThe above lemma\
\ already justifies the use of $\\mathrm{PoW}_{p}^{m}$ for achieving subindependence\
\ in practical scenarios. To observe this, note that the use of $\\mathrm{IPoW}_{p\
\ / m}^{m}$ would lead to full independence of the individual PoW lotteries, and\
\ by Lemma 7 the real execution with $\\mathrm{PoW}_{p}^{m}$ will only differ\
\ from this ideal behavior with probability at most $Q \\cdot p^{2}$, where $Q$\
\ is the total number of PoW-queries. With current values of $p \\approx 10^{-22}$\
\ in e.g., Bitcoin ${ }^{2}$, and the block creation time adjusting to 10 minutes,\
\ this difference would manifest on expectation in about $10^{18}$ years. Note\
\ that any future increase of the total mining difficulty while maintaining the\
\ block creation time would only increase this period.\n\nNonetheless, in Appendix\
\ F we give a more detailed analysis of $\\mathrm{PoW}_{p}^{m}$ that shows that,\
\ loosely speaking, $m$ parallel executions of Bitcoin using PoW ${ }_{p}^{m}$\
\ as their shared PoW oracle achieve $\\varepsilon$-subindependence for $\\varepsilon$\
\ negligible in the security parameter.\n\n\\subsection*{4.2 Realizing Rank via\
\ Timestamped Blockchains}\nAn important consideration when deploying our virtual\
\ ledger construction over existing blockchains is how to realize the notion of\
\ rank. We note that typical Nakamoto-style PoS blockchains (e.g., the Ouroboros\
\ family, Snow White) assume a common notion of time among the participants and\
\ explicitly label blocks with slot numbers with a direct correspondence to absolute\
\ time. These slot numbers (or, preferably, a notion of common time associated\
\ with each slot number) directly afford a notion of rank that provides the desired\
\ persistence and liveness guarantees. To formalize this property, we introduce\
\ the notion of a timestamped blockchain.\n\nDefinition 11. A timestamped blockchain\
\ is one satisfying the following conventions:\n\n\\begin{itemize}\n \\item Block\
\ timestamps. Every block contains a declared timestamp.\n \\item Monotonicity.\
\ In order for a block to be considered valid, its timestamp can be no less than\
\ the timestamps of all prior blocks in the blockchain. (Thus valid blockchains\
\ consist of blocks in monotonically increasing order.)\n\\end{itemize}\n\nInformally,\
\ we say that an algorithm is a timestamped blockchain algorithm if it calls for\
\ participants to broadcast timestamped blockchains and to \"respect timestamps.\"\
\ More specifically, the algorithm satisfies the following:\n\n\\begin{itemize}\n\
\ \\item Faithful honest timestamping. Honest participants always post blocks\
\ with timestamps determined by their local clocks.\n \\item Ignore future blocks.\
\ Honest participants ignore blocks that contain a timestamp which is greater\
\ than their local time by more than a fixed constant. (These blocks might be\
\ considered later when the local clock of the participant \"catches up\" with\
\ the timestamp.)\n\\end{itemize}"
- "paper-title: A Scalable Proof-of-Stake Blockchain in the Open Setting * \\\\\
\ (or, How to Mimic Nakamoto's Design via Proof-of-Stake)\n\nLet $\\ell$ be the\
\ length of core-chain $\\mathcal{C}$. In our design, only the elected PoS-players\
\ are allowed to generate new block-cores (to extend the core-chain). Now, each\
\ registered PoS-player P will work on the right \"context\" which consists of\
\ the latest block-core in the longest corechain and the current time; formally\
\ context $:=\\left\\langle h^{\\text {prev }}\\right.$, round $\\rangle$ where\
\ $\\mathcal{C}[\\ell]$ is the latest blockcore in the longest core-chain $\\\
mathcal{C}$, and $h^{\\text {prev }}$ is the identity returned by the functionality\
\ $\\mathcal{F}_{\\text {rCERT }}$ for $\\mathcal{C}[\\ell]$, and round denotes\
\ the current time. The PoS-player P may query $\\mathcal{F}_{\\text {rCERT }}$\
\ by command (Elect, P , context, $\\mathcal{C}$ ) to see if he is selected to\
\ extend $\\mathcal{C}$. If the PoS-player P is selected (with certain probability\
\ $p$ ), he would receive a message (Elected, $\\mathrm{P}, h, \\sigma, \\mathrm{~b}$\
\ ) from $\\mathcal{F}_{\\text {rCERT }}$ such that $\\mathrm{b}=1$. Once receiving\
\ the signature $\\sigma$ from the functionality, the PoS-player P defines a new\
\ block-core $B:=\\left\\langle\\left\\langle h^{\\text {prev }}, h\\right.\\\
right.$, round $\\left.\\rangle, \\mathrm{P}, \\sigma\\right\\rangle$, updates\
\ his local core-chain $\\mathcal{C}$ and then broadcasts the local core-chain\
\ to the network. Please refer to Figure 3 for more details of our core-chain\
\ protocol.\n\nNote that here PoS-players have access to the functionality $\\\
mathcal{F}_{\\text {rCERT }}$. The players need to register to the functionality\
\ $\\mathcal{F}_{\\text {rCERT }}$ before querying the functionality.\n\nThe best\
\ core-chain strategy. Our proof-of-stake core-chain protocol $\\Pi^{\\text {core\
\ }}$ uses the subroutine BestCore to single out the best valid core-chain from\
\ a set of core-chains. Now we describe the rules of selecting the best core-chain.\
\ Roughly speaking, a core-chain is the best one if it is the current longest\
\ valid core-chain. The BestCore subroutine takes as input, a core-chain set $\\\
mathbb{C}^{\\prime}$ and the current time information round'. Intuitively, the\
\ subroutine validates all $\\mathcal{C} \\in \\mathbb{C}^{\\prime}$, then finds\
\ the valid longest core-chain.\n\nIn more detail, BestCore proceeds as follows.\
\ On input the current set of core-chains $\\mathbb{C}^{\\prime}$ and the current\
\ time information round', and for each core-chain $\\mathcal{C}$, the subroutine\
\ then evaluates every block-core of the core-chain $\\mathcal{C}$ sequentially.\
\ Let $\\ell$ be the length of $\\mathcal{C}$. Starting from the head of $\\mathcal{C}$,\
\ for every block-core $\\mathcal{C}[i]$, for all $i \\in[\\ell]$, in the core-chain\
\ $\\mathcal{C}$, the BestCore subroutine (1) ensures that $\\mathcal{C}[i]$ is\
\ linked to the previous block-core $\\mathcal{C}[i-1]$ correctly, and (2) tests\
\ if the\n\n\\section*{Protocol $\\Pi^{\\text {core }}$}\nInitially, a set $\\\
mathcal{P}_{0}$ of players are registered to the functionality $\\mathcal{F}_{\\\
text {rCERT }}$, where $\\mathcal{P}_{0} \\subseteq \\mathcal{P}$. Initially,\
\ for each $\\mathrm{P} \\in \\mathcal{P}$, set $\\mathcal{C}:=\\emptyset$, and\
\ state $:=\\emptyset$.\n\nUpon receiving message (Input-Stake, P ) from the environment\
\ $z$ at round round, the PoS-player $\\mathrm{P} \\in$ $\\mathcal{P}$, with local\
\ state state, proceeds as follows.\n\n\\begin{enumerate}\n \\item Select the\
\ best local PoS core-chain:\n\\end{enumerate}"
- source_sentence: What is the difference between absolute settlement and relative
settlement for transactions in a ledger?
sentences:
- 'paper-title: Ledger Combiners for Fast Settlement
Since the above requirements are formulated independently for each $t$, it is
well-defined to treat $\mathrm{C}[\cdot]$ as operating on ledgers rather than
dynamic ledgers; we sometimes overload the notation in this sense.
Looking ahead, our amplification combiner will consider $\mathrm{t}_{\mathrm{C}}\left(\mathbf{L}_{1}^{(t)},
\ldots, \mathbf{L}_{m}^{(t)}\right)=\bigcup_{i} \mathbf{L}_{i}^{(t)}$ along with
two related definitions of $\mathrm{a}_{\mathrm{C}}$ :
$$
\mathrm{a}_{\mathrm{C}}\left(A_{1}^{(t)}, \ldots, A_{m}^{(t)}\right)=\bigcup_{i}
A_{i}^{(t)} \quad \text { and } \quad \mathrm{a}_{\mathrm{C}}\left(A_{1}^{(t)},
\ldots, A_{m}^{(t)}\right)=\bigcap_{i} A_{i}^{(t)}
$$
see Section 3. The robust combiner will adopt a more sophisticated notion of $t_{c}$;
see Section 5 . In each of these cases, the important structural properties of
the construction are captured by the rank function $r_{C}$.
\subsection*{2.3 Transaction Validity and Settlement}
In the discussion below, we assume a general notion of transaction validity that
can be decided inductively: given a ledger $\mathbf{L}$, the validity of a transaction
$t x \in \mathbf{L}$ is determined by the transactions in the state $\mathbf{L}\lceil\operatorname{tx}\rceil$
of $\mathbf{L}$ up to tx and their ordering. Intuitively, only valid transactions
are then accounted for when interpreting the state of the ledger on the application
level. The canonical example of such a validity predicate in the case of so-called
UTXO transactions is formalized for completeness in Appendix B. Note that protocols
such as Bitcoin allow only valid transactions to enter the ledger; as the Bitcoin
ledger is represented by a simple chain it is possible to evaluate the validity
predicate upon block creation for each included transaction. This may not be the
case for more general ledgers, such as the result of applying one of our combiners
or various DAG-based constructions.
While we focus our analysis on persistence and liveness as given in Definition
3, our broader goal is to study settlement. Intuitively, settlement is the delay
necessary to ensure that a transaction included in some $A^{(t)}$ enters the dynamic
ledger and, furthermore, that its validity stabilizes for all future times.
Definition 5 (Absolute settlement). For a dynamic ledger $\mathbf{D} \stackrel{\text
{ def }}{=} \mathbf{L}^{(0)}, \mathbf{L}^{(1)}, \ldots$ we say that a transaction
$t x \in$ $A^{(\tau)} \cap \mathbf{L}^{(t)}($ for $\tau \leq t)$ is (absolutely)
settled at time $t$ iffor all $\ell \geq t$ we have: (i) $\mathbf{L}^{(t)}\lceil\mathrm{tx}\rceil
\subseteq \mathbf{L}^{(\ell)}$, (ii) the linear orders $<_{\mathbf{L}^{(t)}}$
and $<_{\mathbf{L}^{(t)}}$ agree on $\mathbf{L}^{(t)}\lceil\mathrm{tx}\rceil$,
and (iii) for any $\mathrm{tx}^{\prime} \in \mathbf{L}^{(e)}$ such that $\mathrm{tx}^{\prime}{<_{\mathbf{L}}(t)}
\mathrm{tx}$ we have $\mathrm{tx}^{\prime} \in \mathbf{L}^{(t)}\lceil\mathrm{tx}\rceil$.
Note that for any absolutely settled transaction, its validity is determined and
it is guaranteed to remain unchanged in the future.
It will be useful to also consider a weaker notion of relative settlement of a
transaction: Intuitively, tx is relatively settled at time $t$ if we have the
guarantee that no (conflicting) transaction $\mathrm{tx}^{\prime}$ that is not
part of the ledger at time $t$ can possibly eventually precede $t x$ in the ledger
ordering.'
- "paper-title: Casper the Friendly Finality Gadget\n\n\\documentclass[10pt]{article}\n\
\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{amsmath}\n\
\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage[version=4]{mhchem}\n\
\\usepackage{stmaryrd}\n\\usepackage{graphicx}\n\\usepackage[export]{adjustbox}\n\
\\graphicspath{ {./images/} }\n\\usepackage{hyperref}\n\\hypersetup{colorlinks=true,\
\ linkcolor=blue, filecolor=magenta, urlcolor=cyan,}\n\\urlstyle{same}\n\n\\title{Casper\
\ the Friendly Finality Gadget }\n\n\\author{Vitalik Buterin and Virgil Griffith\\\
\\\nEthereum Foundation}\n\\date{}\n\n\n%New command to display footnote whose\
\ markers will always be hidden\n\\let\\svthefootnote\\thefootnote\n\\newcommand\\\
blfootnotetext[1]{%\n \\let\\thefootnote\\relax\\footnote{#1}%\n \\addtocounter{footnote}{-1}%\n\
\ \\let\\thefootnote\\svthefootnote%\n}\n\n%Overriding the \\footnotetext command\
\ to hide the marker if its value is `0`\n\\let\\svfootnotetext\\footnotetext\n\
\\renewcommand\\footnotetext[2][?]{%\n \\if\\relax#1\\relax%\n \\ifnum\\value{footnote}=0\\\
blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else%\n \\if?#1\\ifnum\\\
value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else\\\
svfootnotetext[#1]{#2}\\fi%\n \\fi\n}\n\n\\begin{document}\n\\maketitle\n\n\n\
\\begin{abstract}\nWe introduce Casper, a proof of stake-based finality system\
\ which overlays an existing proof of work blockchain. Casper is a partial consensus\
\ mechanism combining proof of stake algorithm research and Byzantine fault tolerant\
\ consensus theory. We introduce our system, prove some desirable features, and\
\ show defenses against long range revisions and catastrophic crashes. The Casper\
\ overlay provides almost any proof of work chain with additional protections\
\ against block reversions.\n\\end{abstract}\n\n\\section*{1. Introduction}\n\
Over the past few years there has been considerable research into \"proof of stake\"\
\ (PoS) based blockchain consensus algorithms. In a PoS system, a blockchain appends\
\ and agrees on new blocks through a process where anyone who holds coins inside\
\ of the system can participate, and the influence an agent has is proportional\
\ to the number of coins (or \"stake\") it holds. This is a vastly more efficient\
\ alternative to proof of work (PoW) \"mining\" and enables blockchains to operate\
\ without mining's high hardware and electricity costs.\\\\[0pt]\nThere are two\
\ major schools of thought in PoS design. The first, chain-based proof of stake[1,\
\ 2], mimics proof of work mechanics and features a chain of blocks and simulates\
\ mining by pseudorandomly assigning the right to create new blocks to stakeholders.\
\ This includes Peercoin[3], Blackcoin[4], and Iddo Bentov's work[5].\\\\[0pt]\n\
The other school, Byzantine fault tolerant (BFT) based proof of stake, is based\
\ on a thirty-year-old body of research into BFT consensus algorithms such as\
\ PBFT[6]. BFT algorithms typically have proven mathematical properties; for example,\
\ one can usually mathematically prove that as long as $>\\frac{2}{3}$ of protocol\
\ participants are following the protocol honestly, then, regardless of network\
\ latency, the algorithm cannot finalize conflicting blocks. Repurposing BFT algorithms\
\ for proof of stake was first introduced by Tendermint[7], and has modern inspirations\
\ such as [8]. Casper follows this BFT tradition, though with some modifications.\n\
\n\\subsection*{1.1. Our Work}\nCasper the Friendly Finality Gadget is an overlay\
\ atop a proposal mechanism-a mechanism which proposes blocks ${ }^{1}$. Casper\
\ is responsible for finalizing these blocks, essentially selecting a unique chain\
\ which represents the canonical transactions of the ledger. Casper provides safety,\
\ but liveness depends on the chosen proposal mechanism. That is, if attackers\
\ wholly control the proposal mechanism, Casper protects against finalizing two\
\ conflicting checkpoints, but the attackers could prevent Casper from finalizing\
\ any future checkpoints.\\\\\nCasper introduces several new features that BFT\
\ algorithms do not necessarily support:"
- 'paper-title: Bitcoin and Cryptocurrency Technologies
Interestingly, these concerns have an analogy in the realm of voting. It''s illegal
in the United States and many other nations for individuals to sell their vote.
Arguably participating in a pool controlled by someone else is akin to selling
your vote in the Bitcoin consensus protocol.
Technical requirements for pools. Recall that mining pools appear to be an emergent
phenomenon. There''s no evidence that Satoshi was thinking of mining pools at
the time of Bitcoin''s original design. It wasn''t apparent for a few years that
efficient pools could be run between many individuals who don''t know or trust
each other.
As we saw in Chapter 5, mining pools typically work by designating a pool operator
with a well-known public key. Each of the participating miners mines as usual
but sends in shares to the pool operator. These shares are "near misses" or "partial
solutions" which would be valid solutions at a lower difficulty level. This shows
the pool operator how much work the miner is performing. Whenever one of the pool
participants finds a valid block, the pool operator then distributes the rewards
amongst the pool participants based on the number of shares they have submitted.
As we discussed in Chapter 5, there are many formulas for dividing the revenue
up, but all mining pools follow this basic structure.
The existence of pools thus relies on at least two technical properties of Bitcoin.
The first is that it''s easy for a miner to prove (probabilistically) how much
work they are doing by submitting shares. By choosing a low enough threshold for
shares, miners can easily prove how much work they are performing with arbitrary
precision regardless of the actual difficulty of finding an valid block. This
facet of mining puzzles appears difficult to change, given that we need a puzzle
that can be created with arbitrary difficulty.
Second, pool members can easily prove to the pool operator that they''re following
the rules and working to find valid blocks which would reward the pool as a whole.
This works because the pool''s public key is committed to in the coinbase transaction
included in the block''s Merkle tree of transactions. Once a miner finds a block
or even a share, they can''t change which public key is the recipient of the newly
minted coins.
Block discarding attacks. There is one weakness in this scheme for implementing
mining pools: there is nothing to to enforce that participating miners actually
submit valid blocks to the pool manager in the event that they find them. Suppose
that there''s a pool member that''s upset with a large mining pool. They can participate
in the pool by mining and submitting shares just like normal, but in the event
that they actually find a valid block that would reward the pool they simply discard
it and don''t tell the pool operator about it.
This attack reduces the pool''s overall mining power as none of the attacker''s
work is contributing towards finding valid blocks. However the attacker will still
be rewarded as they appear to be submitting valid shares and simply getting unlucky
to not find any valid blocks. If the mining pool is designed to be revenue-neutral
(that is, all mining rewards are redistributed back to participants) then this
attack can cause the pool to run at a loss.
This attack is sometimes called a vigilante or sabotage attack and is considered
a form of vandalism because the attack appears to be costly for both the attacker
and the pool. The attacker loses money because every block they discard would
have led to some proportion of the block rewards being returned to them. Of course,
the attacker still gets rewards for other puzzle solutions that are found.
It appears that a rational attacker wouldn''t employ this strategy, since they
would lose money without gaining anything tangible. It turns out (quite surprisingly)
that there are cases where this strategy can be profitable, as discussed in the
box below. But in any case, we want to design an entirely new mining puzzle formulation
that ensures this strategy is always profitable.
Sidebar: block discarding attacks between pools. People assumed for years that
it can''t be profitable for a participant to discard valid blocks found on behalf
of the pool. It turns out this strategy can be profitable if one mining pool uses
it to attack another. This was proposed apocryphally many times and first thoroughly
analyzed in a paper by Ittay Eyal in 2015.
Let''s consider a simple case: suppose two mining pools, $A$ and $B$, each have
$50 \%$ of the total mining capacity. Now suppose B uses half of its mining power
( $25 \%$ of the total capacity) to mine as a member in pool A, but discards all
blocks found. We can show, in a simplified model, that B will now earns $5 / 9$
of the total rewards, greater than the $50 \%$ it would earn by mining normally.
In this simple case, dedicating half of its mining power to attacking can be shown
to be the optimal strategy for pool B.'
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.5
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7857142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8571428571428571
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8571428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26190476190476186
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17142857142857146
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08571428571428573
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7857142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8571428571428571
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8571428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7032219246239031
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6511904761904762
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6553083095766022
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5714285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7857142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8214285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8571428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5714285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26190476190476186
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1642857142857143
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08571428571428573
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5714285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7857142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8214285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8571428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7276726753008987
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6848639455782314
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6886316064887493
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.5714285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7857142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8214285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8571428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5714285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26190476190476186
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1642857142857143
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08571428571428573
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5714285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7857142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8214285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8571428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7284895986499949
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6857142857142858
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6893267651888342
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.5
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.75
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8214285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8571428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.24999999999999997
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1642857142857143
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08571428571428573
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.75
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8214285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8571428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6935204558400861
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6395833333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6425405844155845
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.42857142857142855
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6785714285714286
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.75
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8214285714285714
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.42857142857142855
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22619047619047614
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15000000000000005
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08214285714285716
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.42857142857142855
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6785714285714286
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.75
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8214285714285714
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.631592589549331
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5696428571428572
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5757306413556414
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mahsaBa76/bge-base-custom-matryoshka")
# Run inference
sentences = [
'What is the difference between absolute settlement and relative settlement for transactions in a ledger?',
'paper-title: Ledger Combiners for Fast Settlement\n\nSince the above requirements are formulated independently for each $t$, it is well-defined to treat $\\mathrm{C}[\\cdot]$ as operating on ledgers rather than dynamic ledgers; we sometimes overload the notation in this sense.\n\nLooking ahead, our amplification combiner will consider $\\mathrm{t}_{\\mathrm{C}}\\left(\\mathbf{L}_{1}^{(t)}, \\ldots, \\mathbf{L}_{m}^{(t)}\\right)=\\bigcup_{i} \\mathbf{L}_{i}^{(t)}$ along with two related definitions of $\\mathrm{a}_{\\mathrm{C}}$ :\n\n$$\n\\mathrm{a}_{\\mathrm{C}}\\left(A_{1}^{(t)}, \\ldots, A_{m}^{(t)}\\right)=\\bigcup_{i} A_{i}^{(t)} \\quad \\text { and } \\quad \\mathrm{a}_{\\mathrm{C}}\\left(A_{1}^{(t)}, \\ldots, A_{m}^{(t)}\\right)=\\bigcap_{i} A_{i}^{(t)}\n$$\n\nsee Section 3. The robust combiner will adopt a more sophisticated notion of $t_{c}$; see Section 5 . In each of these cases, the important structural properties of the construction are captured by the rank function $r_{C}$.\n\n\\subsection*{2.3 Transaction Validity and Settlement}\nIn the discussion below, we assume a general notion of transaction validity that can be decided inductively: given a ledger $\\mathbf{L}$, the validity of a transaction $t x \\in \\mathbf{L}$ is determined by the transactions in the state $\\mathbf{L}\\lceil\\operatorname{tx}\\rceil$ of $\\mathbf{L}$ up to tx and their ordering. Intuitively, only valid transactions are then accounted for when interpreting the state of the ledger on the application level. The canonical example of such a validity predicate in the case of so-called UTXO transactions is formalized for completeness in Appendix B. Note that protocols such as Bitcoin allow only valid transactions to enter the ledger; as the Bitcoin ledger is represented by a simple chain it is possible to evaluate the validity predicate upon block creation for each included transaction. This may not be the case for more general ledgers, such as the result of applying one of our combiners or various DAG-based constructions.\n\nWhile we focus our analysis on persistence and liveness as given in Definition 3, our broader goal is to study settlement. Intuitively, settlement is the delay necessary to ensure that a transaction included in some $A^{(t)}$ enters the dynamic ledger and, furthermore, that its validity stabilizes for all future times.\n\nDefinition 5 (Absolute settlement). For a dynamic ledger $\\mathbf{D} \\stackrel{\\text { def }}{=} \\mathbf{L}^{(0)}, \\mathbf{L}^{(1)}, \\ldots$ we say that a transaction $t x \\in$ $A^{(\\tau)} \\cap \\mathbf{L}^{(t)}($ for $\\tau \\leq t)$ is (absolutely) settled at time $t$ iffor all $\\ell \\geq t$ we have: (i) $\\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil \\subseteq \\mathbf{L}^{(\\ell)}$, (ii) the linear orders $<_{\\mathbf{L}^{(t)}}$ and $<_{\\mathbf{L}^{(t)}}$ agree on $\\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil$, and (iii) for any $\\mathrm{tx}^{\\prime} \\in \\mathbf{L}^{(e)}$ such that $\\mathrm{tx}^{\\prime}{<_{\\mathbf{L}}(t)} \\mathrm{tx}$ we have $\\mathrm{tx}^{\\prime} \\in \\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil$.\n\nNote that for any absolutely settled transaction, its validity is determined and it is guaranteed to remain unchanged in the future.\n\nIt will be useful to also consider a weaker notion of relative settlement of a transaction: Intuitively, tx is relatively settled at time $t$ if we have the guarantee that no (conflicting) transaction $\\mathrm{tx}^{\\prime}$ that is not part of the ledger at time $t$ can possibly eventually precede $t x$ in the ledger ordering.',
'paper-title: Casper the Friendly Finality Gadget\n\n\\documentclass[10pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage[version=4]{mhchem}\n\\usepackage{stmaryrd}\n\\usepackage{graphicx}\n\\usepackage[export]{adjustbox}\n\\graphicspath{ {./images/} }\n\\usepackage{hyperref}\n\\hypersetup{colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan,}\n\\urlstyle{same}\n\n\\title{Casper the Friendly Finality Gadget }\n\n\\author{Vitalik Buterin and Virgil Griffith\\\\\nEthereum Foundation}\n\\date{}\n\n\n%New command to display footnote whose markers will always be hidden\n\\let\\svthefootnote\\thefootnote\n\\newcommand\\blfootnotetext[1]{%\n \\let\\thefootnote\\relax\\footnote{#1}%\n \\addtocounter{footnote}{-1}%\n \\let\\thefootnote\\svthefootnote%\n}\n\n%Overriding the \\footnotetext command to hide the marker if its value is `0`\n\\let\\svfootnotetext\\footnotetext\n\\renewcommand\\footnotetext[2][?]{%\n \\if\\relax#1\\relax%\n \\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else%\n \\if?#1\\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else\\svfootnotetext[#1]{#2}\\fi%\n \\fi\n}\n\n\\begin{document}\n\\maketitle\n\n\n\\begin{abstract}\nWe introduce Casper, a proof of stake-based finality system which overlays an existing proof of work blockchain. Casper is a partial consensus mechanism combining proof of stake algorithm research and Byzantine fault tolerant consensus theory. We introduce our system, prove some desirable features, and show defenses against long range revisions and catastrophic crashes. The Casper overlay provides almost any proof of work chain with additional protections against block reversions.\n\\end{abstract}\n\n\\section*{1. Introduction}\nOver the past few years there has been considerable research into "proof of stake" (PoS) based blockchain consensus algorithms. In a PoS system, a blockchain appends and agrees on new blocks through a process where anyone who holds coins inside of the system can participate, and the influence an agent has is proportional to the number of coins (or "stake") it holds. This is a vastly more efficient alternative to proof of work (PoW) "mining" and enables blockchains to operate without mining\'s high hardware and electricity costs.\\\\[0pt]\nThere are two major schools of thought in PoS design. The first, chain-based proof of stake[1, 2], mimics proof of work mechanics and features a chain of blocks and simulates mining by pseudorandomly assigning the right to create new blocks to stakeholders. This includes Peercoin[3], Blackcoin[4], and Iddo Bentov\'s work[5].\\\\[0pt]\nThe other school, Byzantine fault tolerant (BFT) based proof of stake, is based on a thirty-year-old body of research into BFT consensus algorithms such as PBFT[6]. BFT algorithms typically have proven mathematical properties; for example, one can usually mathematically prove that as long as $>\\frac{2}{3}$ of protocol participants are following the protocol honestly, then, regardless of network latency, the algorithm cannot finalize conflicting blocks. Repurposing BFT algorithms for proof of stake was first introduced by Tendermint[7], and has modern inspirations such as [8]. Casper follows this BFT tradition, though with some modifications.\n\n\\subsection*{1.1. Our Work}\nCasper the Friendly Finality Gadget is an overlay atop a proposal mechanism-a mechanism which proposes blocks ${ }^{1}$. Casper is responsible for finalizing these blocks, essentially selecting a unique chain which represents the canonical transactions of the ledger. Casper provides safety, but liveness depends on the chosen proposal mechanism. That is, if attackers wholly control the proposal mechanism, Casper protects against finalizing two conflicting checkpoints, but the attackers could prevent Casper from finalizing any future checkpoints.\\\\\nCasper introduces several new features that BFT algorithms do not necessarily support:',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.5 | 0.5714 | 0.5714 | 0.5 | 0.4286 |
| cosine_accuracy@3 | 0.7857 | 0.7857 | 0.7857 | 0.75 | 0.6786 |
| cosine_accuracy@5 | 0.8571 | 0.8214 | 0.8214 | 0.8214 | 0.75 |
| cosine_accuracy@10 | 0.8571 | 0.8571 | 0.8571 | 0.8571 | 0.8214 |
| cosine_precision@1 | 0.5 | 0.5714 | 0.5714 | 0.5 | 0.4286 |
| cosine_precision@3 | 0.2619 | 0.2619 | 0.2619 | 0.25 | 0.2262 |
| cosine_precision@5 | 0.1714 | 0.1643 | 0.1643 | 0.1643 | 0.15 |
| cosine_precision@10 | 0.0857 | 0.0857 | 0.0857 | 0.0857 | 0.0821 |
| cosine_recall@1 | 0.5 | 0.5714 | 0.5714 | 0.5 | 0.4286 |
| cosine_recall@3 | 0.7857 | 0.7857 | 0.7857 | 0.75 | 0.6786 |
| cosine_recall@5 | 0.8571 | 0.8214 | 0.8214 | 0.8214 | 0.75 |
| cosine_recall@10 | 0.8571 | 0.8571 | 0.8571 | 0.8571 | 0.8214 |
| **cosine_ndcg@10** | **0.7032** | **0.7277** | **0.7285** | **0.6935** | **0.6316** |
| cosine_mrr@10 | 0.6512 | 0.6849 | 0.6857 | 0.6396 | 0.5696 |
| cosine_map@100 | 0.6553 | 0.6886 | 0.6893 | 0.6425 | 0.5757 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 278 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 278 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 26.06 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 512 tokens</li><li>mean: 512.0 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>How does ByzCoin ensure that microblock chains remain consistent even in the presence of keyblock conflicts?</code> | <code>paper-title: Enhancing Bitcoin Security and Performance with Strong Consistency via Collective Signing<br><br>Figure 3: ByzCoin blockchain: Two parallel chains store information about the leaders (keyblocks) and the transactions (microblocks)\\<br>becomes two separate parallel blockchains, as shown in Fig. 3. The main blockchain is the keyblock chain, consisting of all mined blocks. The microblock chain is a secondary blockchain that depends on the primary to identify the era in which every microblock belongs to, i.e., which miners are authoritative to sign it and who is the leader of the era.<br><br>Microblocks. A microblock is a simple block that the current consensus group produces every few seconds to represent newly-committed transactions. Each microblock includes a set of transactions and a collective signature. Each microblock also includes hashes referring to the previous microblock and keyblock: the former to ensure total ordering, and the latter indicating which consensus group window and l...</code> |
| <code>What are the primary ways in which Bitcoin users can be deanonymized, and why is network-layer deanonymization particularly concerning?</code> | <code>paper-title: Bitcoin and Cryptocurrency Technologies<br><br>This is is exactly what the Fistful of Bitcoins researchers (and others since) have done. They bought a variety of things, joined mining pools, used Bitcoin exchanges, wallet services, and gambling sites, and interacted in a variety of other ways with service providers, compromising 344 transactions in all.<br><br>In Figure 6.5, we again show the clusters of Figure 6.4, but this times with the labels attached. While our guesses about Mt. gox and Satoshi Dice were correct, the researchers were able to identify numerous other service providers that would have been hard to identify without transacting with them.\\<br>\includegraphics[max width=\textwidth, center]{2025_01_02_05ab7f20e06e1a41e145g-175}<br><br>Figure 6.5. Labeled clusters. By transacting with various Bitcoin service providers, Meiklejohn et al. were able to attach real world identities to their clusters.<br><br>Identifying individuals. The next question is: can we do the same thing for indivi...</code> |
| <code>What is the main purpose of the ledger indistinguishability and transaction non-malleability properties in the Zerocash protocol?</code> | <code>paper-title: Zerocash: Decentralized Anonymous Payments from Bitcoin<br><br>Ledger indistinguishability is formalized by an experiment L-IND that proceeds as follows. First, a challenger samples a random bit $b$ and initializes two DAP scheme oracles $\mathcal{O}_{0}^{\text {DAP }}$ and $\mathcal{O}_{1}^{\text {DAP }}$, maintaining ledgers $L_{0}$ and $L_{1}$. Throughout, the challenger allows $\mathcal{A}$ to issue queries to $\mathcal{O}_{0}^{\text {DAP }}$ and $\mathcal{O}_{1}^{\text {DAP }}$, thus controlling the behavior of honest parties on $L_{0}$ and $L_{1}$. The challenger provides the adversary with the view of both ledgers, but in randomized order: $L_{\text {Left }}:=L_{b}$ and $L_{\text {Right }}:=L_{1-b}$. The adversary's goal is to distinguish whether the view he sees corresponds to $\left(L_{\text {Left }}, L_{\text {Right }}\right)=\left(L_{0}, L_{1}\right)$, i.e. $b=0$, or to $\left(L_{\text {Left }}, L_{\text {Right }}\right)=\left(L_{1}, L_{0}\right)$, i.e. $b=1$.<br><br>At eac...</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-----:|:----:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0 | 1 | 0.6975 | 0.6930 | 0.6760 | 0.6960 | 0.6098 |
| 2.0 | 2 | 0.7258 | 0.7082 | 0.7062 | 0.6935 | 0.6231 |
| 3.0 | 3 | 0.7079 | 0.7270 | 0.7067 | 0.6935 | 0.6184 |
| 4.0 | 4 | 0.7032 | 0.7277 | 0.7285 | 0.6935 | 0.6316 |
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.5.1+cu118
- Accelerate: 1.2.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mahsaBa76/bge-base-custom-matryoshka")
# Run inference
sentences = [
'What is the difference between absolute settlement and relative settlement for transactions in a ledger?',
'paper-title: Ledger Combiners for Fast Settlement\n\nSince the above requirements are formulated independently for each $t$, it is well-defined to treat $\\mathrm{C}[\\cdot]$ as operating on ledgers rather than dynamic ledgers; we sometimes overload the notation in this sense.\n\nLooking ahead, our amplification combiner will consider $\\mathrm{t}_{\\mathrm{C}}\\left(\\mathbf{L}_{1}^{(t)}, \\ldots, \\mathbf{L}_{m}^{(t)}\\right)=\\bigcup_{i} \\mathbf{L}_{i}^{(t)}$ along with two related definitions of $\\mathrm{a}_{\\mathrm{C}}$ :\n\n$$\n\\mathrm{a}_{\\mathrm{C}}\\left(A_{1}^{(t)}, \\ldots, A_{m}^{(t)}\\right)=\\bigcup_{i} A_{i}^{(t)} \\quad \\text { and } \\quad \\mathrm{a}_{\\mathrm{C}}\\left(A_{1}^{(t)}, \\ldots, A_{m}^{(t)}\\right)=\\bigcap_{i} A_{i}^{(t)}\n$$\n\nsee Section 3. The robust combiner will adopt a more sophisticated notion of $t_{c}$; see Section 5 . In each of these cases, the important structural properties of the construction are captured by the rank function $r_{C}$.\n\n\\subsection*{2.3 Transaction Validity and Settlement}\nIn the discussion below, we assume a general notion of transaction validity that can be decided inductively: given a ledger $\\mathbf{L}$, the validity of a transaction $t x \\in \\mathbf{L}$ is determined by the transactions in the state $\\mathbf{L}\\lceil\\operatorname{tx}\\rceil$ of $\\mathbf{L}$ up to tx and their ordering. Intuitively, only valid transactions are then accounted for when interpreting the state of the ledger on the application level. The canonical example of such a validity predicate in the case of so-called UTXO transactions is formalized for completeness in Appendix B. Note that protocols such as Bitcoin allow only valid transactions to enter the ledger; as the Bitcoin ledger is represented by a simple chain it is possible to evaluate the validity predicate upon block creation for each included transaction. This may not be the case for more general ledgers, such as the result of applying one of our combiners or various DAG-based constructions.\n\nWhile we focus our analysis on persistence and liveness as given in Definition 3, our broader goal is to study settlement. Intuitively, settlement is the delay necessary to ensure that a transaction included in some $A^{(t)}$ enters the dynamic ledger and, furthermore, that its validity stabilizes for all future times.\n\nDefinition 5 (Absolute settlement). For a dynamic ledger $\\mathbf{D} \\stackrel{\\text { def }}{=} \\mathbf{L}^{(0)}, \\mathbf{L}^{(1)}, \\ldots$ we say that a transaction $t x \\in$ $A^{(\\tau)} \\cap \\mathbf{L}^{(t)}($ for $\\tau \\leq t)$ is (absolutely) settled at time $t$ iffor all $\\ell \\geq t$ we have: (i) $\\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil \\subseteq \\mathbf{L}^{(\\ell)}$, (ii) the linear orders $<_{\\mathbf{L}^{(t)}}$ and $<_{\\mathbf{L}^{(t)}}$ agree on $\\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil$, and (iii) for any $\\mathrm{tx}^{\\prime} \\in \\mathbf{L}^{(e)}$ such that $\\mathrm{tx}^{\\prime}{<_{\\mathbf{L}}(t)} \\mathrm{tx}$ we have $\\mathrm{tx}^{\\prime} \\in \\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil$.\n\nNote that for any absolutely settled transaction, its validity is determined and it is guaranteed to remain unchanged in the future.\n\nIt will be useful to also consider a weaker notion of relative settlement of a transaction: Intuitively, tx is relatively settled at time $t$ if we have the guarantee that no (conflicting) transaction $\\mathrm{tx}^{\\prime}$ that is not part of the ledger at time $t$ can possibly eventually precede $t x$ in the ledger ordering.',
'paper-title: Casper the Friendly Finality Gadget\n\n\\documentclass[10pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage[version=4]{mhchem}\n\\usepackage{stmaryrd}\n\\usepackage{graphicx}\n\\usepackage[export]{adjustbox}\n\\graphicspath{ {./images/} }\n\\usepackage{hyperref}\n\\hypersetup{colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan,}\n\\urlstyle{same}\n\n\\title{Casper the Friendly Finality Gadget }\n\n\\author{Vitalik Buterin and Virgil Griffith\\\\\nEthereum Foundation}\n\\date{}\n\n\n%New command to display footnote whose markers will always be hidden\n\\let\\svthefootnote\\thefootnote\n\\newcommand\\blfootnotetext[1]{%\n \\let\\thefootnote\\relax\\footnote{#1}%\n \\addtocounter{footnote}{-1}%\n \\let\\thefootnote\\svthefootnote%\n}\n\n%Overriding the \\footnotetext command to hide the marker if its value is `0`\n\\let\\svfootnotetext\\footnotetext\n\\renewcommand\\footnotetext[2][?]{%\n \\if\\relax#1\\relax%\n \\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else%\n \\if?#1\\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else\\svfootnotetext[#1]{#2}\\fi%\n \\fi\n}\n\n\\begin{document}\n\\maketitle\n\n\n\\begin{abstract}\nWe introduce Casper, a proof of stake-based finality system which overlays an existing proof of work blockchain. Casper is a partial consensus mechanism combining proof of stake algorithm research and Byzantine fault tolerant consensus theory. We introduce our system, prove some desirable features, and show defenses against long range revisions and catastrophic crashes. The Casper overlay provides almost any proof of work chain with additional protections against block reversions.\n\\end{abstract}\n\n\\section*{1. Introduction}\nOver the past few years there has been considerable research into "proof of stake" (PoS) based blockchain consensus algorithms. In a PoS system, a blockchain appends and agrees on new blocks through a process where anyone who holds coins inside of the system can participate, and the influence an agent has is proportional to the number of coins (or "stake") it holds. This is a vastly more efficient alternative to proof of work (PoW) "mining" and enables blockchains to operate without mining\'s high hardware and electricity costs.\\\\[0pt]\nThere are two major schools of thought in PoS design. The first, chain-based proof of stake[1, 2], mimics proof of work mechanics and features a chain of blocks and simulates mining by pseudorandomly assigning the right to create new blocks to stakeholders. This includes Peercoin[3], Blackcoin[4], and Iddo Bentov\'s work[5].\\\\[0pt]\nThe other school, Byzantine fault tolerant (BFT) based proof of stake, is based on a thirty-year-old body of research into BFT consensus algorithms such as PBFT[6]. BFT algorithms typically have proven mathematical properties; for example, one can usually mathematically prove that as long as $>\\frac{2}{3}$ of protocol participants are following the protocol honestly, then, regardless of network latency, the algorithm cannot finalize conflicting blocks. Repurposing BFT algorithms for proof of stake was first introduced by Tendermint[7], and has modern inspirations such as [8]. Casper follows this BFT tradition, though with some modifications.\n\n\\subsection*{1.1. Our Work}\nCasper the Friendly Finality Gadget is an overlay atop a proposal mechanism-a mechanism which proposes blocks ${ }^{1}$. Casper is responsible for finalizing these blocks, essentially selecting a unique chain which represents the canonical transactions of the ledger. Casper provides safety, but liveness depends on the chosen proposal mechanism. That is, if attackers wholly control the proposal mechanism, Casper protects against finalizing two conflicting checkpoints, but the attackers could prevent Casper from finalizing any future checkpoints.\\\\\nCasper introduces several new features that BFT algorithms do not necessarily support:',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.5 | 0.5714 | 0.5714 | 0.5 | 0.4286 |
| cosine_accuracy@3 | 0.7857 | 0.7857 | 0.7857 | 0.75 | 0.6786 |
| cosine_accuracy@5 | 0.8571 | 0.8214 | 0.8214 | 0.8214 | 0.75 |
| cosine_accuracy@10 | 0.8571 | 0.8571 | 0.8571 | 0.8571 | 0.8214 |
| cosine_precision@1 | 0.5 | 0.5714 | 0.5714 | 0.5 | 0.4286 |
| cosine_precision@3 | 0.2619 | 0.2619 | 0.2619 | 0.25 | 0.2262 |
| cosine_precision@5 | 0.1714 | 0.1643 | 0.1643 | 0.1643 | 0.15 |
| cosine_precision@10 | 0.0857 | 0.0857 | 0.0857 | 0.0857 | 0.0821 |
| cosine_recall@1 | 0.5 | 0.5714 | 0.5714 | 0.5 | 0.4286 |
| cosine_recall@3 | 0.7857 | 0.7857 | 0.7857 | 0.75 | 0.6786 |
| cosine_recall@5 | 0.8571 | 0.8214 | 0.8214 | 0.8214 | 0.75 |
| cosine_recall@10 | 0.8571 | 0.8571 | 0.8571 | 0.8571 | 0.8214 |
| **cosine_ndcg@10** | **0.7032** | **0.7277** | **0.7285** | **0.6935** | **0.6316** |
| cosine_mrr@10 | 0.6512 | 0.6849 | 0.6857 | 0.6396 | 0.5696 |
| cosine_map@100 | 0.6553 | 0.6886 | 0.6893 | 0.6425 | 0.5757 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 278 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 278 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 26.06 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 512 tokens</li><li>mean: 512.0 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>How does ByzCoin ensure that microblock chains remain consistent even in the presence of keyblock conflicts?</code> | <code>paper-title: Enhancing Bitcoin Security and Performance with Strong Consistency via Collective Signing<br><br>Figure 3: ByzCoin blockchain: Two parallel chains store information about the leaders (keyblocks) and the transactions (microblocks)\\<br>becomes two separate parallel blockchains, as shown in Fig. 3. The main blockchain is the keyblock chain, consisting of all mined blocks. The microblock chain is a secondary blockchain that depends on the primary to identify the era in which every microblock belongs to, i.e., which miners are authoritative to sign it and who is the leader of the era.<br><br>Microblocks. A microblock is a simple block that the current consensus group produces every few seconds to represent newly-committed transactions. Each microblock includes a set of transactions and a collective signature. Each microblock also includes hashes referring to the previous microblock and keyblock: the former to ensure total ordering, and the latter indicating which consensus group window and l...</code> |
| <code>What are the primary ways in which Bitcoin users can be deanonymized, and why is network-layer deanonymization particularly concerning?</code> | <code>paper-title: Bitcoin and Cryptocurrency Technologies<br><br>This is is exactly what the Fistful of Bitcoins researchers (and others since) have done. They bought a variety of things, joined mining pools, used Bitcoin exchanges, wallet services, and gambling sites, and interacted in a variety of other ways with service providers, compromising 344 transactions in all.<br><br>In Figure 6.5, we again show the clusters of Figure 6.4, but this times with the labels attached. While our guesses about Mt. gox and Satoshi Dice were correct, the researchers were able to identify numerous other service providers that would have been hard to identify without transacting with them.\\<br>\includegraphics[max width=\textwidth, center]{2025_01_02_05ab7f20e06e1a41e145g-175}<br><br>Figure 6.5. Labeled clusters. By transacting with various Bitcoin service providers, Meiklejohn et al. were able to attach real world identities to their clusters.<br><br>Identifying individuals. The next question is: can we do the same thing for indivi...</code> |
| <code>What is the main purpose of the ledger indistinguishability and transaction non-malleability properties in the Zerocash protocol?</code> | <code>paper-title: Zerocash: Decentralized Anonymous Payments from Bitcoin<br><br>Ledger indistinguishability is formalized by an experiment L-IND that proceeds as follows. First, a challenger samples a random bit $b$ and initializes two DAP scheme oracles $\mathcal{O}_{0}^{\text {DAP }}$ and $\mathcal{O}_{1}^{\text {DAP }}$, maintaining ledgers $L_{0}$ and $L_{1}$. Throughout, the challenger allows $\mathcal{A}$ to issue queries to $\mathcal{O}_{0}^{\text {DAP }}$ and $\mathcal{O}_{1}^{\text {DAP }}$, thus controlling the behavior of honest parties on $L_{0}$ and $L_{1}$. The challenger provides the adversary with the view of both ledgers, but in randomized order: $L_{\text {Left }}:=L_{b}$ and $L_{\text {Right }}:=L_{1-b}$. The adversary's goal is to distinguish whether the view he sees corresponds to $\left(L_{\text {Left }}, L_{\text {Right }}\right)=\left(L_{0}, L_{1}\right)$, i.e. $b=0$, or to $\left(L_{\text {Left }}, L_{\text {Right }}\right)=\left(L_{1}, L_{0}\right)$, i.e. $b=1$.<br><br>At eac...</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-----:|:----:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0 | 1 | 0.6975 | 0.6930 | 0.6760 | 0.6960 | 0.6098 |
| 2.0 | 2 | 0.7258 | 0.7082 | 0.7062 | 0.6935 | 0.6231 |
| 3.0 | 3 | 0.7079 | 0.7270 | 0.7067 | 0.6935 | 0.6184 |
| 4.0 | 4 | 0.7032 | 0.7277 | 0.7285 | 0.6935 | 0.6316 |
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.5.1+cu118
- Accelerate: 1.2.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-base-en-v1.5", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:278", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "How does Bitcoin's P2P network prevent malicious nodes from flooding the network with invalid blocks or transactions?", "sentences": ["paper-title: The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments\n\n\\subsection*{8.4 Payment Routing}\nIt is theoretically possible to build a route map implicitly from observing 2 -of-2 multisigs on the blockchain to build a routing table. Note, however, this is not feasible with pay-to-script-hash transaction outputs, which can be resolved out-of-band from the bitcoin protocol via a third party routing service. Building a routing table will become necessary for large operators (e.g. BGP, Cjdns). Eventually, with optimizations, the network will look a lot like the correspondent banking network, or Tier-1 ISPs. Similar to how packets still reach their destination on your home network connection, not all participants need to have a full routing table. The core Tier-1 routes can be online all the time - while nodes at the edges, such as average users, would be connected intermittently.\n\nNode discovery can occur along the edges by pre-selecting and offering partial routes to well-known nodes.\n\n\\subsection*{8.5 Fees}\nLightning Network fees, which differ from blockchain fees, are paid directly between participants within the channel. The fees pay for the time-value of money for consuming the channel for a determined maximum period of time, and for counterparty risk of non-communication.\n\nCounterparty risk for fees only exist with one's direct channel counterparty. If a node two hops away decides to disconnect and their transaction gets broadcast on the blockchain, one's direct counterparties should not broadcast on the blockchain, but continue to update via novation with a new Commitment Transaction. See the Decrementing Timelocks entry in the HTLC section for more information about counterparty risk.\n\nThe time-value of fees pays for consuming time (e.g. 3 days) and is conceptually equivalent to a gold lease rate without custodial risk; it is the time-value for using up the access to money for a very short duration. Since certain paths may become very profitable in one direction, it is possible for fees to be negative to encourage the channel to be available for those profitable paths.\n\n\\section*{9 Risks}\nThe primary risks relate to timelock expiration. Additionally, for core nodes and possibly some merchants to be able to route funds, the keys must be held online for lower latency. However, end-users and nodes are able to keep their private keys firewalled off in cold storage.\n\n\\subsection*{9.1 Improper Timelocks}\nParticipants must choose timelocks with sufficient amounts of time. If insufficient time is given, it is possible that timelocked transactions believed to be invalid will become valid, enabling coin theft by the counterparty. There is a trade-off between longer timelocks and the time-value of money. When writing wallet and Lightning Network application software, it is necessary to ensure that sufficient time is given and users are able to have their transactions enter into the blockchain when interacting with non-cooperative or malicious channel counterparties.\n\n\\subsection*{9.2 Forced Expiration Spam}\nForced expiration of many transactions may be the greatest systemic risk when using the Lightning Network. If a malicious participant creates many channels and forces them all to expire at once, these may overwhelm block data capacity, forcing expiration and broadcast to the blockchain. The result would be mass spam on the bitcoin network. The spam may delay transactions to the point where other locktimed transactions become valid.\n\nThis may be mitigated by permitting one transaction replacement on all pending transactions. Anti-spam can be used by permitting only one transaction replacement of a higher sequence number by the inverse of an even or odd number. For example, if an odd sequence number was broadcast, permit a replacement to a higher even number only once. Transactions would use the sequence number in an orderly way to replace other transactions. This mitigates the risk assuming honest miners. This attack is extremely high risk, as incorrect broadcast of Commitment Transactions entail a full penalty of all funds in the channel.\n\nAdditionally, one may attempt to steal HTLC transactions by forcing a timeout transaction to go through when it should not. This can be easily mitigated by having each transfer inside the channel be lower than the total transaction fees used. Since transactions are extremely cheap and do not hit the blockchain with cooperative channel counterparties, large transfers of value can be split into many small transfers. This attempt can only work if the blocks are completely full for a long time. While it is possible to mitigate it using a longer HTLC timeout duration, variable block sizes may become common, which may need mitigations.\n\nIf this type of transaction becomes the dominant form of transactions which are included on the blockchain, it may become necessary to increase the block size and run a variable blocksize structure and timestop flags as described in the section below. This can create sufficient penalties and disincentives to be highly unprofitable and unsuccessful for attackers, as attackers lose all their funds from broadcasting the wrong transaction, to the point where it will never occur.", "paper-title: OmniLedger: A Secure, Scale-Out, Decentralized Ledger via Sharding\n\nFig. 11: Bootstrap bandwidth consumption with state blocks.\\\\[0pt]\nto create the UTXO state. For this experiment, we reconstructed Bitcoin's blockchain [5], [41] and created a parallel OmniLedger blockchain with weekly state blocks.\n\nFigure 11 depicts the bandwidth overhead of a validator that did not follow the state for the first 100 days. As we can see, the state block approach is better if the validator is outdated for more than 19 days or 2736 Bitcoin blocks.\n\nThe benefit might not seem substantial for Bitcoin, but in OmniLedger, 2736 blocks are created in less than 8 hours, meaning that for one day-long epochs, the state block approach is significantly better. If a peak throughput is required and 16 MB blocks are deployed, we expect reduced bandwidth consumption close to two orders of magnitude.\n\n\\section*{IX. Related Work}\nThe growing interests in scaling blockchains have produced a number of prominent systems that we compare in Table IV. ByzCoin [32] is a first step to scalable BFT consensus, but cannot scale-out. Elastico is the first open scale-out DL, however, it suffers from performance and security challenges that we have already discussed in Section II. RSCoin [16] proposes sharding as a scalable approach for centrally banked cryptocurrencies. RSCoin relies on a trusted source of randomness for sharding and auditing, making its usage problematic in trustless settings. Furthermore, to validate transactions, each shard has to coordinate with the client and instead of running BFT, RSCoin uses a simple two-phase commit, assuming that safety is preserved if the majority of validators is honest. This\n\nTABLE IV: Comparison of Distributed Ledger Systems\n\n\\begin{center}\n\\begin{tabular}{ccccccc}\n\\hline\nSystem & Scale-Out & \\begin{tabular}{c}\nCross-Shard \\\\\nTransaction Atomicity \\\\\n\\end{tabular} & State Blocks & \\begin{tabular}{c}\nMeasured Scalability \\\\\n(\\# of Validators) \\\\\n\\end{tabular} & \\begin{tabular}{c}\nEstimated \\\\\nTime to Fail \\\\\n\\end{tabular} & \\begin{tabular}{c}\nMeasured \\\\\nLatency \\\\\n\\end{tabular} \\\\\n\\hline\nRSCoin [16] & In Permissioned & Partial & No & 30 & N/A & 1 sec \\\\\nElastico [34] & In PoW & No & No & 1600 & 1 hour & 800 sec \\\\\nByzCoin [32] & No & N/A & No & 1008 & 19 years & 40 sec \\\\\nBitcoin-NG [21] & No & N/A & No & 1000 & N/A & 600 sec \\\\\nPBFT [9], [11] & No & N/A & No & 16 & N/A & 1 sec \\\\\nNakamoto [36] & No & N/A & No & 4000 & N/A & 600 sec \\\\\nOmniLedger & Yes & Yes & Yes & 2400 & 68.5 years & 1.5 sec \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\napproach, however, does not protect from double spending attempts by a malicious client colluding with a validator.\n\nIn short, prior solutions [16], [32], [34] achieve only two out of the three desired properties; decentralization, long-term security, and scale-out, as illustrated in Figure 1. OmniLedger overcomes this issue by scaling out, as far as throughput is concerned, and by maintaining consistency to the level required for safety, without imposing a total order.\n\nBitcoin-NG scales Bitcoin without changing the consensus algorithm by observing that the PoW process does not have to be the same as the transaction validation process; this results in two separate timelines: one slow for PoW and one fast for transaction validation. Although Bitcoin-NG significantly increases the throughput of Bitcoin, it is still susceptible to the same attacks as Bitcoin [24], [3].\n\nOther efforts to scale blockchains include: Tendermint [9], a protocol similar to PBFT for shard-level consensus that does not scale due to its similarities to PBFT, and the Lightning Network [40], an off-chain payment protocol for Bitcoin (also compatible to OmniLedger); it limits the amount of information committed to the blockchain.", "Datatype: lecture_note, Title: Lecture 4: Peer to Peer Networking for Blockchains\n\nHow does broadcast take only $O(\\log N)$ steps? We first need to understand the gossip-flooding-based broadcast protocol. The flooding protocol mimics the spread of an epidemic. Once a node is ``infected\", it infects its peers and forever stay's infected. It is easy to see that the spread of information will happen exponentially; hence the information will take $O(\\log N)$ hops to spread to all nodes. To formally understand the spread, we note that $d$-regular graphs with $d\\geq 3$ are an \\textit{expander graph} for large sizes ($|V|$) with high probability. An expander graph is a connected but sparse graph ($|E|=O(|V|)$) with the following property: $|\\partial A| \\geq \\epsilon|A|$ for any connected sub-graph $A$ with $|A|<0.5|V|$. Here, $|\\partial A|$ refers to the number of vertices outside $A$ with at least one neighbor in $A$. A gossip message originates with $A(0)$ as the broadcasting node with $|A(0)|=1$, in the next hop, it will spread to $\\partial A(0)$ with $|A(1)|\\geq (1+\\epsilon)|A(0)|$. This recursion continues and we have $|A(k)|\\geq(1+\\epsilon)^kA(0)$. Thus, the number of steps to reach half the number of nodes is logarithmic in the number of nodes. It can be shown that the other half of the nodes can also be covered in $O(\\log N)$ time.\n\n\n%Engineering issues (peer discovery, bootstrap, churn). Implementation connections (to the lab experiment). Validation of tx, blocks. How does that impact networking? What about skipping validation and doing cut-through routing? Compact blocks. (RR)\n\n\\section*{Bitcoin P2P network: A systems view}\nIn Bitcoin, peers connect to each other and communicate using the TCP protocol. The codebase allows for eight outgoing connections and up to 117 incoming connections. The network has a high churn rate (rate at which users enter/leave the system); hence, the node must be ready to connect to new peers. Moreover, to ensure that the peers we are connecting to are chosen randomly, the node keeps a large list of nodes running Bitcoin in the form of their (IP, port) tuple and establishes a connection to one of them randomly when a slot opens up. \n\nHow does a node bootstrap its list of peers? This happens by connecting to a set of DNS seed nodes. The seed nodes are not heavily decentralized; hence completely relying on the peer list provided by them is not advisable. On connecting to the initial set of peers, a node asks its neighbors for their peer list using {\\tt getAddr} and {\\tt Addr} messages. The node keeps refreshing its peer list regularly by exchanging peer lists with its peers. \n\nTransmission of all block and transactions happen through the inventory message {\\tt inv}, on receiving an {\\tt inv} message the node checks if it has the block or the transaction in its local storage. If not, it sends the {\\tt getData} message to fetch those blocks and transactions from the peer. Since block sizes are relatively large, block transmission can optionally happen in 2 stages. On receiving the {\\tt inv} message, the node may ask for headers first using {\\tt getHeaders} and ask for complete blocks only if a header chain is established. This header-first block transmission increases queries but can decrease the net bandwidth usage. It may also prevent nodes from accepting PoW invalid blocks since the node can check from the header whether PoW is valid. \n\nWe saw in the previous lecture that some nodes might be malicious. A question that may arise is: what stops malicious nodes from flooding the network with invalid blocks and transactions (i.e., with invalid PoW and/or signatures)? Such flooding will saturate the network and increase transmission delay to unacceptable levels. Such an attack is prevented by a simple design decision, forward message to peers only after validating the message; i.e., a node sends an {\\tt inv} block message to its peers only after validating the block. If the adversary creates an invalid block, the block will not be propagated beyond one honest node. Additionally, nodes maintain their peers' reputation using some predefined heuristics; if a peer misbehaves (say by sending a transaction with invalid signatures), its reputation is downgraded and after a certain lower threshold is disconnected."]}, {"source_sentence": "How does the blockchain protocol ensure that all honest players converge on the same chain?", "sentences": ["paper-title: Blockchain CAP Theorem Allows User-Dependent Adaptivity and Finality\n\nDefinition 3 (Potential starting value for period $p$ ). A value $v$ that has been next-voted by $t+1$ honest nodes for period $p-1$.\n\nDefinition 4 (Committed value for period $p$ ). A value $v$ that has been cert-voted by $2 t+1$ nodes for period $p$.\n\nDefinition 5 (Potentially committed value for period $p$ ). A value $v$ that has been cert-voted by $t+1$ honest nodes for period $p$.\n\nAlthough we slightly altered Algorand BA protocol (which is highlighted in red in Appendix A), we note that our modification does not break the safety of the protocol or cause any deadlock in Lemma 1 and Lemma 2, At a high level, the validity check only causes less soft-votes from honest nodes, which is indistinguishable with the case where the leader is malicious and no value receives at least $2 t+1$ soft-votes in some period. Therefore, the safety and deadlock-free property remain.\n\nLemma 1 (Asynchronous Safety, CP0). Even when the network is partitioned, the protocol ensures safety of the system so that no two honest nodes will finish one iteration of the protocol with different outputs.\n\nProof. The following properties hold even during a network partition.\n\n\\begin{itemize}\n \\item By quorum intersection, as each honest node only soft-votes one value, then at most one value is committed or potentially committed for each period $p$ in one iteration.\n \\item If a value $v$ is potentially committed for period $p$, then only $v$ can receive $2 t+1$ next-votes for period $p$. Thus, the unique potential starting value for period $p+1$ is $v$.\n \\item If a period $p$ has a unique potential starting value $v \\neq \\perp$, then only $v$ can be committed for period $p$. Moreover, honest nodes will only next-vote $v$ for period $p$, so the unique potential starting value for period $p+1$ is also $v$. Inductively, any future periods $p^{\\prime}>p$ can only have $v$ as a potential starting value. Thus, once a value is potentially committed, it becomes the unique value that can be committed or potentially committed for any future period, and no two honest nodes will finish this iteration of the protocol with different outputs.\n\\end{itemize}\n\nLemma 2 (Asynchronous Deadlock-freedom). As long as messages will be delivered eventually, an honest node can always leave period p, either by entering a higher period or meeting the halting condition for the current iteration.\n\nProof. We first prove that there can never exist $2 t+1$ next-votes for two different non- $\\perp$ values from the same period $p$ by induction.\n\nStart with $p=1$. Note that every honest node sets $s t_{i}^{1}=\\perp$ and at most one value (say $v$ ) could receive more than $2 t+1$ soft-votes. Therefore only value $v$ and $\\perp$ could potentially receive more than $2 t+1$ next-votes in period 1 . Note that it is possible that both $v$ and $\\perp$ receive more than $2 t+1$ next-votes: all the honest nodes could next-vote for $\\perp$ in Step 4 and then next-vote for $v$ in Step 5 after seeing the $2 t+1$ soft-votes for $v$.\n\nAssume that the claim holds for period $p-1(p \\geq 2)$ : there exist at most two values each of which has $2 t+1$ next-votes for period $p-1$, and one of them is necessarily $\\perp$. Then there are three possible cases:", "paper-title: A Scalable Proof-of-Stake Blockchain in the Open Setting * \\\\ (or, How to Mimic Nakamoto's Design via Proof-of-Stake)\n\nCommon prefix. Our analysis is based on the common prefix analysis of core-chain. The core-chain can achieve common prefix as we discussed. The opportunity for malicious players to destroy common prefix probability is to generate different blockchain for the same core-chain. For the malicious players can sign different blocks for one block-core, this will allow him to fork the blockchain. So the malicious players can fork the blockchain when they are chosen to generate block. However, with the property of hash function, the malicious players can not generate two blocks with same hash value. When an honest player is chosen to extend a block, he will only support one blockchain. Then all of the honest players will converge on one blockchain.\\\\\nCorollary 6.4 (Common prefix). Consider the blockchain protocol $\\Pi^{\\text {main }}$. Consider $\\alpha^{\\star}=\\lambda \\beta^{\\star}$, $\\lambda>1$, and $\\delta>0$. Consider two honest PoS-players, P in round $r$ and $\\mathrm{P}^{\\prime}$ in round $r^{\\prime}$, with the local best PoS blockchains $\\tilde{\\mathcal{C}}, \\tilde{\\mathcal{C}}^{\\prime}$, respectively, where $r^{\\prime} \\geq r$. Then we have $\\operatorname{Pr}\\left[\\tilde{\\mathcal{C}}[1, \\ell] \\preceq \\tilde{\\mathcal{C}}^{\\prime}\\right] \\geq 1-e^{-\\Omega(\\kappa)}$, where $\\ell=\\operatorname{len}(\\mathcal{C})-\\Theta(\\kappa)$.\n\nProof. As we discussed, $\\tilde{\\mathcal{C}}$ and $\\tilde{\\mathcal{C}}^{\\prime}$ are associated with core-chains $\\mathcal{C}$ and $\\mathcal{C}^{\\prime}$ respectively. From Corollary 5.6 we know that $\\operatorname{Pr}\\left[\\mathcal{C}[1, \\ell] \\preceq \\mathcal{C}^{\\prime}\\right] \\geq 1-e^{-\\Omega(\\kappa)}$.\n\nBased on the assumption that $\\alpha^{\\star}=\\lambda \\beta^{\\star}$ and $\\lambda>1$, we can have that the malicious players are not able to generate more than $\\Theta(\\kappa)$ blocks before an honest player is chosen to generate block with high probability. All of the honest players will converge on the same chain. Put them together, we have $\\operatorname{Pr}\\left[\\tilde{\\mathcal{C}}[1, \\ell] \\preceq \\tilde{\\mathcal{C}}^{\\prime}\\right] \\geq 1-e^{-\\Omega(\\kappa)}$ where $\\ell=\\operatorname{len}(\\mathcal{C})-\\Theta(\\kappa)$.\n\nChain soundness. A new player will accept a blockchain (in which the corresponding corechain is included). The proof idea for achieving chain soundness property of our blockchain protocol directly follows that for the core-chain protocol. We have the following statement.\\\\\nCorollary 6.5 (Chain soundness). Consider the blockchain protocol $\\Pi^{\\text {main }}$. Consider for every round, $\\alpha=\\lambda \\beta, \\lambda>1$, and $\\delta>0$. There are two honest PoS-players, $\\mathrm{P}^{\\prime}$ and $\\mathrm{P}^{\\prime \\prime}$ in round $r$, with the local best PoS blockchains $\\tilde{\\mathcal{C}}^{\\prime}$ and $\\tilde{\\mathcal{C}}^{\\prime \\prime}$, respectively. Let $\\mathrm{P}^{\\prime}$ be a new player and $\\mathrm{P}^{\\prime \\prime}$ be an existing player in round $r$. Then we have $\\tilde{\\mathcal{C}}^{\\prime}[\\neg \\kappa] \\preceq \\tilde{\\mathcal{C}}^{\\prime \\prime}$ and $\\tilde{\\mathcal{C}}^{\\prime \\prime}[\\neg \\kappa] \\preceq \\tilde{\\mathcal{C}}^{\\prime}$.", "Datatype: lecture_note, Title: Lecture 9: Scaling Latency\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures/Prism_main.pdf}\n\\end{center}\n\n\\caption{Factorizing the blocks into three types of blocks: proposer blocks, transaction blocks and voter blocks.}\n\\label{fig:prism}\n\n\\end{figure}\n\nJust as in {\\sf Prism 1.0}, the \\textit{proposer} blocktree in {\\sf Prism} anchors the blockchain. Each proposer block contains a list of reference links to \\textit{transaction} blocks that contain transactions, as well as a single reference to a parent proposer block. Honest nodes mine proposer blocks following the longest chain rule in the proposer tree.\nWe define the *level* of a proposer block as its distance from the genesis proposer block, and the *height* of the proposer tree as the maximum level that contains any proposer blocks. To determine the ordering of proposer blocks (and thus transaction blocks and transactions), we elect one \\textit{leader} proposer block from each level. The sequence of leader blocks up to the height of the proposer tree is called the \\textit{leader sequence}, and is determined by the *voter* chains. Note that the leader blocks do not need to follow the chain structure of the proposer blocks because otherwise deadlock may occur if conflicting blocks (i.e., two proposer blocks not on one chain) are determined as leader blocks. \n\nIn {\\sf Prism}, there are $m$ voter chains, where $m \\gg 1$ is a fixed parameter chosen by the system designer. The larger the $m$, the more parallel the voting process and hence the shorter the latency of confirmation. In general $m$ is chosen as large as network bandwidth and memory management issues are manageable. For example, $m=1000$ is chosen in the \\href{https://arxiv.org/pdf/1909.11261.pdf}{full-stack implementation} of Prism. New voter blocks are mined on each voter chain according to the longest chain rule. A voter block votes for a proposer block by containing a reference link to that proposer block, with the requirements that: (1) a vote is valid only if the voter block is in the longest chain of its voter tree; (2) each voter chain votes for one and only one proposer block at each level; (3) each voter block votes for all the proposer levels that have not been voted by its parent. The leader block at each level is the one that has the largest number of votes among all the proposer blocks at the same level (ties can be broken by the hash of the proposer blocks). The elected leader blocks then provide a unique ordering of the transaction blocks to form the final ledger. \n\n{\\sf Prism} also uses cryptographic sortition to prevent the adversary from focusing its mining power on a specific type of blocks or on a specific voter chain. A miner first forms a ``superblock\" containing $m+2$ parts: a transaction block, a proposer block and a voter block on the $i$-th voter tree ($1\\leq i \\leq m$). We say a superblock is successfully mined if \n\\begin{equation}\n Hash({\\sf nonce}, {\\sf superblock}) < T_{\\rm tx} + T_{\\rm prop} + m T_{\\rm v}. \n\\label{eq:sortition}\n\\end{equation}\nFurther, every successfully mined superblock is identified as a transaction block, a proposer block or a voter block based on the hash output: \n\n\n* identify the superblock as a proposer block if the hash output is less than $T_{\\rm prop}$;\n* identify the superblock as a transaction block if the hash output is in the range $[T_{\\rm prop}, T_{\\rm tx} + T_{\\rm prop})$;\n* identify the superblock as a voter block on the $i$-th voter tree ($1\\leq i \\leq m$) if the hash output is in the range $[T_{\\rm tx} + T_{\\rm prop} + (i-1) T_{\\rm v}, T_{\\rm tx} + T_{\\rm prop} + i T_{\\rm v} )$;"]}, {"source_sentence": "What is the role of the 2/3-GHOST function in the GRANDPA finality gadget?", "sentences": ["paper-title: GRANDPA: a Byzantine Finality Gadget\n\n\\subsection*{2.3 Preliminaries}\nNetwork model : We will be using the partially synchronous network model introduced by 7] and in particular the gossip network variant used in [5]. We assume that any message sent or received by an honest participant reaches all honest participants within time $T$, but possibly only after some Global Synchronisation Time GST. Concretely, any message sent or received by some honest participant at time $t$ is received by all honest participants by time GST $+T$ at the latest.\n\nVoters: For each voting step, there is a set of $n$ voters. We will frequently need to assume that for each such step, at most $f<n / 3$ voters are Byzantine. We need $n-f$ of voters to agree on finality. Whether or not block producers ever vote, they will need to be participants who track the state of the protocol.\n\nVotes: A vote is a block hash, together with some metadata such as round number and the type of vote, such as prevote or precommit, all signed with a voter's private key.\n\nRounds: Each participant has their own idea of what is the current round number. Every prevote and precommit has an associated round number. Honest voters only vote once (for each type of vote) in each round and do not vote in earlier rounds after later ones. Participants need to keep track of which block they see as currently being the latest finalised block and an estimate of which block could have been finalised in the last round.\n\nFor block $B$, we write chain $(B)$ for the chain whose head is $B$. The block number, $n(B)$ of a block $B$ is the length of chain $(B)$. For blocks $B^{\\prime}$ and $B$, we say $B$ is later than $B^{\\prime}$ if it has a higher block number. We write $B>B^{\\prime}$ or that $B$ is descendant of $B^{\\prime}$ for $B, B^{\\prime}$ appearing in the same blockchain with $B^{\\prime}$ later i.e. $B^{\\prime} \\in$ chain $(B)$ with $n(B)>n\\left(B^{\\prime}\\right) . B \\geq B^{\\prime}$ and $B \\leq B^{\\prime}$ are similar except allowing $B=B^{\\prime}$. We write $B \\sim B^{\\prime}$ or $B$ and $B^{\\prime}$ are on the same chain if $B<B^{\\prime}, B=B^{\\prime}$ or $B>B^{\\prime}$; and $B \\nsim B^{\\prime}$ or $B$ and $B^{\\prime}$ are not on the same chain if there is no such chain.\n\nBlocks are ordered as a tree with the genesis block as root. So any two blocks have a common ancestor but two blocks not on the same chain do not have a common descendant. A vote $v$ for a block $B$ by a voter $V$ is a message signed by $V$ containing the blockhash of $B$ and meta-information like the round numbers and the type of vote.\n\nA voter equivocates in a set of votes $S$ if they have cast multiple different votes in $S$. We call a set $S$ of votes safe if the number of voters who equivocate in $S$ is at most $f$. We say that $S$ has a supermajority for a block $B$ if the set of voters who either have a vote for blocks $\\geq B$ or equivocate in $S$ has size at least $(n+f+1) / 2$. We count equivocations as votes for everything so that observing a vote is monotonic, meaning that if $S \\subset T$ then if $S$ has a supermajority for $B$ so does $T$, while being able to ignore yet more equivocating votes from an equivocating voter.\n\nFor our finality gadget (GRANDPA) we use the ghost [13] eventual consensus algorithm as $F$. The 2/3-GHOST function $g(S)$ takes a set $S$ of votes and returns the block $B$ with highest block number such that $S$ has a supermajority for $B$. If there is no such block, then it returns 'nil'. Note that, if $S$ is safe, then we can compute $g(S)$ by starting at the genesis block and iteratively looking for a child of our current block with a supermajority, which must be unique if it exists. Thus we have:\n\nLemma 2.5. Let $T$ be a safe set of votes. Then", "paper-title: Zexe: Enabling Decentralized Private Computation\n\nIn sum, proofs of predicates' satisfiability are produced via a SNARK over $E_{\\text {BLS }}$, and proofs for the NP relation $\\mathcal{R}_{\\mathrm{e}}$ are produced via a zkSNARK over $E_{\\mathrm{CP}}$. The matching fields between the two curves ensure that the former proofs can be efficiently verified.\n\nProblem 3: Cocks-Pinch curves are costly. While the curve $E_{\\mathrm{CP}}$ was chosen to facilitate efficient checking of proofs over $E_{\\mathrm{BLS}}$, the curve $E_{\\mathrm{CP}}$ is at least $2 \\times$ more expensive (in time and space) than $E_{\\mathrm{BLS}}$ simply because $E_{\\mathrm{CP}}$ 's base field has about twice as many bits as $E_{\\mathrm{BLS}}$ 's base field. Checks in the NP relation $\\mathcal{R}_{\\mathrm{e}}$\\\\\nthat are not directly related to proof checking are now unnecessarily carried over a less efficient curve.\\\\\nSolution 3: split relations across two curves. We split $\\mathcal{R}_{\\mathrm{e}}$ into two NP relations $\\mathcal{R}_{\\mathrm{BLS}}$ and $\\mathcal{R}_{\\mathrm{CP}}$ (see Fig. 14), with the latter containing just the proof check and the former containing all other checks. We can then use a zkSNARK over the curve $E_{\\text {BLS }}$ (an efficient curve) to produce proofs for $\\mathcal{R}_{\\mathrm{BLS}}$, and a zkSNARK over $E_{\\mathrm{CP}}$ (the less efficient curve) to produce proofs for $\\mathcal{R}_{\\mathrm{CP}}$. This approach significantly reduces the running time of DPC.Execute (producing proofs for the checks in $\\mathcal{R}_{\\mathrm{BLS}}$ is more efficient over $E_{\\mathrm{BLS}}$ than over $E_{\\mathrm{CP}}$ ), at the expense of a modest increase in transaction size (a transaction now includes a zkSNARK proof over $E_{\\mathrm{BLS}}$ in addition to a proof over $E_{\\mathrm{CP}}$ ). An important technicality that must be addressed is that the foregoing split relies on certain secret information to be shared across the NP relations, namely, the identities of relevant predicates and the local data. We can store this information in suitable commitments that are part of the NP instances for the two NP relations (doing this efficiently requires some care as we discuss below).", "paper-title: Ouroboros Praos: An adaptively-secure, semi-synchronous proof-of-stake blockchain\n\nwhere $\\alpha_{\\mathcal{H}}$ denotes the total relative stake of the honest parties. Note that this bound applies to all static adversaries $\\mathcal{A}$ that corrupt no more than a $1-\\alpha_{\\mathcal{H}}$ fraction of all stake. With this in mind, we define the dominant distribution as follows.\\\\\nDefinition 13 (The dominant distribution $\\mathcal{D}_{\\alpha}^{f}$ ). For two parameters $f$ and $\\alpha$, define $\\mathcal{D}_{\\alpha}^{f}$ to be the distribution on strings $w \\in\\{0,1, \\perp\\}^{R}$ that independently assigns each $w_{i}$ so that\n\n\n\\begin{align*}\np_{\\perp} \\triangleq \\operatorname{Pr}\\left[w_{i}\\right. & =\\perp]=1-f, \\\\\np_{0} \\triangleq \\operatorname{Pr}\\left[w_{i}\\right. & =0]=\\phi(\\alpha) \\cdot(1-f), \\quad \\text { and } \\tag{9}\\\\\np_{1} \\triangleq \\operatorname{Pr}\\left[w_{i}\\right. & =1]=1-p_{\\perp}-p_{0} .\n\\end{align*}\n\n\nThe distribution $\\mathcal{D}_{\\alpha}^{f}$ \"dominates\" $\\mathcal{D}_{\\mathcal{Z}, \\mathcal{A}}^{f}$ for any static adversary $\\mathcal{A}$ that corrupts no more than a relative $1-\\alpha$ share of the total stake, in the sense that nonempty slots are more likely to be tainted under $\\mathcal{D}_{\\alpha}^{f}$ than they are under $\\mathcal{D}_{\\mathcal{Z}, \\mathcal{A}}^{f}$.\n\nTo make this relationship precise, we introduce the partial order $\\preceq$ on the set $\\{\\perp, 0,1\\}$ so that $x \\preceq y$ if and only if $x=y$ or $y=1$. We extend this partial order to $\\{\\perp, 0,1\\}^{R}$ by declaring $x_{1} \\ldots x_{R} \\preceq y_{1} \\ldots y_{R}$ if and only if $x_{i} \\preceq y_{i}$ for each $i$. Intuitively, the relationship $x \\prec y$ asserts that $y$ is \"more adversarial than\" $x$; concretely, any legal fork for $x$ is also a legal fork for $y$. Finally, we define a notion of stochastic dominance for distributions on characteristic strings, and $\\alpha$-dominated adversaries.\n\nDefinition 14 (Stochastic dominance). We say that a subset $E \\subseteq\\{\\perp, 0,1\\}^{R}$ is monotone if $x \\in E$ and $x \\preceq y$ implies that $y \\in E$. Let $\\mathcal{D}$ and $\\mathcal{D}^{\\prime}$ be two distributions on the set of characteristic strings $\\{\\perp, 0,1\\}^{R}$. Then we say that $\\mathcal{D}^{\\prime}$ dominates $\\mathcal{D}$, written $\\mathcal{D} \\preceq \\mathcal{D}^{\\prime}$, if $\\operatorname{Pr}{ }_{\\mathcal{D}}[E] \\leq \\operatorname{Pr}_{\\mathcal{D}^{\\prime}}[E]$ for every monotone set $E$. An adversary $\\mathcal{A}$ is called $\\alpha$-dominated if the distribution $\\mathcal{D}_{\\mathcal{Z}, \\mathcal{A}}^{f}$ that it induces on the set of characteristic strings satisfies $\\mathcal{D}_{\\mathcal{Z}, \\mathcal{A}}^{f} \\preceq \\mathcal{D}_{\\alpha}^{f}$.\n\nAs noted above, this notion of stochastic dominance is consistent with the chain-theoretic definitions of interest, in the sense that failures of the abstract chain properties form monotone events. We record this in the lemma below."]}, {"source_sentence": "What does the paper conclude about the relationship between latency and security in the Nakamoto Consensus protocol?", "sentences": ["paper-title: Close Latency-Security Trade-off for the Nakamoto Consensus\n\nEvidently, if the infinite sums in (2) and (10) are replaced by partial sums for numerical evaluation, the resulting (tighter) security level remains unachievable.\n\n\\subsection*{3.1 Remarks}\nTheorems 3.5 and 3.6 assume the delay $\\Delta>0$. The bounds therein still apply if we set $\\Delta=0$, but are slightly looser than the bounds in Theorems 3.3 and 3.4 for the zero-delay case.\n\nIt is important to include the time of interest $s$ in Definitions 3.1 and 3.2. The \"bad events\" for security breach depend on $s$ as well as the latency $t$. These well-defined events are concerned with block mining times, not how blocks form blockchains. ${ }^{3}$\n\nWe note that a number of previous analyses on the Nakamoto consensus assume a finite lifespan of the protocol [1, 10], that is, a maximum round number is defined, at which round the protocol terminates. The probability of consistency depends on the maximum round number. In contrast, this paper does not assume a finite lifespan. Theorem 3.5 states that, barring a small probability event, confirmed blocks remain permanently in all miners' longest blockchains into the arbitrary future.\n\nEven though we provide the same security guarantee for every blockchain after the confirmation latency $t$, no one can simultaneously guarantee the same for all blocks that will ever be confirmed.\n\n\\footnotetext{${ }^{3}$ To be rigorous, we do not make claims such as \"the blockchain/protocol/system satisfies consistency or liveness properties with probability ...\" because those properties themselves are not events in the probability space defined here.\n}\n\\includegraphics[max width=\\textwidth, center]{2025_01_02_447c9a776bd74bcc1f99g-04}\n\nFigure 1: Bitcoin's latency-security trade-off with $\\alpha+\\beta=$ $1 / 600$ blocks per second and $\\Delta=10$ seconds.\n\nThis is a simple consequence of Murphy's Law: If an adversary keeps trying new episodes of attacks, with probability 1 a bad event will eventually occur to revert some confirmed honest blocks.\n\nFor technical convenience, we regard a block in a miner's longest blockchain to be confirmed after a certain amount of time elapses since the block is mined or enters the miner's view. Nakamoto [22] originally proposed confirming a block after it is sufficiently deep in an honest miner's longest blockchain. We believe both confirmation rules are easy to use in practice. And the two confirmation rules imply each other in probability (see Appendix A for further discussion).\n\n\\subsection*{3.2 Numerical Examples}\nThe latency-security trade-off under several different sets of parameters is plotted in Figure 1. The mining rate is set to Bitcoin's one block per 600 seconds, or $\\alpha+\\beta=1 / 600$ blocks/second. The propagation delay bound is assumed to be $\\Delta=10$ seconds. The latency upper and lower bounds are computed using Theorems 3.5 and 3.6, respectively. In Figure 1, all bounds appear to be exponential for all but very small latency and high error probabilities. This implies the exponential bound (7) is a good approximation of (5) in Theorem 3.5 for the typical range of parameters of interest here.\n\nIt is instructive to examine concrete data points in Figure 1: If the adversarial share of the total network mining rate is $10 \\%$ $(\\alpha: \\beta=9: 1)$, then a confirmation time of four hours is sufficient to achieve $10^{-3}$ security level, and a ten-hour confirmation achieves $10^{-9}$ security level. These results are about two hours away from the corresponding lower bounds. Also, for every additional hour of latency, the security improves by a factor of approximately 20 . If the adversarial share of the mining rate increases to $25 \\%(\\alpha: \\beta=3: 1)$, then 10 hours 40 minutes and 28 hours 45 minutes of confirmation times achieve $10^{-3}$ and $10^{-9}$ security levels, respectively, and the gap between the upper and lower bounds is between five and seven hours. In general, the gap is proportionally insignificant at high security levels but can be otherwise at low security levels. For given mining rates, the gaps are similar at different security levels. This indicates the lower bound (10) is also approximately exponential with a slightly steeper exponent than that of the upper bound.", "paper-title: Ledger Combiners for Fast Settlement\n\n$$\n\\begin{aligned}\n\\delta\\left(\\operatorname{PoW}_{p}^{m}(x), \\mathrm{IPoW}_{p / m}^{m}(x)\\right) & =\\frac{1}{2} \\sum_{s \\in\\{0,1\\}^{m}}\\left|\\operatorname{Pr}\\left[\\operatorname{PoW}_{p}^{m}(x)=s\\right]-\\operatorname{Pr}\\left[\\operatorname{IPoW}_{p / m}^{m}(x)=s\\right]\\right| \\\\\n& =\\sum_{\\substack{s \\in\\{0,1)^{m} \\\\\n\\mathrm{hw}(s)=1}}\\left(\\operatorname{Pr}\\left[\\operatorname{PoW}_{p}^{m}(x)=s\\right]-\\operatorname{Pr}\\left[\\operatorname{IPoW}_{p / m}^{m}(x)=s\\right]\\right) \\\\\n& \\leq m \\cdot\\left[\\frac{p}{m}-\\frac{p}{m}\\left(1-\\frac{p}{m}\\right)^{m-1}\\right] \\leq p[1-(1-p)]=p^{2}\n\\end{aligned}\n$$\n\nas desired, where the last inequality follows by Bernoulli inequality.\n\nThe above lemma already justifies the use of $\\mathrm{PoW}_{p}^{m}$ for achieving subindependence in practical scenarios. To observe this, note that the use of $\\mathrm{IPoW}_{p / m}^{m}$ would lead to full independence of the individual PoW lotteries, and by Lemma 7 the real execution with $\\mathrm{PoW}_{p}^{m}$ will only differ from this ideal behavior with probability at most $Q \\cdot p^{2}$, where $Q$ is the total number of PoW-queries. With current values of $p \\approx 10^{-22}$ in e.g., Bitcoin ${ }^{2}$, and the block creation time adjusting to 10 minutes, this difference would manifest on expectation in about $10^{18}$ years. Note that any future increase of the total mining difficulty while maintaining the block creation time would only increase this period.\n\nNonetheless, in Appendix F we give a more detailed analysis of $\\mathrm{PoW}_{p}^{m}$ that shows that, loosely speaking, $m$ parallel executions of Bitcoin using PoW ${ }_{p}^{m}$ as their shared PoW oracle achieve $\\varepsilon$-subindependence for $\\varepsilon$ negligible in the security parameter.\n\n\\subsection*{4.2 Realizing Rank via Timestamped Blockchains}\nAn important consideration when deploying our virtual ledger construction over existing blockchains is how to realize the notion of rank. We note that typical Nakamoto-style PoS blockchains (e.g., the Ouroboros family, Snow White) assume a common notion of time among the participants and explicitly label blocks with slot numbers with a direct correspondence to absolute time. These slot numbers (or, preferably, a notion of common time associated with each slot number) directly afford a notion of rank that provides the desired persistence and liveness guarantees. To formalize this property, we introduce the notion of a timestamped blockchain.\n\nDefinition 11. A timestamped blockchain is one satisfying the following conventions:\n\n\\begin{itemize}\n \\item Block timestamps. Every block contains a declared timestamp.\n \\item Monotonicity. In order for a block to be considered valid, its timestamp can be no less than the timestamps of all prior blocks in the blockchain. (Thus valid blockchains consist of blocks in monotonically increasing order.)\n\\end{itemize}\n\nInformally, we say that an algorithm is a timestamped blockchain algorithm if it calls for participants to broadcast timestamped blockchains and to \"respect timestamps.\" More specifically, the algorithm satisfies the following:\n\n\\begin{itemize}\n \\item Faithful honest timestamping. Honest participants always post blocks with timestamps determined by their local clocks.\n \\item Ignore future blocks. Honest participants ignore blocks that contain a timestamp which is greater than their local time by more than a fixed constant. (These blocks might be considered later when the local clock of the participant \"catches up\" with the timestamp.)\n\\end{itemize}", "paper-title: A Scalable Proof-of-Stake Blockchain in the Open Setting * \\\\ (or, How to Mimic Nakamoto's Design via Proof-of-Stake)\n\nLet $\\ell$ be the length of core-chain $\\mathcal{C}$. In our design, only the elected PoS-players are allowed to generate new block-cores (to extend the core-chain). Now, each registered PoS-player P will work on the right \"context\" which consists of the latest block-core in the longest corechain and the current time; formally context $:=\\left\\langle h^{\\text {prev }}\\right.$, round $\\rangle$ where $\\mathcal{C}[\\ell]$ is the latest blockcore in the longest core-chain $\\mathcal{C}$, and $h^{\\text {prev }}$ is the identity returned by the functionality $\\mathcal{F}_{\\text {rCERT }}$ for $\\mathcal{C}[\\ell]$, and round denotes the current time. The PoS-player P may query $\\mathcal{F}_{\\text {rCERT }}$ by command (Elect, P , context, $\\mathcal{C}$ ) to see if he is selected to extend $\\mathcal{C}$. If the PoS-player P is selected (with certain probability $p$ ), he would receive a message (Elected, $\\mathrm{P}, h, \\sigma, \\mathrm{~b}$ ) from $\\mathcal{F}_{\\text {rCERT }}$ such that $\\mathrm{b}=1$. Once receiving the signature $\\sigma$ from the functionality, the PoS-player P defines a new block-core $B:=\\left\\langle\\left\\langle h^{\\text {prev }}, h\\right.\\right.$, round $\\left.\\rangle, \\mathrm{P}, \\sigma\\right\\rangle$, updates his local core-chain $\\mathcal{C}$ and then broadcasts the local core-chain to the network. Please refer to Figure 3 for more details of our core-chain protocol.\n\nNote that here PoS-players have access to the functionality $\\mathcal{F}_{\\text {rCERT }}$. The players need to register to the functionality $\\mathcal{F}_{\\text {rCERT }}$ before querying the functionality.\n\nThe best core-chain strategy. Our proof-of-stake core-chain protocol $\\Pi^{\\text {core }}$ uses the subroutine BestCore to single out the best valid core-chain from a set of core-chains. Now we describe the rules of selecting the best core-chain. Roughly speaking, a core-chain is the best one if it is the current longest valid core-chain. The BestCore subroutine takes as input, a core-chain set $\\mathbb{C}^{\\prime}$ and the current time information round'. Intuitively, the subroutine validates all $\\mathcal{C} \\in \\mathbb{C}^{\\prime}$, then finds the valid longest core-chain.\n\nIn more detail, BestCore proceeds as follows. On input the current set of core-chains $\\mathbb{C}^{\\prime}$ and the current time information round', and for each core-chain $\\mathcal{C}$, the subroutine then evaluates every block-core of the core-chain $\\mathcal{C}$ sequentially. Let $\\ell$ be the length of $\\mathcal{C}$. Starting from the head of $\\mathcal{C}$, for every block-core $\\mathcal{C}[i]$, for all $i \\in[\\ell]$, in the core-chain $\\mathcal{C}$, the BestCore subroutine (1) ensures that $\\mathcal{C}[i]$ is linked to the previous block-core $\\mathcal{C}[i-1]$ correctly, and (2) tests if the\n\n\\section*{Protocol $\\Pi^{\\text {core }}$}\nInitially, a set $\\mathcal{P}_{0}$ of players are registered to the functionality $\\mathcal{F}_{\\text {rCERT }}$, where $\\mathcal{P}_{0} \\subseteq \\mathcal{P}$. Initially, for each $\\mathrm{P} \\in \\mathcal{P}$, set $\\mathcal{C}:=\\emptyset$, and state $:=\\emptyset$.\n\nUpon receiving message (Input-Stake, P ) from the environment $z$ at round round, the PoS-player $\\mathrm{P} \\in$ $\\mathcal{P}$, with local state state, proceeds as follows.\n\n\\begin{enumerate}\n \\item Select the best local PoS core-chain:\n\\end{enumerate}"]}, {"source_sentence": "What is the difference between absolute settlement and relative settlement for transactions in a ledger?", "sentences": ["paper-title: Ledger Combiners for Fast Settlement\n\nSince the above requirements are formulated independently for each $t$, it is well-defined to treat $\\mathrm{C}[\\cdot]$ as operating on ledgers rather than dynamic ledgers; we sometimes overload the notation in this sense.\n\nLooking ahead, our amplification combiner will consider $\\mathrm{t}_{\\mathrm{C}}\\left(\\mathbf{L}_{1}^{(t)}, \\ldots, \\mathbf{L}_{m}^{(t)}\\right)=\\bigcup_{i} \\mathbf{L}_{i}^{(t)}$ along with two related definitions of $\\mathrm{a}_{\\mathrm{C}}$ :\n\n$$\n\\mathrm{a}_{\\mathrm{C}}\\left(A_{1}^{(t)}, \\ldots, A_{m}^{(t)}\\right)=\\bigcup_{i} A_{i}^{(t)} \\quad \\text { and } \\quad \\mathrm{a}_{\\mathrm{C}}\\left(A_{1}^{(t)}, \\ldots, A_{m}^{(t)}\\right)=\\bigcap_{i} A_{i}^{(t)}\n$$\n\nsee Section 3. The robust combiner will adopt a more sophisticated notion of $t_{c}$; see Section 5 . In each of these cases, the important structural properties of the construction are captured by the rank function $r_{C}$.\n\n\\subsection*{2.3 Transaction Validity and Settlement}\nIn the discussion below, we assume a general notion of transaction validity that can be decided inductively: given a ledger $\\mathbf{L}$, the validity of a transaction $t x \\in \\mathbf{L}$ is determined by the transactions in the state $\\mathbf{L}\\lceil\\operatorname{tx}\\rceil$ of $\\mathbf{L}$ up to tx and their ordering. Intuitively, only valid transactions are then accounted for when interpreting the state of the ledger on the application level. The canonical example of such a validity predicate in the case of so-called UTXO transactions is formalized for completeness in Appendix B. Note that protocols such as Bitcoin allow only valid transactions to enter the ledger; as the Bitcoin ledger is represented by a simple chain it is possible to evaluate the validity predicate upon block creation for each included transaction. This may not be the case for more general ledgers, such as the result of applying one of our combiners or various DAG-based constructions.\n\nWhile we focus our analysis on persistence and liveness as given in Definition 3, our broader goal is to study settlement. Intuitively, settlement is the delay necessary to ensure that a transaction included in some $A^{(t)}$ enters the dynamic ledger and, furthermore, that its validity stabilizes for all future times.\n\nDefinition 5 (Absolute settlement). For a dynamic ledger $\\mathbf{D} \\stackrel{\\text { def }}{=} \\mathbf{L}^{(0)}, \\mathbf{L}^{(1)}, \\ldots$ we say that a transaction $t x \\in$ $A^{(\\tau)} \\cap \\mathbf{L}^{(t)}($ for $\\tau \\leq t)$ is (absolutely) settled at time $t$ iffor all $\\ell \\geq t$ we have: (i) $\\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil \\subseteq \\mathbf{L}^{(\\ell)}$, (ii) the linear orders $<_{\\mathbf{L}^{(t)}}$ and $<_{\\mathbf{L}^{(t)}}$ agree on $\\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil$, and (iii) for any $\\mathrm{tx}^{\\prime} \\in \\mathbf{L}^{(e)}$ such that $\\mathrm{tx}^{\\prime}{<_{\\mathbf{L}}(t)} \\mathrm{tx}$ we have $\\mathrm{tx}^{\\prime} \\in \\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil$.\n\nNote that for any absolutely settled transaction, its validity is determined and it is guaranteed to remain unchanged in the future.\n\nIt will be useful to also consider a weaker notion of relative settlement of a transaction: Intuitively, tx is relatively settled at time $t$ if we have the guarantee that no (conflicting) transaction $\\mathrm{tx}^{\\prime}$ that is not part of the ledger at time $t$ can possibly eventually precede $t x$ in the ledger ordering.", "paper-title: Casper the Friendly Finality Gadget\n\n\\documentclass[10pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage[version=4]{mhchem}\n\\usepackage{stmaryrd}\n\\usepackage{graphicx}\n\\usepackage[export]{adjustbox}\n\\graphicspath{ {./images/} }\n\\usepackage{hyperref}\n\\hypersetup{colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan,}\n\\urlstyle{same}\n\n\\title{Casper the Friendly Finality Gadget }\n\n\\author{Vitalik Buterin and Virgil Griffith\\\\\nEthereum Foundation}\n\\date{}\n\n\n%New command to display footnote whose markers will always be hidden\n\\let\\svthefootnote\\thefootnote\n\\newcommand\\blfootnotetext[1]{%\n \\let\\thefootnote\\relax\\footnote{#1}%\n \\addtocounter{footnote}{-1}%\n \\let\\thefootnote\\svthefootnote%\n}\n\n%Overriding the \\footnotetext command to hide the marker if its value is `0`\n\\let\\svfootnotetext\\footnotetext\n\\renewcommand\\footnotetext[2][?]{%\n \\if\\relax#1\\relax%\n \\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else%\n \\if?#1\\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else\\svfootnotetext[#1]{#2}\\fi%\n \\fi\n}\n\n\\begin{document}\n\\maketitle\n\n\n\\begin{abstract}\nWe introduce Casper, a proof of stake-based finality system which overlays an existing proof of work blockchain. Casper is a partial consensus mechanism combining proof of stake algorithm research and Byzantine fault tolerant consensus theory. We introduce our system, prove some desirable features, and show defenses against long range revisions and catastrophic crashes. The Casper overlay provides almost any proof of work chain with additional protections against block reversions.\n\\end{abstract}\n\n\\section*{1. Introduction}\nOver the past few years there has been considerable research into \"proof of stake\" (PoS) based blockchain consensus algorithms. In a PoS system, a blockchain appends and agrees on new blocks through a process where anyone who holds coins inside of the system can participate, and the influence an agent has is proportional to the number of coins (or \"stake\") it holds. This is a vastly more efficient alternative to proof of work (PoW) \"mining\" and enables blockchains to operate without mining's high hardware and electricity costs.\\\\[0pt]\nThere are two major schools of thought in PoS design. The first, chain-based proof of stake[1, 2], mimics proof of work mechanics and features a chain of blocks and simulates mining by pseudorandomly assigning the right to create new blocks to stakeholders. This includes Peercoin[3], Blackcoin[4], and Iddo Bentov's work[5].\\\\[0pt]\nThe other school, Byzantine fault tolerant (BFT) based proof of stake, is based on a thirty-year-old body of research into BFT consensus algorithms such as PBFT[6]. BFT algorithms typically have proven mathematical properties; for example, one can usually mathematically prove that as long as $>\\frac{2}{3}$ of protocol participants are following the protocol honestly, then, regardless of network latency, the algorithm cannot finalize conflicting blocks. Repurposing BFT algorithms for proof of stake was first introduced by Tendermint[7], and has modern inspirations such as [8]. Casper follows this BFT tradition, though with some modifications.\n\n\\subsection*{1.1. Our Work}\nCasper the Friendly Finality Gadget is an overlay atop a proposal mechanism-a mechanism which proposes blocks ${ }^{1}$. Casper is responsible for finalizing these blocks, essentially selecting a unique chain which represents the canonical transactions of the ledger. Casper provides safety, but liveness depends on the chosen proposal mechanism. That is, if attackers wholly control the proposal mechanism, Casper protects against finalizing two conflicting checkpoints, but the attackers could prevent Casper from finalizing any future checkpoints.\\\\\nCasper introduces several new features that BFT algorithms do not necessarily support:", "paper-title: Bitcoin and Cryptocurrency Technologies\n\nInterestingly, these concerns have an analogy in the realm of voting. It's illegal in the United States and many other nations for individuals to sell their vote. Arguably participating in a pool controlled by someone else is akin to selling your vote in the Bitcoin consensus protocol.\n\nTechnical requirements for pools. Recall that mining pools appear to be an emergent phenomenon. There's no evidence that Satoshi was thinking of mining pools at the time of Bitcoin's original design. It wasn't apparent for a few years that efficient pools could be run between many individuals who don't know or trust each other.\n\nAs we saw in Chapter 5, mining pools typically work by designating a pool operator with a well-known public key. Each of the participating miners mines as usual but sends in shares to the pool operator. These shares are \"near misses\" or \"partial solutions\" which would be valid solutions at a lower difficulty level. This shows the pool operator how much work the miner is performing. Whenever one of the pool participants finds a valid block, the pool operator then distributes the rewards amongst the pool participants based on the number of shares they have submitted. As we discussed in Chapter 5, there are many formulas for dividing the revenue up, but all mining pools follow this basic structure.\n\nThe existence of pools thus relies on at least two technical properties of Bitcoin. The first is that it's easy for a miner to prove (probabilistically) how much work they are doing by submitting shares. By choosing a low enough threshold for shares, miners can easily prove how much work they are performing with arbitrary precision regardless of the actual difficulty of finding an valid block. This facet of mining puzzles appears difficult to change, given that we need a puzzle that can be created with arbitrary difficulty.\n\nSecond, pool members can easily prove to the pool operator that they're following the rules and working to find valid blocks which would reward the pool as a whole. This works because the pool's public key is committed to in the coinbase transaction included in the block's Merkle tree of transactions. Once a miner finds a block or even a share, they can't change which public key is the recipient of the newly minted coins.\n\nBlock discarding attacks. There is one weakness in this scheme for implementing mining pools: there is nothing to to enforce that participating miners actually submit valid blocks to the pool manager in the event that they find them. Suppose that there's a pool member that's upset with a large mining pool. They can participate in the pool by mining and submitting shares just like normal, but in the event that they actually find a valid block that would reward the pool they simply discard it and don't tell the pool operator about it.\n\nThis attack reduces the pool's overall mining power as none of the attacker's work is contributing towards finding valid blocks. However the attacker will still be rewarded as they appear to be submitting valid shares and simply getting unlucky to not find any valid blocks. If the mining pool is designed to be revenue-neutral (that is, all mining rewards are redistributed back to participants) then this attack can cause the pool to run at a loss.\n\nThis attack is sometimes called a vigilante or sabotage attack and is considered a form of vandalism because the attack appears to be costly for both the attacker and the pool. The attacker loses money because every block they discard would have led to some proportion of the block rewards being returned to them. Of course, the attacker still gets rewards for other puzzle solutions that are found.\n\nIt appears that a rational attacker wouldn't employ this strategy, since they would lose money without gaining anything tangible. It turns out (quite surprisingly) that there are cases where this strategy can be profitable, as discussed in the box below. But in any case, we want to design an entirely new mining puzzle formulation that ensures this strategy is always profitable.\n\nSidebar: block discarding attacks between pools. People assumed for years that it can't be profitable for a participant to discard valid blocks found on behalf of the pool. It turns out this strategy can be profitable if one mining pool uses it to attack another. This was proposed apocryphally many times and first thoroughly analyzed in a paper by Ittay Eyal in 2015.\n\nLet's consider a simple case: suppose two mining pools, $A$ and $B$, each have $50 \\%$ of the total mining capacity. Now suppose B uses half of its mining power ( $25 \\%$ of the total capacity) to mine as a member in pool A, but discards all blocks found. We can show, in a simplified model, that B will now earns $5 / 9$ of the total rewards, greater than the $50 \\%$ it would earn by mining normally. In this simple case, dedicating half of its mining power to attacking can be shown to be the optimal strategy for pool B."]}], "model-index": [{"name": "SentenceTransformer based on BAAI/bge-base-en-v1.5", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7857142857142857, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8571428571428571, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8571428571428571, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.26190476190476186, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17142857142857146, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08571428571428573, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7857142857142857, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8571428571428571, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8571428571428571, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7032219246239031, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6511904761904762, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6553083095766022, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5714285714285714, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7857142857142857, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8214285714285714, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8571428571428571, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5714285714285714, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.26190476190476186, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1642857142857143, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08571428571428573, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5714285714285714, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7857142857142857, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8214285714285714, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8571428571428571, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7276726753008987, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6848639455782314, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6886316064887493, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5714285714285714, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7857142857142857, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8214285714285714, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8571428571428571, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5714285714285714, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.26190476190476186, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1642857142857143, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08571428571428573, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5714285714285714, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7857142857142857, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8214285714285714, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8571428571428571, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7284895986499949, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6857142857142858, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6893267651888342, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.75, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8214285714285714, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8571428571428571, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.24999999999999997, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1642857142857143, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08571428571428573, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.75, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8214285714285714, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8571428571428571, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.6935204558400861, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6395833333333334, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6425405844155845, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.42857142857142855, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.6785714285714286, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.75, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8214285714285714, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.42857142857142855, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.22619047619047614, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.15000000000000005, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08214285714285716, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.42857142857142855, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.6785714285714286, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.75, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8214285714285714, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.631592589549331, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.5696428571428572, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.5757306413556414, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,795 |
RichardErkhov/gplsi_-_Aitana-6.3B-4bits
|
RichardErkhov
| null |
[
"safetensors",
"bloom",
"4-bit",
"bitsandbytes",
"region:us"
] | 2025-03-09T07:59:59Z |
2025-03-09T08:01:59+00:00
| 2 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Aitana-6.3B - bnb 4bits
- Model creator: https://huggingface.co/gplsi/
- Original model: https://huggingface.co/gplsi/Aitana-6.3B/
Original model description:
---
license: apache-2.0
language:
- ca
- va
tags:
- FLOR
- Bloom
- Aitana
- Catalan
- Valencian
pipeline_tag: text-generation
---
# AITANA-6.3B
<img src="https://hf.fast360.xyz/production/uploads/639873bb315923c0d5b4c883/6EPbzDJbYtyX_oS15K6jF.png" width="50%" height="50%"/>
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [Demo](#demo)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
</details>
## Model description
**AITANA-6.3B** is a text generation model for causal language modeling with a decoder-only architecture.
It has been trained from continuous pre-training based on [FLOR-6.3B](https://huggingface.co/projecte-aina/FLOR-6.3B), with emphasis on data (listed below)
in **Valencian** (similar to Catalan) language. Concretely, a total of 1.304 million tokens per epoch in this first version of the model and two epochs over the data. The **Political and Administrative domains** are highly represented in this model's version.
This model is based on FLOR-6.3B as the basis for training and uses the same tokenizer.
## Intended uses and limitations
As **FLOR-6.3B**, **AITANA-6.3B** is a base model that can be used for causal language modeling, it can be used as is for text generation,
although **fine/instruction-tuning on specific tasks is recommended for its final use**.
This language model has been trained with data in a formal register, namely related to the
administrative and political domain, so it is expected that using it in text-generation tasks
will produce text in this same format.
## Demo
In the following link, you can access an interactive demo to test the text generation in the language model:
Demo link(https://llm-aitana.gplsi.es/)
In the demo, you can adjust the number of words generated as well as the decoding technique to be used by
the model (top p, top k) and other parameters such as temperature.
## How to use
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
input_text = "Les corts valencianes han pres la decisió de"
model_id = "gplsi/Aitana-6.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
generation = generator(
input_text,
do_sample=True,
top_k=10,
eos_token_id=tokenizer.eos_token_id,
)
print(f"Result: {generation[0]['generated_text']}")
```
## Training
### Training data
The training corpus has been obtained using web scraping on public data from different sources such as the
[Official Gazette of the University of Alicante (BOUA)](https://www.boua.ua.es/ca), [the Official Gazette of the Generalitat Valenciana (DOGV)](https://dogv.gva.es/va) and accurate data provided by
[the Valencian Courts (DSCV and DSCCV)](https://www.cortsvalencianes.es/ca-va/). Giving a total of 1.304 million tokens, according to the following table.
Dataset | Language | Words (per-epoch) | Epochs | Total Tokens |
|---------------------|----------|--------------------|--------------|--------------|
DSCV | va | 31.98M | 2 | 57.05M |
DSCCV | va | 45.59M | 2 | 80.91M |
BOUA | va | 11.65M | 2 | 29.02M |
DOGV | va | 301.59M | 2 | 982.33M |
DOGCV | va | 54.92M | 2 | 154.32M |
Several of the downloaded sources have already been used in the FLOR-6.3B training, so the date of data collection for the previous
model has been taken into account and those web pages have been scraped from that date.
Information on the datasets used for training is shown below:
- BOUA: Official Bulletin of the University of Alicante. In this case, we are dealing with documents issued by the University of Alicante in Valencian about grants, calls issued by the university, regulations, resolutions of laws that affect the university environment, and corrections of errors of these same documents issued previously.
- DOGV: Official Journal of the Generalitat Valenciana. This dataset contains official communiqués of different kinds issued by the Generalitat Valenciana, with data entirely in Valencian. It mainly talks about measures taken in the legal field, approval of laws, and public sector communiqués. In this case, we have 18 different documents covering communiqués from 1998 to 2018 and three more recent documents with data from 2019 to 2023.
- DOGCV: in this case, it is the Official Journal of the Generalitat Valenciana, but only the historical documents from 1980 to 1997.
- DSCV: Journal of the Valencian Parliament. This dataset contains transcriptions of the different interventions made during the plenary sessions in the Valencian Parliament by the different participants. It covers data from 2001 to 1999 up to 2022, each transcript comprises a .html file.
- DSCCV: this is a dataset of the Valencian Parliament diary, centered on transcriptions of the different commissions held. As in the previous case, it is separated into one file for each transcription.
### Training parameters
During the training of the model, a high context window was desired when generating text, so it was decided to use an input size of 2048
tokens and a minimum context window of 512 in case of truncating the input sequences. 80% of the data obtained was used for the training stage,
while 20% was used during the evaluation stage. A summary of the parameters used during training can be seen in the following table:
Parameter | Value |
|---------------------|---|
Epochs | 1 |
Learning Rate | 2e-5 |
Warmup Steps | 0 |
Precision | bf-16 |
Weight decay | 1e-1 |
Training Fraction | 0.8 |
Evaluation Fraction | 0.2 |
Input size (tokens) | 2048 |
Minimum context window (tokens) | 512 |
Training time (hours/epoch) | 40 |
### Devices
A total of 4 A100 graphics cards with a maximum capacity of 40 GB each were used to train the model. This meant a training time of approximately
40 hours per epoch. Using a mini-batch size of size 2 and a batch size of size 32 to calculate backpropagation.
### Distributed Training Strategy
A distributed training strategy called Fully Sharded Data Parallel ([FSDP](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html))
has been used. With this, the entire model has been loaded among the 4 A100s available for training with a mini-batch size of size 2 as
previously discussed.
### Languages
In addition to the data already used for the training of FLOR-6.3B, data completely in **Valencian** from the sources mentioned in
the previous section has been used.
## Evaluation
The model has been evaluated using the loss function and perplexity during the training stage and these metrics have also been
obtained during the evaluation stage. Due to the low amount of data, it was decided to evaluate
at the end of each epoch.
| Epoch | Mode | Loss | Perplexity |
|--------------|------------|----------|-------------|
| 1 | Training | 0.6944 | 2.111 |
| 1 | Evaluation | 0.247 | 1.28 |
| 2 | Training | 0.5335 | 1.705 |
| 2 | Evaluation | 0.4004 | 1.007 |
| 3 | Training | 0.4768 | 1.611 |
| 3 | Evaluation | 0.9141 | 1.007 |
| 4 | Training | 0.4586 | 1.582 |
| 4 | Evaluation | 0.125 | 1.007 |
### Results
In the following table, we can see the results obtained with different benchmarks in comparison with
the model used for continuous pre-training. The results have been obtained from the model pre-trained;
no instruction tuning or fine-tuning of any kind has been performed.
| Dataset | Lang. | Task | Metric | Aitana-6.3B | Flor-6.3B |
|------------------------------|--------|---------------------------|---------|-------------|-------------|
| Belebele Cat_latn | ca | Reading Comprehension | acc | **24.33** | 21.89 |
| CATCOLA | ca | Linguistic Acceptability | mcc | -0.04 | **0.04** |
| COPA | ca | Commonsense Reasoning | acc | 75.6 | **76.8** |
| XStoryCloze | ca | Commonsense Reasoning | f1 | **72.14** | 70.88 |
| OpenBookQA | ca | Question Answering | acc | **33.4** | **33.4** |
| Parafraseja | ca | Paraphrasing | acc | 61.7 | **62.38** |
| PAWS-X | ca | Paraphrasing | acc | 58.55 | **60.75** |
| PiQA | ca | Question Answering | acc | 69.8 | **70.51** |
| SiQA | ca | Question Answering | acc | 45.91 | **47.34** |
| ARC Easy | ca | Question Answering | acc | **63.93** | 59.68 |
| ARC Challenge | ca | Question Answering | acc | 33.45 | **33.53** |
| XQuAD | ca | Question Answering | f1 | 59.36 | **59.74** |
| COQCAT | ca | Question Answering | f1 | 63.42 | **66.2** |
| CatalanQA | ca | Question Answering | f1 | 71.42 | **73.24** |
| XNLI | ca | Natural Language Inference| acc | 48.8 | **50.24** |
| Teca | ca | Natural Language Inference| acc | 46.62 | **49.79** |
| WNLI | ca | Natural Language Inference| acc | **57.75** | 54.93 |
| caBreu Extractive | ca | Summarization | rouge1 | **50.94** | 36.21 |
| caBreu Abstractive | ca | Summarization | bleu | 5.27 | **7.11** |
| caBreu Extreme | ca | Summarization | bleu | 1.72 | **4.4** |
| Mgsm direct | ca | Math |exact match | **0.03** | 0 |
| VeritasQA Gen | ca | Truthfulness | bleu | 4.18 | **21.56**|
| VeritasQA MC1 | ca | Truthfulness | acc | **23.18** | 22.35 |
| VeritasQA MC2 | ca | Truthfulness | acc | 34.95 | **35.19**|
| Phrases ca-va | ca/va| Translation - Adaptation | bleu | 89.12 | **90.3** |
| Phrases va-ca | ca/va| Translation - Adaptation | bleu | **93.23** | **92.99**|
| Belebele Cat_latn | es | Reading Comprehension | acc | **25.56** | 22.33 |
| PAWS | es | Paraphrasing | acc | 56.5 | **57.5** |
| Escola | es | Paraphrasing | acc | **0.02** | 0 |
| XStoryCloze | es | Commonsense Reasoning | f1 | 68.46 | **69.76** |
| XQuAD | es | Question Answering | f1 | 58.85 | **63.59** |
| XLSum | es | Summarization | bleu | 0.88 | **1.79** |
| MGSM Direct | es | Math |exact match | **0.02** | 0 |
| VeritasQA Gen | es | Truthfulness | bleu | 13.57 | **22.11**|
| VeritasQA MC1 | es | Truthfulness | acc | **23.46** | 21.51 |
| VeritasQA MC2 | es | Truthfulness | acc | **37.52** | 34.74|
| XNLI | es | Natural Language Inference| acc | 46.67 | **47.87**|
| WNLI | es | Natural Language Inference| acc | 53.52 | **56.34** |
| Phrases es-va | es/va| Translation | bleu | 70.28 | **70.52**|
| Phrases va-es | va/es| Translation | bleu | 79.63 | **79.87**|
## Additional information
### Author
Language and Information System Group [GPLSI](https://gplsi.dlsi.ua.es/)
### Contact
For further information, please send an email to [GPLSI](https://gplsi.dlsi.ua.es/)
### Copyright
Copyright(c) 2024 by GPLSI(https://gplsi.dlsi.ua.es/).
### License
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by [ILENIA](https://proyectoilenia.es/)-[VIVES](https://vives.gplsi.es/) project <<2022/TL22/00215334>>
### Disclaimer
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it) or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (GPLSI) be liable for any results arising from the use made by third parties.
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Aitana-6.3B - bnb 4bits
- Model creator: https://huggingface.co/gplsi/
- Original model: https://huggingface.co/gplsi/Aitana-6.3B/
Original model description:
---
license: apache-2.0
language:
- ca
- va
tags:
- FLOR
- Bloom
- Aitana
- Catalan
- Valencian
pipeline_tag: text-generation
---
# AITANA-6.3B
<img src="https://hf.fast360.xyz/production/uploads/639873bb315923c0d5b4c883/6EPbzDJbYtyX_oS15K6jF.png" width="50%" height="50%"/>
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [Demo](#demo)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
</details>
## Model description
**AITANA-6.3B** is a text generation model for causal language modeling with a decoder-only architecture.
It has been trained from continuous pre-training based on [FLOR-6.3B](https://huggingface.co/projecte-aina/FLOR-6.3B), with emphasis on data (listed below)
in **Valencian** (similar to Catalan) language. Concretely, a total of 1.304 million tokens per epoch in this first version of the model and two epochs over the data. The **Political and Administrative domains** are highly represented in this model's version.
This model is based on FLOR-6.3B as the basis for training and uses the same tokenizer.
## Intended uses and limitations
As **FLOR-6.3B**, **AITANA-6.3B** is a base model that can be used for causal language modeling, it can be used as is for text generation,
although **fine/instruction-tuning on specific tasks is recommended for its final use**.
This language model has been trained with data in a formal register, namely related to the
administrative and political domain, so it is expected that using it in text-generation tasks
will produce text in this same format.
## Demo
In the following link, you can access an interactive demo to test the text generation in the language model:
Demo link(https://llm-aitana.gplsi.es/)
In the demo, you can adjust the number of words generated as well as the decoding technique to be used by
the model (top p, top k) and other parameters such as temperature.
## How to use
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
input_text = "Les corts valencianes han pres la decisió de"
model_id = "gplsi/Aitana-6.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
generation = generator(
input_text,
do_sample=True,
top_k=10,
eos_token_id=tokenizer.eos_token_id,
)
print(f"Result: {generation[0]['generated_text']}")
```
## Training
### Training data
The training corpus has been obtained using web scraping on public data from different sources such as the
[Official Gazette of the University of Alicante (BOUA)](https://www.boua.ua.es/ca), [the Official Gazette of the Generalitat Valenciana (DOGV)](https://dogv.gva.es/va) and accurate data provided by
[the Valencian Courts (DSCV and DSCCV)](https://www.cortsvalencianes.es/ca-va/). Giving a total of 1.304 million tokens, according to the following table.
Dataset | Language | Words (per-epoch) | Epochs | Total Tokens |
|---------------------|----------|--------------------|--------------|--------------|
DSCV | va | 31.98M | 2 | 57.05M |
DSCCV | va | 45.59M | 2 | 80.91M |
BOUA | va | 11.65M | 2 | 29.02M |
DOGV | va | 301.59M | 2 | 982.33M |
DOGCV | va | 54.92M | 2 | 154.32M |
Several of the downloaded sources have already been used in the FLOR-6.3B training, so the date of data collection for the previous
model has been taken into account and those web pages have been scraped from that date.
Information on the datasets used for training is shown below:
- BOUA: Official Bulletin of the University of Alicante. In this case, we are dealing with documents issued by the University of Alicante in Valencian about grants, calls issued by the university, regulations, resolutions of laws that affect the university environment, and corrections of errors of these same documents issued previously.
- DOGV: Official Journal of the Generalitat Valenciana. This dataset contains official communiqués of different kinds issued by the Generalitat Valenciana, with data entirely in Valencian. It mainly talks about measures taken in the legal field, approval of laws, and public sector communiqués. In this case, we have 18 different documents covering communiqués from 1998 to 2018 and three more recent documents with data from 2019 to 2023.
- DOGCV: in this case, it is the Official Journal of the Generalitat Valenciana, but only the historical documents from 1980 to 1997.
- DSCV: Journal of the Valencian Parliament. This dataset contains transcriptions of the different interventions made during the plenary sessions in the Valencian Parliament by the different participants. It covers data from 2001 to 1999 up to 2022, each transcript comprises a .html file.
- DSCCV: this is a dataset of the Valencian Parliament diary, centered on transcriptions of the different commissions held. As in the previous case, it is separated into one file for each transcription.
### Training parameters
During the training of the model, a high context window was desired when generating text, so it was decided to use an input size of 2048
tokens and a minimum context window of 512 in case of truncating the input sequences. 80% of the data obtained was used for the training stage,
while 20% was used during the evaluation stage. A summary of the parameters used during training can be seen in the following table:
Parameter | Value |
|---------------------|---|
Epochs | 1 |
Learning Rate | 2e-5 |
Warmup Steps | 0 |
Precision | bf-16 |
Weight decay | 1e-1 |
Training Fraction | 0.8 |
Evaluation Fraction | 0.2 |
Input size (tokens) | 2048 |
Minimum context window (tokens) | 512 |
Training time (hours/epoch) | 40 |
### Devices
A total of 4 A100 graphics cards with a maximum capacity of 40 GB each were used to train the model. This meant a training time of approximately
40 hours per epoch. Using a mini-batch size of size 2 and a batch size of size 32 to calculate backpropagation.
### Distributed Training Strategy
A distributed training strategy called Fully Sharded Data Parallel ([FSDP](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html))
has been used. With this, the entire model has been loaded among the 4 A100s available for training with a mini-batch size of size 2 as
previously discussed.
### Languages
In addition to the data already used for the training of FLOR-6.3B, data completely in **Valencian** from the sources mentioned in
the previous section has been used.
## Evaluation
The model has been evaluated using the loss function and perplexity during the training stage and these metrics have also been
obtained during the evaluation stage. Due to the low amount of data, it was decided to evaluate
at the end of each epoch.
| Epoch | Mode | Loss | Perplexity |
|--------------|------------|----------|-------------|
| 1 | Training | 0.6944 | 2.111 |
| 1 | Evaluation | 0.247 | 1.28 |
| 2 | Training | 0.5335 | 1.705 |
| 2 | Evaluation | 0.4004 | 1.007 |
| 3 | Training | 0.4768 | 1.611 |
| 3 | Evaluation | 0.9141 | 1.007 |
| 4 | Training | 0.4586 | 1.582 |
| 4 | Evaluation | 0.125 | 1.007 |
### Results
In the following table, we can see the results obtained with different benchmarks in comparison with
the model used for continuous pre-training. The results have been obtained from the model pre-trained;
no instruction tuning or fine-tuning of any kind has been performed.
| Dataset | Lang. | Task | Metric | Aitana-6.3B | Flor-6.3B |
|------------------------------|--------|---------------------------|---------|-------------|-------------|
| Belebele Cat_latn | ca | Reading Comprehension | acc | **24.33** | 21.89 |
| CATCOLA | ca | Linguistic Acceptability | mcc | -0.04 | **0.04** |
| COPA | ca | Commonsense Reasoning | acc | 75.6 | **76.8** |
| XStoryCloze | ca | Commonsense Reasoning | f1 | **72.14** | 70.88 |
| OpenBookQA | ca | Question Answering | acc | **33.4** | **33.4** |
| Parafraseja | ca | Paraphrasing | acc | 61.7 | **62.38** |
| PAWS-X | ca | Paraphrasing | acc | 58.55 | **60.75** |
| PiQA | ca | Question Answering | acc | 69.8 | **70.51** |
| SiQA | ca | Question Answering | acc | 45.91 | **47.34** |
| ARC Easy | ca | Question Answering | acc | **63.93** | 59.68 |
| ARC Challenge | ca | Question Answering | acc | 33.45 | **33.53** |
| XQuAD | ca | Question Answering | f1 | 59.36 | **59.74** |
| COQCAT | ca | Question Answering | f1 | 63.42 | **66.2** |
| CatalanQA | ca | Question Answering | f1 | 71.42 | **73.24** |
| XNLI | ca | Natural Language Inference| acc | 48.8 | **50.24** |
| Teca | ca | Natural Language Inference| acc | 46.62 | **49.79** |
| WNLI | ca | Natural Language Inference| acc | **57.75** | 54.93 |
| caBreu Extractive | ca | Summarization | rouge1 | **50.94** | 36.21 |
| caBreu Abstractive | ca | Summarization | bleu | 5.27 | **7.11** |
| caBreu Extreme | ca | Summarization | bleu | 1.72 | **4.4** |
| Mgsm direct | ca | Math |exact match | **0.03** | 0 |
| VeritasQA Gen | ca | Truthfulness | bleu | 4.18 | **21.56**|
| VeritasQA MC1 | ca | Truthfulness | acc | **23.18** | 22.35 |
| VeritasQA MC2 | ca | Truthfulness | acc | 34.95 | **35.19**|
| Phrases ca-va | ca/va| Translation - Adaptation | bleu | 89.12 | **90.3** |
| Phrases va-ca | ca/va| Translation - Adaptation | bleu | **93.23** | **92.99**|
| Belebele Cat_latn | es | Reading Comprehension | acc | **25.56** | 22.33 |
| PAWS | es | Paraphrasing | acc | 56.5 | **57.5** |
| Escola | es | Paraphrasing | acc | **0.02** | 0 |
| XStoryCloze | es | Commonsense Reasoning | f1 | 68.46 | **69.76** |
| XQuAD | es | Question Answering | f1 | 58.85 | **63.59** |
| XLSum | es | Summarization | bleu | 0.88 | **1.79** |
| MGSM Direct | es | Math |exact match | **0.02** | 0 |
| VeritasQA Gen | es | Truthfulness | bleu | 13.57 | **22.11**|
| VeritasQA MC1 | es | Truthfulness | acc | **23.46** | 21.51 |
| VeritasQA MC2 | es | Truthfulness | acc | **37.52** | 34.74|
| XNLI | es | Natural Language Inference| acc | 46.67 | **47.87**|
| WNLI | es | Natural Language Inference| acc | 53.52 | **56.34** |
| Phrases es-va | es/va| Translation | bleu | 70.28 | **70.52**|
| Phrases va-es | va/es| Translation | bleu | 79.63 | **79.87**|
## Additional information
### Author
Language and Information System Group [GPLSI](https://gplsi.dlsi.ua.es/)
### Contact
For further information, please send an email to [GPLSI](https://gplsi.dlsi.ua.es/)
### Copyright
Copyright(c) 2024 by GPLSI(https://gplsi.dlsi.ua.es/).
### License
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by [ILENIA](https://proyectoilenia.es/)-[VIVES](https://vives.gplsi.es/) project <<2022/TL22/00215334>>
### Disclaimer
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it) or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (GPLSI) be liable for any results arising from the use made by third parties.
|
{}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION",
"PARAPHRASING"
] | 46,796 |
prithivMLmods/Delta-Pavonis-Qwen-14B
|
prithivMLmods
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-03-14T10:04:04Z |
2025-03-27T10:03:15+00:00
| 238 | 3 |
---
base_model:
- prithivMLmods/Calcium-Opus-14B-Elite2-R1
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
- trl
- sft
- Qwen
- Distill
---

# **Delta-Pavonis-Qwen-14B**
> Delta-Pavonis-Qwen-14B is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence.
## **Key Improvements**
1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving capabilities in answering questions accurately and generating coherent responses.
2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions.
3. **Versatile Adaptability**: More resilient to diverse prompts, enhancing its ability to handle a wide range of topics and conversation styles, including open-ended and structured inquiries.
4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses.
## **Quickstart with transformers**
Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Delta-Pavonis-Qwen-14B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "What are the key principles of general-purpose AI?"
messages = [
{"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## **Intended Use**
1. **General-Purpose Reasoning**:
Designed for broad applicability, assisting with logical reasoning, answering diverse questions, and solving general knowledge problems.
2. **Educational and Informational Assistance**:
Suitable for providing explanations, summaries, and research-based responses for students, educators, and general users.
3. **Conversational AI and Chatbots**:
Ideal for building intelligent conversational agents that require contextual understanding and dynamic response generation.
4. **Multilingual Applications**:
Supports global communication, translations, and multilingual content generation.
5. **Structured Data Processing**:
Capable of analyzing and generating structured outputs, such as tables and JSON, useful for data science and automation.
6. **Long-Form Content Generation**:
Can generate extended responses, including articles, reports, and guides, maintaining coherence over large text outputs.
## **Limitations**
1. **Hardware Requirements**:
Requires high-memory GPUs or TPUs due to its large parameter size and long-context support.
2. **Potential Bias in Responses**:
While designed to be neutral, outputs may still reflect biases present in training data.
3. **Inconsistent Outputs in Creative Tasks**:
May produce variable results in storytelling and highly subjective topics.
4. **Limited Real-World Awareness**:
Does not have access to real-time events beyond its training cutoff.
5. **Error Propagation in Extended Outputs**:
Minor errors in early responses may affect overall coherence in long-form outputs.
6. **Prompt Sensitivity**:
The effectiveness of responses may depend on how well the input prompt is structured.
| null |
Non_BioNLP
|

# **Delta-Pavonis-Qwen-14B**
> Delta-Pavonis-Qwen-14B is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence.
## **Key Improvements**
1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving capabilities in answering questions accurately and generating coherent responses.
2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions.
3. **Versatile Adaptability**: More resilient to diverse prompts, enhancing its ability to handle a wide range of topics and conversation styles, including open-ended and structured inquiries.
4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses.
## **Quickstart with transformers**
Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Delta-Pavonis-Qwen-14B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "What are the key principles of general-purpose AI?"
messages = [
{"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## **Intended Use**
1. **General-Purpose Reasoning**:
Designed for broad applicability, assisting with logical reasoning, answering diverse questions, and solving general knowledge problems.
2. **Educational and Informational Assistance**:
Suitable for providing explanations, summaries, and research-based responses for students, educators, and general users.
3. **Conversational AI and Chatbots**:
Ideal for building intelligent conversational agents that require contextual understanding and dynamic response generation.
4. **Multilingual Applications**:
Supports global communication, translations, and multilingual content generation.
5. **Structured Data Processing**:
Capable of analyzing and generating structured outputs, such as tables and JSON, useful for data science and automation.
6. **Long-Form Content Generation**:
Can generate extended responses, including articles, reports, and guides, maintaining coherence over large text outputs.
## **Limitations**
1. **Hardware Requirements**:
Requires high-memory GPUs or TPUs due to its large parameter size and long-context support.
2. **Potential Bias in Responses**:
While designed to be neutral, outputs may still reflect biases present in training data.
3. **Inconsistent Outputs in Creative Tasks**:
May produce variable results in storytelling and highly subjective topics.
4. **Limited Real-World Awareness**:
Does not have access to real-time events beyond its training cutoff.
5. **Error Propagation in Extended Outputs**:
Minor errors in early responses may affect overall coherence in long-form outputs.
6. **Prompt Sensitivity**:
The effectiveness of responses may depend on how well the input prompt is structured.
|
{"base_model": ["prithivMLmods/Calcium-Opus-14B-Elite2-R1"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["text-generation-inference", "trl", "sft", "Qwen", "Distill"]}
|
task
|
[
"TRANSLATION"
] | 46,797 |
aroot/mbart-finetuned-eng-kor-22045430821
|
aroot
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-30T17:44:19Z |
2023-06-30T18:00:59+00:00
| 12 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: mbart-finetuned-eng-kor-22045430821
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-22045430821
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1052
- Bleu: 5.7445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-22045430821
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1052
- Bleu: 5.7445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "mbart-finetuned-eng-kor-22045430821", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 46,798 |
TransferGraph/chiragasarpota_scotus-bert-finetuned-lora-ag_news
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:ag_news",
"base_model:chiragasarpota/scotus-bert",
"base_model:adapter:chiragasarpota/scotus-bert",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-02-27T22:53:36Z |
2024-02-28T00:42:37+00:00
| 0 | 0 |
---
base_model: chiragasarpota/scotus-bert
datasets:
- ag_news
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: chiragasarpota_scotus-bert-finetuned-lora-ag_news
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ag_news
type: ag_news
config: default
split: test
args: default
metrics:
- type: accuracy
value: 0.5328947368421053
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chiragasarpota_scotus-bert-finetuned-lora-ag_news
This model is a fine-tuned version of [chiragasarpota/scotus-bert](https://huggingface.co/chiragasarpota/scotus-bert) on the ag_news dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.25 | None | 0 |
| 0.4224 | 1.3217 | 0 |
| 0.4997 | 1.2231 | 1 |
| 0.5276 | 1.1802 | 2 |
| 0.5329 | 1.1677 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chiragasarpota_scotus-bert-finetuned-lora-ag_news
This model is a fine-tuned version of [chiragasarpota/scotus-bert](https://huggingface.co/chiragasarpota/scotus-bert) on the ag_news dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.25 | None | 0 |
| 0.4224 | 1.3217 | 0 |
| 0.4997 | 1.2231 | 1 |
| 0.5276 | 1.1802 | 2 |
| 0.5329 | 1.1677 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "chiragasarpota/scotus-bert", "datasets": ["ag_news"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "chiragasarpota_scotus-bert-finetuned-lora-ag_news", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ag_news", "type": "ag_news", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.5328947368421053, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,799 |
Cran-May/tempemotacilla-eridanus-0302
|
Cran-May
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"trl",
"r999",
"conversational",
"en",
"zh",
"base_model:prithivMLmods/Pegasus-Opus-14B-Exp",
"base_model:finetune:prithivMLmods/Pegasus-Opus-14B-Exp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-03-02T04:21:04Z |
2025-03-02T04:21:05+00:00
| 25 | 0 |
---
base_model:
- prithivMLmods/Pegasus-Opus-14B-Exp
language:
- en
- zh
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
- trl
- r999
model-index:
- name: Eridanus-Opus-14B-r999
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 63.86
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 51.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 38.6
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.24
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.48
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 48.46
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
---

# **Eridanus-Opus-14B-r999**
Eridanus-Opus-14B-r999 is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence.
## **Key Improvements**
1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving capabilities in answering questions accurately and generating coherent responses.
2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions.
3. **Versatile Adaptability**: More resilient to diverse prompts, enhancing its ability to handle a wide range of topics and conversation styles, including open-ended and structured inquiries.
4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses.
5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
## **Quickstart with transformers**
Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Eridanus-Opus-14B-r999"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "What are the key principles of general-purpose AI?"
messages = [
{"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## **Intended Use**
1. **General-Purpose Reasoning**:
Designed for broad applicability, assisting with logical reasoning, answering diverse questions, and solving general knowledge problems.
2. **Educational and Informational Assistance**:
Suitable for providing explanations, summaries, and research-based responses for students, educators, and general users.
3. **Conversational AI and Chatbots**:
Ideal for building intelligent conversational agents that require contextual understanding and dynamic response generation.
4. **Multilingual Applications**:
Supports global communication, translations, and multilingual content generation.
5. **Structured Data Processing**:
Capable of analyzing and generating structured outputs, such as tables and JSON, useful for data science and automation.
6. **Long-Form Content Generation**:
Can generate extended responses, including articles, reports, and guides, maintaining coherence over large text outputs.
## **Limitations**
1. **Hardware Requirements**:
Requires high-memory GPUs or TPUs due to its large parameter size and long-context support.
2. **Potential Bias in Responses**:
While designed to be neutral, outputs may still reflect biases present in training data.
3. **Inconsistent Outputs in Creative Tasks**:
May produce variable results in storytelling and highly subjective topics.
4. **Limited Real-World Awareness**:
Does not have access to real-time events beyond its training cutoff.
5. **Error Propagation in Extended Outputs**:
Minor errors in early responses may affect overall coherence in long-form outputs.
6. **Prompt Sensitivity**:
The effectiveness of responses may depend on how well the input prompt is structured.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Eridanus-Opus-14B-r999-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FEridanus-Opus-14B-r999&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 40.11|
|IFEval (0-Shot) | 63.86|
|BBH (3-Shot) | 51.04|
|MATH Lvl 5 (4-Shot)| 38.60|
|GPQA (0-shot) | 19.24|
|MuSR (0-shot) | 19.48|
|MMLU-PRO (5-shot) | 48.46|
| null |
Non_BioNLP
|

# **Eridanus-Opus-14B-r999**
Eridanus-Opus-14B-r999 is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence.
## **Key Improvements**
1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving capabilities in answering questions accurately and generating coherent responses.
2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions.
3. **Versatile Adaptability**: More resilient to diverse prompts, enhancing its ability to handle a wide range of topics and conversation styles, including open-ended and structured inquiries.
4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses.
5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
## **Quickstart with transformers**
Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Eridanus-Opus-14B-r999"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "What are the key principles of general-purpose AI?"
messages = [
{"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## **Intended Use**
1. **General-Purpose Reasoning**:
Designed for broad applicability, assisting with logical reasoning, answering diverse questions, and solving general knowledge problems.
2. **Educational and Informational Assistance**:
Suitable for providing explanations, summaries, and research-based responses for students, educators, and general users.
3. **Conversational AI and Chatbots**:
Ideal for building intelligent conversational agents that require contextual understanding and dynamic response generation.
4. **Multilingual Applications**:
Supports global communication, translations, and multilingual content generation.
5. **Structured Data Processing**:
Capable of analyzing and generating structured outputs, such as tables and JSON, useful for data science and automation.
6. **Long-Form Content Generation**:
Can generate extended responses, including articles, reports, and guides, maintaining coherence over large text outputs.
## **Limitations**
1. **Hardware Requirements**:
Requires high-memory GPUs or TPUs due to its large parameter size and long-context support.
2. **Potential Bias in Responses**:
While designed to be neutral, outputs may still reflect biases present in training data.
3. **Inconsistent Outputs in Creative Tasks**:
May produce variable results in storytelling and highly subjective topics.
4. **Limited Real-World Awareness**:
Does not have access to real-time events beyond its training cutoff.
5. **Error Propagation in Extended Outputs**:
Minor errors in early responses may affect overall coherence in long-form outputs.
6. **Prompt Sensitivity**:
The effectiveness of responses may depend on how well the input prompt is structured.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Eridanus-Opus-14B-r999-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FEridanus-Opus-14B-r999&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 40.11|
|IFEval (0-Shot) | 63.86|
|BBH (3-Shot) | 51.04|
|MATH Lvl 5 (4-Shot)| 38.60|
|GPQA (0-shot) | 19.24|
|MuSR (0-shot) | 19.48|
|MMLU-PRO (5-shot) | 48.46|
|
{"base_model": ["prithivMLmods/Pegasus-Opus-14B-Exp"], "language": ["en", "zh"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["text-generation-inference", "trl", "r999"], "model-index": [{"name": "Eridanus-Opus-14B-r999", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "IFEval (0-Shot)", "type": "wis-k/instruction-following-eval", "split": "train", "args": {"num_few_shot": 0}}, "metrics": [{"type": "inst_level_strict_acc and prompt_level_strict_acc", "value": 63.86, "name": "averaged accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "BBH (3-Shot)", "type": "SaylorTwift/bbh", "split": "test", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc_norm", "value": 51.04, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MATH Lvl 5 (4-Shot)", "type": "lighteval/MATH-Hard", "split": "test", "args": {"num_few_shot": 4}}, "metrics": [{"type": "exact_match", "value": 38.6, "name": "exact match"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GPQA (0-shot)", "type": "Idavidrein/gpqa", "split": "train", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 19.24, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MuSR (0-shot)", "type": "TAUR-Lab/MuSR", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 19.48, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU-PRO (5-shot)", "type": "TIGER-Lab/MMLU-Pro", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 48.46, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999", "name": "Open LLM Leaderboard"}}]}]}
|
task
|
[
"TRANSLATION"
] | 46,800 |
LaTarn/re-clean-setfit-model
|
LaTarn
|
text-classification
|
[
"sentence-transformers",
"safetensors",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-11-03T00:03:46Z |
2023-11-03T00:04:11+00:00
| 46 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# LaTarn/re-clean-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("LaTarn/re-clean-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# LaTarn/re-clean-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("LaTarn/re-clean-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,801 |
nikitakapitan/bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos
|
nikitakapitan
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-02T09:25:26Z |
2023-10-02T10:19:39+00:00
| 15 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.9158064516129032
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7724
- Accuracy: 0.9158
## Model Training Details
| Parameter | Value |
|----------------------|--------------------------------------------------|
| **Task** | text-classification |
| **Teacher Model** | bert-base-uncased-finetuned-clinc_oos |
| **Student Model** | distilbert-base-uncased |
| **Dataset Name** | clinc_oos |
| **Dataset Config** | plus |
| **Evaluation Dataset**| validation |
| **Batch Size** | 48 |
| **Number of Epochs** | 5 |
| **Learning Rate** | 0.00002 |
| **Alpha*** | 1 |
*alpha: (Total_loss = alpha * Loss_CE + (1-alpha) * Loss_KD)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2762 | 0.7284 |
| 3.7824 | 2.0 | 636 | 1.8624 | 0.8358 |
| 3.7824 | 3.0 | 954 | 1.1512 | 0.8984 |
| 1.6858 | 4.0 | 1272 | 0.8540 | 0.9132 |
| 0.8983 | 5.0 | 1590 | 0.7724 | 0.9158 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7724
- Accuracy: 0.9158
## Model Training Details
| Parameter | Value |
|----------------------|--------------------------------------------------|
| **Task** | text-classification |
| **Teacher Model** | bert-base-uncased-finetuned-clinc_oos |
| **Student Model** | distilbert-base-uncased |
| **Dataset Name** | clinc_oos |
| **Dataset Config** | plus |
| **Evaluation Dataset**| validation |
| **Batch Size** | 48 |
| **Number of Epochs** | 5 |
| **Learning Rate** | 0.00002 |
| **Alpha*** | 1 |
*alpha: (Total_loss = alpha * Loss_CE + (1-alpha) * Loss_KD)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2762 | 0.7284 |
| 3.7824 | 2.0 | 636 | 1.8624 | 0.8358 |
| 3.7824 | 3.0 | 954 | 1.1512 | 0.8984 |
| 1.6858 | 4.0 | 1272 | 0.8540 | 0.9132 |
| 0.8983 | 5.0 | 1590 | 0.7724 | 0.9158 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
{"base_model": "distilbert-base-uncased", "datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9158064516129032, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,802 |
MultiBertGunjanPatrick/multiberts-seed-1-160k
|
MultiBertGunjanPatrick
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-10-04T04:59:30+00:00
| 102 | 0 |
---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
- multiberts
- multiberts-seed-1
---
# MultiBERTs Seed 1 Checkpoint 160k (uncased)
Seed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-160k')
model = BertModel.from_pretrained("multiberts-seed-1-160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| null |
Non_BioNLP
|
# MultiBERTs Seed 1 Checkpoint 160k (uncased)
Seed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-160k')
model = BertModel.from_pretrained("multiberts-seed-1-160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 46,804 |
gaudi/opus-mt-fr-swc-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-25T15:14:55Z |
2024-10-19T04:48:59+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fr-swc)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fr-swc).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fr-swc --output_dir ./ctranslate2/opus-mt-fr-swc-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-fr-swc-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-fr-swc-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-fr-swc-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-fr-swc) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fr-swc)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fr-swc).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fr-swc --output_dir ./ctranslate2/opus-mt-fr-swc-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-fr-swc-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-fr-swc-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-fr-swc-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-fr-swc) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 46,805 |
rezashkv/diffusion_pruning
|
rezashkv
|
text-to-image
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"en",
"arxiv:2406.12042",
"license:mit",
"region:us"
] | 2024-06-13T22:29:44Z |
2024-06-19T03:10:07+00:00
| 0 | 0 |
---
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion
- diffusers
---
# APTP: Adaptive Prompt-Tailored Pruning of T2I Diffusion Models
[](https://arxiv.org/abs/2406.12042)
[](https://github.com/rezashkv/diffusion_pruning)
The implementation of the paper ["Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models"](https://arxiv.org/abs/2406.12042)
## Abstract
Text-to-image (T2I) diffusion models have demonstrated impressive image generation capabilities. Still, their computational intensity prohibits
resource-constrained organizations from deploying T2I models after fine-tuning them on their internal target data. While pruning
techniques offer a potential solution to reduce the computational burden of T2I models, static pruning methods use the same pruned
model for all input prompts, overlooking the varying capacity requirements of different prompts. Dynamic pruning addresses this issue by utilizing
a separate sub-network for each prompt, but it prevents batch parallelism on GPUs. To overcome these limitations, we introduce
Adaptive Prompt-Tailored Pruning (APTP), a novel prompt-based pruning method designed for T2I diffusion models. Central to our approach is a
prompt router model, which learns to determine the required capacity for an input text prompt and routes it to an architecture code, given a
total desired compute budget for prompts. Each architecture code represents a specialized model tailored to the prompts assigned to it, and the
number of codes is a hyperparameter. We train the prompt router and architecture codes using contrastive learning, ensuring that similar prompts
are mapped to nearby codes. Further, we employ optimal transport to prevent the codes from collapsing into a single one. We demonstrate APTP's
effectiveness by pruning Stable Diffusion (SD) V2.1 using CC3M and COCO as target datasets. APTP outperforms the
single-model pruning baselines in terms of FID, CLIP, and CMMD scores. Our analysis of the clusters learned by APTP reveals they
are semantically meaningful. We also show that APTP can automatically discover previously empirically found challenging prompts for SD, e.g., prompts for generating text images, assigning them to higher capacity codes.
<p align="center">
<img src="assets/fig_1.gif" alt="APTP Overview" width="600" />
</p>
<p align="left">
<em>APTP: We prune a text-to-image diffusion model like Stable Diffusion (left) into a mixture of efficient experts (right) in a prompt-based manner. Our prompt router routes distinct types of prompts to different experts, allowing experts' architectures to be separately specialized by removing layers or channels.</em>
</p>
<p align="center">
<img src="assets/fig_2.gif" alt="APTP Pruning Scheme" width="600" />
</p>
<p align="left">
<em>APTP pruning scheme. We train the prompt router and the set of architecture codes to prune a T2I diffusion model into a mixture of experts. The prompt router consists of three modules. We use a Sentence Transformer as the prompt encoder to encode the input prompt into a representation z. Then, the architecture predictor transforms z into the architecture embedding e that has the same dimensionality as architecture codes. Finally, the router routes the embedding e into an architecture code a(i). We use optimal transport to evenly distribute the prompts in a training batch among the architecture codes. The architecture code a(i) = (u(i), v(i)) determines pruning the model’s width and depth. We train the prompt router’s parameters and architecture codes in an end-to-end manner using the denoising objective of the pruned model L<sub>DDPM</sub>, distillation loss between the pruned and original models L<sub>distill</sub>, average resource usage for the samples in the batch R, and contrastive objective L<sub>cont</sub>, encouraging embeddings e preserving semantic similarity of the representations z.</em>
</p>
### Model Description
- **Developed by:** UMD Efficiency Group
- **Model type:** Text-to-Image Diffusion Model
- **Model Description:** APTP is a pruning scheme for text-to-image diffusion models like Stable Diffusion, resulting in a mixture of efficient experts specialized for different prompt types.
### License
APTP is released under the MIT License. Please see the [LICENSE](LICENSE) file for details.
## Training Dataset
We used Conceptual Captions and MS-COCO 2014 datasets for training the models. Details for downloading and preparing these datasets are provided in the [Github Repository](https://github.com/rezashkv/diffusion_pruning).
## File Structure
```
APTP
├── APTP-Base-CC3M
│ ├── arch0
│ ├── ...
│ └── arch15
├── APTP-Small-CC3M
│ ├── arch0
│ ├── ...
│ └── arch7
├── APTP-Base-COCO
│ ├── arch0
│ ├── ...
│ └── arch7
└── APTP-Small-COCO
├── arch0
├── ...
└── arch7
```
## Simple Inference Example
Make sure you follow the [provided instructions](https://github.com/rezashkv/diffusion_pruning?tab=readme-ov-file#installation) to install pdm from source.
```python
from diffusers import StableDiffusionPipeline, PNDMScheduler
from pdm.models import HyperStructure, StructureVectorQuantizer, UNet2DConditionModelPruned
from pdm.utils.data_utils import get_mpnet_embeddings
from transformers import AutoTokenizer, AutoModel
import torch
prompt_encoder_model_name_or_path = "sentence-transformers/all-mpnet-base-v2"
aptp_model_name_or_path = f"rezashkv/APTP"
aptp_variant = "APTP-Base-CC3M"
sd_model_name_or_path = "stabilityai/stable-diffusion-2-1"
prompt_encoder = AutoModel.from_pretrained(prompt_encoder_model_name_or_path)
prompt_encoder_tokenizer = AutoTokenizer.from_pretrained(prompt_encoder_model_name_or_path)
hyper_net = HyperStructure.from_pretrained(aptp_model_name_or_path, subfolder=f"{aptp_variant}/hypernet")
quantizer = StructureVectorQuantizer.from_pretrained(aptp_model_name_or_path, subfolder=f"{aptp_variant}/quantizer")
prompts = ["a woman on a white background looks down and away from the camera the a forlorn look on her face"]
prompt_embedding = get_mpnet_embeddings(prompts, prompt_encoder, prompt_encoder_tokenizer)
arch_embedding = hyper_net(prompt_embedding)
expert_id = quantizer.get_cosine_sim_min_encoding_indices(arch_embedding)[0].item()
unet = UNet2DConditionModelPruned.from_pretrained(aptp_model_name_or_path,
subfolder=f"{aptp_variant}/arch{expert_id}/checkpoint-30000/unet")
noise_scheduler = PNDMScheduler.from_pretrained(sd_model_name_or_path, subfolder="scheduler")
pipeline = StableDiffusionPipeline.from_pretrained(sd_model_name_or_path, unet=unet, scheduler=noise_scheduler)
pipeline.to('cuda')
generator = torch.Generator(device='cuda').manual_seed(43)
image = pipeline(
prompt=prompts[0],
guidance_scale=7.5,
generator=generator,
output_type='pil',
).images[0]
image.save("image.png")
```
## Uses
This model is designed for academic and research purposes, specifically for exploring the efficiency of text-to-image diffusion models through prompt-based pruning. Potential applications include:
1. **Research:** Researchers can use the model to study prompt-based pruning techniques and their impact on the performance and efficiency of text-to-image generation models.
2. **Education:** Educators and students can use this model as a learning tool for understanding advanced concepts in neural network pruning, diffusion models, and prompt engineering.
3. **Benchmarking:** The model can be used for benchmarking against other text-to-image generation models to assess the trade-offs between computational efficiency and output quality.
## Safety
When using these models, it is important to consider the following safety and ethical guidelines:
1. **Content Generation:** The model can generate a wide range of images based on text prompts. Users should ensure that the generated content adheres to ethical guidelines and does not produce harmful, offensive, or inappropriate images.
2. **Bias and Fairness:** Like other AI models, APTP may exhibit biases present in the training data. Users should be aware of these potential biases and take steps to mitigate their impact, particularly when the model is used in sensitive or critical applications.
3. **Data Privacy:** Ensure that any data used with the model complies with data privacy regulations. Avoid using personally identifiable information (PII) or sensitive data without proper consent.
4. **Responsible Use:** Users are encouraged to use the model responsibly, considering the potential social and ethical implications of their work. This includes avoiding the generation of misleading or false information and respecting the rights and dignity of individuals depicted in generated images.
By adhering to these guidelines, users can help ensure the responsible and ethical use of the APTP model.
## Contact
In case of any questions or issues, please contact the authors of the paper:
* [Reza Shirkavand](mailto:[email protected])
* [Alireza Ganjdanesh](mailto:[email protected])
| null |
Non_BioNLP
|
# APTP: Adaptive Prompt-Tailored Pruning of T2I Diffusion Models
[](https://arxiv.org/abs/2406.12042)
[](https://github.com/rezashkv/diffusion_pruning)
The implementation of the paper ["Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models"](https://arxiv.org/abs/2406.12042)
## Abstract
Text-to-image (T2I) diffusion models have demonstrated impressive image generation capabilities. Still, their computational intensity prohibits
resource-constrained organizations from deploying T2I models after fine-tuning them on their internal target data. While pruning
techniques offer a potential solution to reduce the computational burden of T2I models, static pruning methods use the same pruned
model for all input prompts, overlooking the varying capacity requirements of different prompts. Dynamic pruning addresses this issue by utilizing
a separate sub-network for each prompt, but it prevents batch parallelism on GPUs. To overcome these limitations, we introduce
Adaptive Prompt-Tailored Pruning (APTP), a novel prompt-based pruning method designed for T2I diffusion models. Central to our approach is a
prompt router model, which learns to determine the required capacity for an input text prompt and routes it to an architecture code, given a
total desired compute budget for prompts. Each architecture code represents a specialized model tailored to the prompts assigned to it, and the
number of codes is a hyperparameter. We train the prompt router and architecture codes using contrastive learning, ensuring that similar prompts
are mapped to nearby codes. Further, we employ optimal transport to prevent the codes from collapsing into a single one. We demonstrate APTP's
effectiveness by pruning Stable Diffusion (SD) V2.1 using CC3M and COCO as target datasets. APTP outperforms the
single-model pruning baselines in terms of FID, CLIP, and CMMD scores. Our analysis of the clusters learned by APTP reveals they
are semantically meaningful. We also show that APTP can automatically discover previously empirically found challenging prompts for SD, e.g., prompts for generating text images, assigning them to higher capacity codes.
<p align="center">
<img src="assets/fig_1.gif" alt="APTP Overview" width="600" />
</p>
<p align="left">
<em>APTP: We prune a text-to-image diffusion model like Stable Diffusion (left) into a mixture of efficient experts (right) in a prompt-based manner. Our prompt router routes distinct types of prompts to different experts, allowing experts' architectures to be separately specialized by removing layers or channels.</em>
</p>
<p align="center">
<img src="assets/fig_2.gif" alt="APTP Pruning Scheme" width="600" />
</p>
<p align="left">
<em>APTP pruning scheme. We train the prompt router and the set of architecture codes to prune a T2I diffusion model into a mixture of experts. The prompt router consists of three modules. We use a Sentence Transformer as the prompt encoder to encode the input prompt into a representation z. Then, the architecture predictor transforms z into the architecture embedding e that has the same dimensionality as architecture codes. Finally, the router routes the embedding e into an architecture code a(i). We use optimal transport to evenly distribute the prompts in a training batch among the architecture codes. The architecture code a(i) = (u(i), v(i)) determines pruning the model’s width and depth. We train the prompt router’s parameters and architecture codes in an end-to-end manner using the denoising objective of the pruned model L<sub>DDPM</sub>, distillation loss between the pruned and original models L<sub>distill</sub>, average resource usage for the samples in the batch R, and contrastive objective L<sub>cont</sub>, encouraging embeddings e preserving semantic similarity of the representations z.</em>
</p>
### Model Description
- **Developed by:** UMD Efficiency Group
- **Model type:** Text-to-Image Diffusion Model
- **Model Description:** APTP is a pruning scheme for text-to-image diffusion models like Stable Diffusion, resulting in a mixture of efficient experts specialized for different prompt types.
### License
APTP is released under the MIT License. Please see the [LICENSE](LICENSE) file for details.
## Training Dataset
We used Conceptual Captions and MS-COCO 2014 datasets for training the models. Details for downloading and preparing these datasets are provided in the [Github Repository](https://github.com/rezashkv/diffusion_pruning).
## File Structure
```
APTP
├── APTP-Base-CC3M
│ ├── arch0
│ ├── ...
│ └── arch15
├── APTP-Small-CC3M
│ ├── arch0
│ ├── ...
│ └── arch7
├── APTP-Base-COCO
│ ├── arch0
│ ├── ...
│ └── arch7
└── APTP-Small-COCO
├── arch0
├── ...
└── arch7
```
## Simple Inference Example
Make sure you follow the [provided instructions](https://github.com/rezashkv/diffusion_pruning?tab=readme-ov-file#installation) to install pdm from source.
```python
from diffusers import StableDiffusionPipeline, PNDMScheduler
from pdm.models import HyperStructure, StructureVectorQuantizer, UNet2DConditionModelPruned
from pdm.utils.data_utils import get_mpnet_embeddings
from transformers import AutoTokenizer, AutoModel
import torch
prompt_encoder_model_name_or_path = "sentence-transformers/all-mpnet-base-v2"
aptp_model_name_or_path = f"rezashkv/APTP"
aptp_variant = "APTP-Base-CC3M"
sd_model_name_or_path = "stabilityai/stable-diffusion-2-1"
prompt_encoder = AutoModel.from_pretrained(prompt_encoder_model_name_or_path)
prompt_encoder_tokenizer = AutoTokenizer.from_pretrained(prompt_encoder_model_name_or_path)
hyper_net = HyperStructure.from_pretrained(aptp_model_name_or_path, subfolder=f"{aptp_variant}/hypernet")
quantizer = StructureVectorQuantizer.from_pretrained(aptp_model_name_or_path, subfolder=f"{aptp_variant}/quantizer")
prompts = ["a woman on a white background looks down and away from the camera the a forlorn look on her face"]
prompt_embedding = get_mpnet_embeddings(prompts, prompt_encoder, prompt_encoder_tokenizer)
arch_embedding = hyper_net(prompt_embedding)
expert_id = quantizer.get_cosine_sim_min_encoding_indices(arch_embedding)[0].item()
unet = UNet2DConditionModelPruned.from_pretrained(aptp_model_name_or_path,
subfolder=f"{aptp_variant}/arch{expert_id}/checkpoint-30000/unet")
noise_scheduler = PNDMScheduler.from_pretrained(sd_model_name_or_path, subfolder="scheduler")
pipeline = StableDiffusionPipeline.from_pretrained(sd_model_name_or_path, unet=unet, scheduler=noise_scheduler)
pipeline.to('cuda')
generator = torch.Generator(device='cuda').manual_seed(43)
image = pipeline(
prompt=prompts[0],
guidance_scale=7.5,
generator=generator,
output_type='pil',
).images[0]
image.save("image.png")
```
## Uses
This model is designed for academic and research purposes, specifically for exploring the efficiency of text-to-image diffusion models through prompt-based pruning. Potential applications include:
1. **Research:** Researchers can use the model to study prompt-based pruning techniques and their impact on the performance and efficiency of text-to-image generation models.
2. **Education:** Educators and students can use this model as a learning tool for understanding advanced concepts in neural network pruning, diffusion models, and prompt engineering.
3. **Benchmarking:** The model can be used for benchmarking against other text-to-image generation models to assess the trade-offs between computational efficiency and output quality.
## Safety
When using these models, it is important to consider the following safety and ethical guidelines:
1. **Content Generation:** The model can generate a wide range of images based on text prompts. Users should ensure that the generated content adheres to ethical guidelines and does not produce harmful, offensive, or inappropriate images.
2. **Bias and Fairness:** Like other AI models, APTP may exhibit biases present in the training data. Users should be aware of these potential biases and take steps to mitigate their impact, particularly when the model is used in sensitive or critical applications.
3. **Data Privacy:** Ensure that any data used with the model complies with data privacy regulations. Avoid using personally identifiable information (PII) or sensitive data without proper consent.
4. **Responsible Use:** Users are encouraged to use the model responsibly, considering the potential social and ethical implications of their work. This includes avoiding the generation of misleading or false information and respecting the rights and dignity of individuals depicted in generated images.
By adhering to these guidelines, users can help ensure the responsible and ethical use of the APTP model.
## Contact
In case of any questions or issues, please contact the authors of the paper:
* [Reza Shirkavand](mailto:[email protected])
* [Alireza Ganjdanesh](mailto:[email protected])
|
{"language": ["en"], "license": "mit", "tags": ["text-to-image", "stable-diffusion", "diffusers"]}
|
task
|
[
"SEMANTIC_SIMILARITY"
] | 46,806 |
TransferGraph/Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/512seq_len_6ep_bert_ft_cola-91",
"base_model:adapter:Jeevesh8/512seq_len_6ep_bert_ft_cola-91",
"model-index",
"region:us"
] | 2024-02-29T13:41:48Z |
2024-02-29T13:41:51+00:00
| 0 | 0 |
---
base_model: Jeevesh8/512seq_len_6ep_bert_ft_cola-91
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: hate
split: validation
args: hate
metrics:
- type: accuracy
value: 0.73
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [Jeevesh8/512seq_len_6ep_bert_ft_cola-91](https://huggingface.co/Jeevesh8/512seq_len_6ep_bert_ft_cola-91) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.73
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.45 | None | 0 |
| 0.68 | 0.6743 | 0 |
| 0.723 | 0.5277 | 1 |
| 0.718 | 0.4791 | 2 |
| 0.73 | 0.4581 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [Jeevesh8/512seq_len_6ep_bert_ft_cola-91](https://huggingface.co/Jeevesh8/512seq_len_6ep_bert_ft_cola-91) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.73
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.45 | None | 0 |
| 0.68 | 0.6743 | 0 |
| 0.723 | 0.5277 | 1 |
| 0.718 | 0.4791 | 2 |
| 0.73 | 0.4581 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "Jeevesh8/512seq_len_6ep_bert_ft_cola-91", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "hate", "split": "validation", "args": "hate"}, "metrics": [{"type": "accuracy", "value": 0.73, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,807 |
Language-Media-Lab/mt5-small-ain-jpn-mt
|
Language-Media-Lab
|
translation
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"jpn",
"ain",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2022-02-04T13:20:55+00:00
| 119 | 0 |
---
language:
- jpn
- ain
tags:
- translation
---
mt5-small-ain-jpn-mt is a machine translation model pretrained with [Google's mT5-small](https://huggingface.co/google/mt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates Ainu language to Japanese.
| null |
Non_BioNLP
|
mt5-small-ain-jpn-mt is a machine translation model pretrained with [Google's mT5-small](https://huggingface.co/google/mt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates Ainu language to Japanese.
|
{"language": ["jpn", "ain"], "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 46,808 |
TransferGraph/dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:dhimskyy/wiki-bert",
"base_model:adapter:dhimskyy/wiki-bert",
"model-index",
"region:us"
] | 2024-02-29T12:50:31Z |
2024-02-29T12:50:33+00:00
| 0 | 0 |
---
base_model: dhimskyy/wiki-bert
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.43315508021390375
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [dhimskyy/wiki-bert](https://huggingface.co/dhimskyy/wiki-bert) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2353 | None | 0 |
| 0.4251 | 1.2739 | 0 |
| 0.4305 | 1.2626 | 1 |
| 0.4278 | 1.2564 | 2 |
| 0.4332 | 1.2526 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [dhimskyy/wiki-bert](https://huggingface.co/dhimskyy/wiki-bert) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2353 | None | 0 |
| 0.4251 | 1.2739 | 0 |
| 0.4305 | 1.2626 | 1 |
| 0.4278 | 1.2564 | 2 |
| 0.4332 | 1.2526 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "dhimskyy/wiki-bert", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "validation", "args": "emotion"}, "metrics": [{"type": "accuracy", "value": 0.43315508021390375, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,810 |
RichardErkhov/Qwen_-_Qwen2-0.5B-4bits
|
RichardErkhov
| null |
[
"safetensors",
"qwen2",
"4-bit",
"bitsandbytes",
"region:us"
] | 2024-10-30T13:36:21Z |
2024-10-30T13:36:50+00:00
| 4 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-0.5B - bnb 4bits
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2-0.5B/
Original model description:
---
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
license: apache-2.0
new_version: Qwen/Qwen2.5-0.5B
---
# Qwen2-0.5B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 0.5B Qwen2 base language model.
Compared with the state-of-the-art opensource language models, including the previous released Qwen1.5, Qwen2 has generally surpassed most opensource models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting for language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Requirements
The code of Qwen2 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
## Performance
The evaluation of base models mainly focuses on the model performance of natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include:
**English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot)
**Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript)
**Math Tasks**: GSM8K (4-shot), MATH (4-shot)
**Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot)
**Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot)
#### Qwen2-0.5B & Qwen2-1.5B performances
| Datasets | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B |
| :--------| :---------: | :------------: | :------------: |:------------: | :------------: | :------------: |
|#Non-Emb Params | 2.5B | 2.0B | 2.4B | 1.3B | 0.35B | 1.3B |
|MMLU | 52.7 | 42.3 | 53.5 | 46.8 | 45.4 | **56.5** |
|MMLU-Pro | - | 15.9 | - | - | 14.7 | 21.8 |
|Theorem QA | - | - | - |- | 8.9 | **15.0** |
|HumanEval | 47.6 | 22.0 |**50.0**| 20.1 | 22.0 | 31.1 |
|MBPP | **55.0** | 29.2 | 47.3 | 18.0 | 22.0 | 37.4 |
|GSM8K | 57.2 | 17.7 | 53.8 | 38.4 | 36.5 | **58.5** |
|MATH | 3.5 | 11.8 | 10.2 | 10.1 | 10.7 | **21.7** |
|BBH | **43.4** | 35.2 | 36.9 | 24.2 | 28.4 | 37.2 |
|HellaSwag | **73.1** | 71.4 | 68.3 | 61.4 | 49.3 | 66.6 |
|Winogrande | **74.4** | 66.8 | -| 60.3 | 56.8 | 66.2 |
|ARC-C | **61.1** | 48.5 | -| 37.9 | 31.5 | 43.9 |
|TruthfulQA | 44.5 | 33.1 | -| 39.4 | 39.7 | **45.9** |
|C-Eval | 23.4 | 28.0 | 51.1| 59.7 | 58.2 | **70.6** |
|CMMLU | 24.2 | - | 51.1 | 57.8 | 55.1 | **70.3** |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
```
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-0.5B - bnb 4bits
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2-0.5B/
Original model description:
---
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
license: apache-2.0
new_version: Qwen/Qwen2.5-0.5B
---
# Qwen2-0.5B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 0.5B Qwen2 base language model.
Compared with the state-of-the-art opensource language models, including the previous released Qwen1.5, Qwen2 has generally surpassed most opensource models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting for language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Requirements
The code of Qwen2 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
## Performance
The evaluation of base models mainly focuses on the model performance of natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include:
**English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot)
**Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript)
**Math Tasks**: GSM8K (4-shot), MATH (4-shot)
**Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot)
**Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot)
#### Qwen2-0.5B & Qwen2-1.5B performances
| Datasets | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B |
| :--------| :---------: | :------------: | :------------: |:------------: | :------------: | :------------: |
|#Non-Emb Params | 2.5B | 2.0B | 2.4B | 1.3B | 0.35B | 1.3B |
|MMLU | 52.7 | 42.3 | 53.5 | 46.8 | 45.4 | **56.5** |
|MMLU-Pro | - | 15.9 | - | - | 14.7 | 21.8 |
|Theorem QA | - | - | - |- | 8.9 | **15.0** |
|HumanEval | 47.6 | 22.0 |**50.0**| 20.1 | 22.0 | 31.1 |
|MBPP | **55.0** | 29.2 | 47.3 | 18.0 | 22.0 | 37.4 |
|GSM8K | 57.2 | 17.7 | 53.8 | 38.4 | 36.5 | **58.5** |
|MATH | 3.5 | 11.8 | 10.2 | 10.1 | 10.7 | **21.7** |
|BBH | **43.4** | 35.2 | 36.9 | 24.2 | 28.4 | 37.2 |
|HellaSwag | **73.1** | 71.4 | 68.3 | 61.4 | 49.3 | 66.6 |
|Winogrande | **74.4** | 66.8 | -| 60.3 | 56.8 | 66.2 |
|ARC-C | **61.1** | 48.5 | -| 37.9 | 31.5 | 43.9 |
|TruthfulQA | 44.5 | 33.1 | -| 39.4 | 39.7 | **45.9** |
|C-Eval | 23.4 | 28.0 | 51.1| 59.7 | 58.2 | **70.6** |
|CMMLU | 24.2 | - | 51.1 | 57.8 | 55.1 | **70.3** |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
```
|
{}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] | 46,811 |
TransferGraph/nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:nurkayevaa/autonlp-bert-covid-407910458",
"base_model:adapter:nurkayevaa/autonlp-bert-covid-407910458",
"model-index",
"region:us"
] | 2024-02-29T13:08:52Z |
2024-02-29T13:08:54+00:00
| 0 | 0 |
---
base_model: nurkayevaa/autonlp-bert-covid-407910458
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.707
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [nurkayevaa/autonlp-bert-covid-407910458](https://huggingface.co/nurkayevaa/autonlp-bert-covid-407910458) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3165 | None | 0 |
| 0.7005 | 0.7344 | 0 |
| 0.6975 | 0.6591 | 1 |
| 0.701 | 0.6363 | 2 |
| 0.707 | 0.6200 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [nurkayevaa/autonlp-bert-covid-407910458](https://huggingface.co/nurkayevaa/autonlp-bert-covid-407910458) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3165 | None | 0 |
| 0.7005 | 0.7344 | 0 |
| 0.6975 | 0.6591 | 1 |
| 0.701 | 0.6363 | 2 |
| 0.707 | 0.6200 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "nurkayevaa/autonlp-bert-covid-407910458", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "sentiment", "split": "validation", "args": "sentiment"}, "metrics": [{"type": "accuracy", "value": 0.707, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,812 |
north/t5_large_NCC
|
north
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-05-21T11:46:30Z |
2022-10-13T13:54:32+00:00
| 26 | 1 |
---
datasets:
- nbailab/NCC
- mc4
- wikipedia
language:
- false
- nn
- sv
- dk
- is
- en
license: apache-2.0
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>.
Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>,
må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2>
seg ned og lese den.
---
The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|✔|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/large/norwegian_NCC_plus_English_t5x_large/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly recommend external researchers to make their own evaluation. The main advantage with the T5-models are their flexibility. Traditionally, encoder-only models (like BERT) excels in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be available shorter.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained an additonal 500.000 steps on from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition there are added data from MC4 and English Wikipedia.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an addtional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When for instance doing translation and NLI, it is well documented that there is a clear benefit to do a step of unsupervised LM-training before starting the finetuning.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically needs to be trained for a longer time. It is easy to train the base-models in a Google Colab.
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (contrary to the original T5) needs to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the huge models almost always will give the best results, they are also both more difficult and more expensive to finetune. I will strongly recommended to start with finetuning a base-models. The base-models can easily be finetuned on a standard graphic card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still rewuire access to significant hardware, even for finetuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning or interference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference both in Flax, PyTorch and TensorFlow format.
## Future
I will continue to train and release additional models to this set. What models that are added is dependent upon the feedbacki from the users
## Thanks
This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team has provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generate the collated coprus used for this training. In addition he has been a dicussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thougroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
| null |
Non_BioNLP
|
The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|✔|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/large/norwegian_NCC_plus_English_t5x_large/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly recommend external researchers to make their own evaluation. The main advantage with the T5-models are their flexibility. Traditionally, encoder-only models (like BERT) excels in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be available shorter.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained an additonal 500.000 steps on from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition there are added data from MC4 and English Wikipedia.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an addtional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When for instance doing translation and NLI, it is well documented that there is a clear benefit to do a step of unsupervised LM-training before starting the finetuning.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically needs to be trained for a longer time. It is easy to train the base-models in a Google Colab.
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (contrary to the original T5) needs to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the huge models almost always will give the best results, they are also both more difficult and more expensive to finetune. I will strongly recommended to start with finetuning a base-models. The base-models can easily be finetuned on a standard graphic card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still rewuire access to significant hardware, even for finetuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning or interference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference both in Flax, PyTorch and TensorFlow format.
## Future
I will continue to train and release additional models to this set. What models that are added is dependent upon the feedbacki from the users
## Thanks
This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team has provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generate the collated coprus used for this training. In addition he has been a dicussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thougroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
{"datasets": ["nbailab/NCC", "mc4", "wikipedia"], "language": [false, "nn", "sv", "dk", "is", "en"], "license": "apache-2.0", "widget": [{"text": "<extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede."}, {"text": "På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den."}]}
|
task
|
[
"TRANSLATION"
] | 46,813 |
mmcquade11-test/reuters-summarization
|
mmcquade11-test
|
text2text-generation
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"en",
"dataset:mmcquade11/autonlp-data-reuters-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-11-30T21:43:51+00:00
| 16 | 0 |
---
datasets:
- mmcquade11/autonlp-data-reuters-summarization
language: en
tags:
- a
- u
- t
- o
- n
- l
- p
widget:
- text: I love AutoNLP 🤗
co2_eq_emissions: 286.4350821612984
---
This is an autoNLP model I trained on Reuters dataset
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 34018133
- CO2 Emissions (in grams): 286.4350821612984
## Validation Metrics
- Loss: 1.1805976629257202
- Rouge1: 55.4013
- Rouge2: 30.8004
- RougeL: 52.57
- RougeLsum: 52.6103
- Gen Len: 15.3458
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/mmcquade11/autonlp-reuters-summarization-34018133
```
| null |
Non_BioNLP
|
This is an autoNLP model I trained on Reuters dataset
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 34018133
- CO2 Emissions (in grams): 286.4350821612984
## Validation Metrics
- Loss: 1.1805976629257202
- Rouge1: 55.4013
- Rouge2: 30.8004
- RougeL: 52.57
- RougeLsum: 52.6103
- Gen Len: 15.3458
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/mmcquade11/autonlp-reuters-summarization-34018133
```
|
{"datasets": ["mmcquade11/autonlp-data-reuters-summarization"], "language": "en", "tags": ["a", "u", "t", "o", "n", "l", "p"], "widget": [{"text": "I love AutoNLP 🤗"}], "co2_eq_emissions": 286.4350821612984}
|
task
|
[
"SUMMARIZATION"
] | 46,814 |
mtsdurica/madlad400-3b-mt-Q4_0-GGUF
|
mtsdurica
|
translation
|
[
"transformers",
"gguf",
"text2text-generation",
"text-generation-inference",
"llama-cpp",
"gguf-my-repo",
"translation",
"multilingual",
"en",
"ru",
"es",
"fr",
"de",
"it",
"pt",
"pl",
"nl",
"vi",
"tr",
"sv",
"id",
"ro",
"cs",
"zh",
"hu",
"ja",
"th",
"fi",
"fa",
"uk",
"da",
"el",
"no",
"bg",
"sk",
"ko",
"ar",
"lt",
"ca",
"sl",
"he",
"et",
"lv",
"hi",
"sq",
"ms",
"az",
"sr",
"ta",
"hr",
"kk",
"is",
"ml",
"mr",
"te",
"af",
"gl",
"fil",
"be",
"mk",
"eu",
"bn",
"ka",
"mn",
"bs",
"uz",
"ur",
"sw",
"yue",
"ne",
"kn",
"kaa",
"gu",
"si",
"cy",
"eo",
"la",
"hy",
"ky",
"tg",
"ga",
"mt",
"my",
"km",
"tt",
"so",
"ku",
"ps",
"pa",
"rw",
"lo",
"ha",
"dv",
"fy",
"lb",
"ckb",
"mg",
"gd",
"am",
"ug",
"ht",
"grc",
"hmn",
"sd",
"jv",
"mi",
"tk",
"ceb",
"yi",
"ba",
"fo",
"or",
"xh",
"su",
"kl",
"ny",
"sm",
"sn",
"co",
"zu",
"ig",
"yo",
"pap",
"st",
"haw",
"as",
"oc",
"cv",
"lus",
"tet",
"gsw",
"sah",
"br",
"rm",
"sa",
"bo",
"om",
"se",
"ce",
"cnh",
"ilo",
"hil",
"udm",
"os",
"lg",
"ti",
"vec",
"ts",
"tyv",
"kbd",
"ee",
"iba",
"av",
"kha",
"to",
"tn",
"nso",
"fj",
"zza",
"ak",
"ada",
"otq",
"dz",
"bua",
"cfm",
"ln",
"chm",
"gn",
"krc",
"wa",
"hif",
"yua",
"srn",
"war",
"rom",
"bik",
"pam",
"sg",
"lu",
"ady",
"kbp",
"syr",
"ltg",
"myv",
"iso",
"kac",
"bho",
"ay",
"kum",
"qu",
"za",
"pag",
"ngu",
"ve",
"pck",
"zap",
"tyz",
"hui",
"bbc",
"tzo",
"tiv",
"ksd",
"gom",
"min",
"ang",
"nhe",
"bgp",
"nzi",
"nnb",
"nv",
"zxx",
"bci",
"kv",
"new",
"mps",
"alt",
"meu",
"bew",
"fon",
"iu",
"abt",
"mgh",
"mnw",
"tvl",
"dov",
"tlh",
"ho",
"kw",
"mrj",
"meo",
"crh",
"mbt",
"emp",
"ace",
"ium",
"mam",
"gym",
"mai",
"crs",
"pon",
"ubu",
"fip",
"quc",
"gv",
"kj",
"btx",
"ape",
"chk",
"rcf",
"shn",
"tzh",
"mdf",
"ppk",
"ss",
"gag",
"cab",
"kri",
"seh",
"ibb",
"tbz",
"bru",
"enq",
"ach",
"cuk",
"kmb",
"wo",
"kek",
"qub",
"tab",
"bts",
"kos",
"rwo",
"cak",
"tuc",
"bum",
"cjk",
"gil",
"stq",
"tsg",
"quh",
"mak",
"arn",
"ban",
"jiv",
"sja",
"yap",
"tcy",
"toj",
"twu",
"xal",
"amu",
"rmc",
"hus",
"nia",
"kjh",
"bm",
"guh",
"mas",
"acf",
"dtp",
"ksw",
"bzj",
"din",
"zne",
"mad",
"msi",
"mag",
"mkn",
"kg",
"lhu",
"ch",
"qvi",
"mh",
"djk",
"sus",
"mfe",
"srm",
"dyu",
"ctu",
"gui",
"pau",
"inb",
"bi",
"mni",
"guc",
"jam",
"wal",
"jac",
"bas",
"gor",
"skr",
"nyu",
"noa",
"sda",
"gub",
"nog",
"cni",
"teo",
"tdx",
"sxn",
"rki",
"nr",
"frp",
"alz",
"taj",
"lrc",
"cce",
"rn",
"jvn",
"hvn",
"nij",
"dwr",
"izz",
"msm",
"bus",
"ktu",
"chr",
"maz",
"tzj",
"suz",
"knj",
"bim",
"gvl",
"bqc",
"tca",
"pis",
"prk",
"laj",
"mel",
"qxr",
"niq",
"ahk",
"shp",
"hne",
"spp",
"koi",
"krj",
"quf",
"luz",
"agr",
"tsc",
"mqy",
"gof",
"gbm",
"miq",
"dje",
"awa",
"bjj",
"qvz",
"sjp",
"tll",
"raj",
"kjg",
"bgz",
"quy",
"cbk",
"akb",
"oj",
"ify",
"mey",
"ks",
"cac",
"brx",
"qup",
"syl",
"jax",
"ff",
"ber",
"tks",
"trp",
"mrw",
"adh",
"smt",
"srr",
"ffm",
"qvc",
"mtr",
"ann",
"aa",
"noe",
"nut",
"gyn",
"kwi",
"xmm",
"msb",
"dataset:allenai/MADLAD-400",
"base_model:google/madlad400-3b-mt",
"base_model:quantized:google/madlad400-3b-mt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-13T15:01:37Z |
2024-07-13T15:01:51+00:00
| 45 | 0 |
---
base_model: google/madlad400-3b-mt
datasets:
- allenai/MADLAD-400
language:
- multilingual
- en
- ru
- es
- fr
- de
- it
- pt
- pl
- nl
- vi
- tr
- sv
- id
- ro
- cs
- zh
- hu
- ja
- th
- fi
- fa
- uk
- da
- el
- 'no'
- bg
- sk
- ko
- ar
- lt
- ca
- sl
- he
- et
- lv
- hi
- sq
- ms
- az
- sr
- ta
- hr
- kk
- is
- ml
- mr
- te
- af
- gl
- fil
- be
- mk
- eu
- bn
- ka
- mn
- bs
- uz
- ur
- sw
- yue
- ne
- kn
- kaa
- gu
- si
- cy
- eo
- la
- hy
- ky
- tg
- ga
- mt
- my
- km
- tt
- so
- ku
- ps
- pa
- rw
- lo
- ha
- dv
- fy
- lb
- ckb
- mg
- gd
- am
- ug
- ht
- grc
- hmn
- sd
- jv
- mi
- tk
- ceb
- yi
- ba
- fo
- or
- xh
- su
- kl
- ny
- sm
- sn
- co
- zu
- ig
- yo
- pap
- st
- haw
- as
- oc
- cv
- lus
- tet
- gsw
- sah
- br
- rm
- sa
- bo
- om
- se
- ce
- cnh
- ilo
- hil
- udm
- os
- lg
- ti
- vec
- ts
- tyv
- kbd
- ee
- iba
- av
- kha
- to
- tn
- nso
- fj
- zza
- ak
- ada
- otq
- dz
- bua
- cfm
- ln
- chm
- gn
- krc
- wa
- hif
- yua
- srn
- war
- rom
- bik
- pam
- sg
- lu
- ady
- kbp
- syr
- ltg
- myv
- iso
- kac
- bho
- ay
- kum
- qu
- za
- pag
- ngu
- ve
- pck
- zap
- tyz
- hui
- bbc
- tzo
- tiv
- ksd
- gom
- min
- ang
- nhe
- bgp
- nzi
- nnb
- nv
- zxx
- bci
- kv
- new
- mps
- alt
- meu
- bew
- fon
- iu
- abt
- mgh
- mnw
- tvl
- dov
- tlh
- ho
- kw
- mrj
- meo
- crh
- mbt
- emp
- ace
- ium
- mam
- gym
- mai
- crs
- pon
- ubu
- fip
- quc
- gv
- kj
- btx
- ape
- chk
- rcf
- shn
- tzh
- mdf
- ppk
- ss
- gag
- cab
- kri
- seh
- ibb
- tbz
- bru
- enq
- ach
- cuk
- kmb
- wo
- kek
- qub
- tab
- bts
- kos
- rwo
- cak
- tuc
- bum
- cjk
- gil
- stq
- tsg
- quh
- mak
- arn
- ban
- jiv
- sja
- yap
- tcy
- toj
- twu
- xal
- amu
- rmc
- hus
- nia
- kjh
- bm
- guh
- mas
- acf
- dtp
- ksw
- bzj
- din
- zne
- mad
- msi
- mag
- mkn
- kg
- lhu
- ch
- qvi
- mh
- djk
- sus
- mfe
- srm
- dyu
- ctu
- gui
- pau
- inb
- bi
- mni
- guc
- jam
- wal
- jac
- bas
- gor
- skr
- nyu
- noa
- sda
- gub
- nog
- cni
- teo
- tdx
- sxn
- rki
- nr
- frp
- alz
- taj
- lrc
- cce
- rn
- jvn
- hvn
- nij
- dwr
- izz
- msm
- bus
- ktu
- chr
- maz
- tzj
- suz
- knj
- bim
- gvl
- bqc
- tca
- pis
- prk
- laj
- mel
- qxr
- niq
- ahk
- shp
- hne
- spp
- koi
- krj
- quf
- luz
- agr
- tsc
- mqy
- gof
- gbm
- miq
- dje
- awa
- bjj
- qvz
- sjp
- tll
- raj
- kjg
- bgz
- quy
- cbk
- akb
- oj
- ify
- mey
- ks
- cac
- brx
- qup
- syl
- jax
- ff
- ber
- tks
- trp
- mrw
- adh
- smt
- srr
- ffm
- qvc
- mtr
- ann
- kaa
- aa
- noe
- nut
- gyn
- kwi
- xmm
- msb
library_name: transformers
license: apache-2.0
pipeline_tag: translation
tags:
- text2text-generation
- text-generation-inference
- llama-cpp
- gguf-my-repo
widget:
- text: <2en> Como vai, amigo?
example_title: Translation to English
- text: <2de> Do you speak German?
example_title: Translation to German
---
# mtsdurica/madlad400-3b-mt-Q4_0-GGUF
This model was converted to GGUF format from [`google/madlad400-3b-mt`](https://huggingface.co/google/madlad400-3b-mt) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/madlad400-3b-mt) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -c 2048
```
| null |
Non_BioNLP
|
# mtsdurica/madlad400-3b-mt-Q4_0-GGUF
This model was converted to GGUF format from [`google/madlad400-3b-mt`](https://huggingface.co/google/madlad400-3b-mt) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/madlad400-3b-mt) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -c 2048
```
|
{"base_model": "google/madlad400-3b-mt", "datasets": ["allenai/MADLAD-400"], "language": ["multilingual", "en", "ru", "es", "fr", "de", "it", "pt", "pl", "nl", "vi", "tr", "sv", "id", "ro", "cs", "zh", "hu", "ja", "th", "fi", "fa", "uk", "da", "el", "no", "bg", "sk", "ko", "ar", "lt", "ca", "sl", "he", "et", "lv", "hi", "sq", "ms", "az", "sr", "ta", "hr", "kk", "is", "ml", "mr", "te", "af", "gl", "fil", "be", "mk", "eu", "bn", "ka", "mn", "bs", "uz", "ur", "sw", "yue", "ne", "kn", "kaa", "gu", "si", "cy", "eo", "la", "hy", "ky", "tg", "ga", "mt", "my", "km", "tt", "so", "ku", "ps", "pa", "rw", "lo", "ha", "dv", "fy", "lb", "ckb", "mg", "gd", "am", "ug", "ht", "grc", "hmn", "sd", "jv", "mi", "tk", "ceb", "yi", "ba", "fo", "or", "xh", "su", "kl", "ny", "sm", "sn", "co", "zu", "ig", "yo", "pap", "st", "haw", "as", "oc", "cv", "lus", "tet", "gsw", "sah", "br", "rm", "sa", "bo", "om", "se", "ce", "cnh", "ilo", "hil", "udm", "os", "lg", "ti", "vec", "ts", "tyv", "kbd", "ee", "iba", "av", "kha", "to", "tn", "nso", "fj", "zza", "ak", "ada", "otq", "dz", "bua", "cfm", "ln", "chm", "gn", "krc", "wa", "hif", "yua", "srn", "war", "rom", "bik", "pam", "sg", "lu", "ady", "kbp", "syr", "ltg", "myv", "iso", "kac", "bho", "ay", "kum", "qu", "za", "pag", "ngu", "ve", "pck", "zap", "tyz", "hui", "bbc", "tzo", "tiv", "ksd", "gom", "min", "ang", "nhe", "bgp", "nzi", "nnb", "nv", "zxx", "bci", "kv", "new", "mps", "alt", "meu", "bew", "fon", "iu", "abt", "mgh", "mnw", "tvl", "dov", "tlh", "ho", "kw", "mrj", "meo", "crh", "mbt", "emp", "ace", "ium", "mam", "gym", "mai", "crs", "pon", "ubu", "fip", "quc", "gv", "kj", "btx", "ape", "chk", "rcf", "shn", "tzh", "mdf", "ppk", "ss", "gag", "cab", "kri", "seh", "ibb", "tbz", "bru", "enq", "ach", "cuk", "kmb", "wo", "kek", "qub", "tab", "bts", "kos", "rwo", "cak", "tuc", "bum", "cjk", "gil", "stq", "tsg", "quh", "mak", "arn", "ban", "jiv", "sja", "yap", "tcy", "toj", "twu", "xal", "amu", "rmc", "hus", "nia", "kjh", "bm", "guh", "mas", "acf", "dtp", "ksw", "bzj", "din", "zne", "mad", "msi", "mag", "mkn", "kg", "lhu", "ch", "qvi", "mh", "djk", "sus", "mfe", "srm", "dyu", "ctu", "gui", "pau", "inb", "bi", "mni", "guc", "jam", "wal", "jac", "bas", "gor", "skr", "nyu", "noa", "sda", "gub", "nog", "cni", "teo", "tdx", "sxn", "rki", "nr", "frp", "alz", "taj", "lrc", "cce", "rn", "jvn", "hvn", "nij", "dwr", "izz", "msm", "bus", "ktu", "chr", "maz", "tzj", "suz", "knj", "bim", "gvl", "bqc", "tca", "pis", "prk", "laj", "mel", "qxr", "niq", "ahk", "shp", "hne", "spp", "koi", "krj", "quf", "luz", "agr", "tsc", "mqy", "gof", "gbm", "miq", "dje", "awa", "bjj", "qvz", "sjp", "tll", "raj", "kjg", "bgz", "quy", "cbk", "akb", "oj", "ify", "mey", "ks", "cac", "brx", "qup", "syl", "jax", "ff", "ber", "tks", "trp", "mrw", "adh", "smt", "srr", "ffm", "qvc", "mtr", "ann", "kaa", "aa", "noe", "nut", "gyn", "kwi", "xmm", "msb"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "translation", "tags": ["text2text-generation", "text-generation-inference", "llama-cpp", "gguf-my-repo"], "widget": [{"text": "<2en> Como vai, amigo?", "example_title": "Translation to English"}, {"text": "<2de> Do you speak German?", "example_title": "Translation to German"}]}
|
task
|
[
"TRANSLATION"
] | 46,816 |
gokulsrinivasagan/bert_base_lda_100_stsb
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_lda_100",
"base_model:finetune:gokulsrinivasagan/bert_base_lda_100",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-22T14:34:33Z |
2024-11-22T14:36:23+00:00
| 5 | 0 |
---
base_model: gokulsrinivasagan/bert_base_lda_100
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: bert_base_lda_100_stsb
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- type: spearmanr
value: .nan
name: Spearmanr
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_100_stsb
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_100](https://huggingface.co/gokulsrinivasagan/bert_base_lda_100) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3354
- Pearson: nan
- Spearmanr: nan
- Combined Score: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 6.0379 | 1.0 | 23 | 2.8532 | nan | nan | nan |
| 2.286 | 2.0 | 46 | 2.6158 | nan | nan | nan |
| 2.1985 | 3.0 | 69 | 2.3354 | nan | nan | nan |
| 2.1934 | 4.0 | 92 | 2.4655 | nan | nan | nan |
| 2.1771 | 5.0 | 115 | 2.5613 | nan | nan | nan |
| 2.1903 | 6.0 | 138 | 2.3448 | nan | nan | nan |
| 2.2164 | 7.0 | 161 | 3.0915 | nan | nan | nan |
| 2.2509 | 8.0 | 184 | 2.3759 | nan | nan | nan |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_100_stsb
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_100](https://huggingface.co/gokulsrinivasagan/bert_base_lda_100) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3354
- Pearson: nan
- Spearmanr: nan
- Combined Score: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 6.0379 | 1.0 | 23 | 2.8532 | nan | nan | nan |
| 2.286 | 2.0 | 46 | 2.6158 | nan | nan | nan |
| 2.1985 | 3.0 | 69 | 2.3354 | nan | nan | nan |
| 2.1934 | 4.0 | 92 | 2.4655 | nan | nan | nan |
| 2.1771 | 5.0 | 115 | 2.5613 | nan | nan | nan |
| 2.1903 | 6.0 | 138 | 2.3448 | nan | nan | nan |
| 2.2164 | 7.0 | 161 | 3.0915 | nan | nan | nan |
| 2.2509 | 8.0 | 184 | 2.3759 | nan | nan | nan |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "gokulsrinivasagan/bert_base_lda_100", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_base_lda_100_stsb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE STSB", "type": "glue", "args": "stsb"}, "metrics": [{"type": "spearmanr", "value": NaN, "name": "Spearmanr"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,817 |
SEBIS/code_trans_t5_base_code_documentation_generation_go
|
SEBIS
|
summarization
|
[
"transformers",
"pytorch",
"jax",
"t5",
"feature-extraction",
"summarization",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-06-23T04:12:04+00:00
| 128 | 0 |
---
tags:
- summarization
widget:
- text: func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot
&& pr . Match >= pr . PendingSnapshot }
---
# CodeTrans model for code documentation generation go
Pretrained model on programming language go using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus go dataset.
## Intended uses & limitations
The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_go"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_go", skip_special_tokens=True),
device=0
)
tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/go/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
| null |
Non_BioNLP
|
# CodeTrans model for code documentation generation go
Pretrained model on programming language go using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus go dataset.
## Intended uses & limitations
The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_go"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_go", skip_special_tokens=True),
device=0
)
tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/go/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
{"tags": ["summarization"], "widget": [{"text": "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"}]}
|
task
|
[
"SUMMARIZATION"
] | 46,818 |
TransferGraph/YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_irony
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602",
"base_model:adapter:YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-02-27T17:33:05Z |
2024-02-29T13:38:35+00:00
| 0 | 0 |
---
base_model: YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: irony
split: validation
args: irony
metrics:
- type: accuracy
value: 0.47643979057591623
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602](https://huggingface.co/YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.5246 | None | 0 |
| 0.5257 | 0.7225 | 0 |
| 0.4743 | 0.7059 | 1 |
| 0.4743 | 0.6978 | 2 |
| 0.4775 | 0.6971 | 3 |
| 0.4764 | 0.6953 | 4 |
| 0.4764 | 0.6959 | 5 |
| 0.4764 | 0.6963 | 6 |
| 0.4764 | 0.6956 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602](https://huggingface.co/YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.5246 | None | 0 |
| 0.5257 | 0.7225 | 0 |
| 0.4743 | 0.7059 | 1 |
| 0.4743 | 0.6978 | 2 |
| 0.4775 | 0.6971 | 3 |
| 0.4764 | 0.6953 | 4 |
| 0.4764 | 0.6959 | 5 |
| 0.4764 | 0.6963 | 6 |
| 0.4764 | 0.6956 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_irony", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "irony", "split": "validation", "args": "irony"}, "metrics": [{"type": "accuracy", "value": 0.47643979057591623, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,819 |
mradermacher/airoboros-34b-3.3-i1-GGUF
|
mradermacher
| null |
[
"transformers",
"gguf",
"en",
"dataset:jondurbin/airoboros-3.2",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:mattpscott/airoboros-summarization",
"dataset:unalignment/toxic-dpo-v0.2",
"base_model:jondurbin/airoboros-34b-3.3",
"base_model:quantized:jondurbin/airoboros-34b-3.3",
"license:other",
"endpoints_compatible",
"region:us"
] | 2024-04-03T02:52:22Z |
2024-05-06T05:21:32+00:00
| 490 | 1 |
---
base_model: jondurbin/airoboros-34b-3.3
datasets:
- jondurbin/airoboros-3.2
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- jondurbin/gutenberg-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- piqa
- Vezora/Tested-22k-Python-Alpaca
- mattpscott/airoboros-summarization
- unalignment/toxic-dpo-v0.2
language:
- en
library_name: transformers
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-34b-3.3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ1_S.gguf) | i1-IQ1_S | 8.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ1_M.gguf) | i1-IQ1_M | 8.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_S.gguf) | i1-IQ2_S | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_M.gguf) | i1-IQ3_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.1 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| null |
Non_BioNLP
|
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-34b-3.3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ1_S.gguf) | i1-IQ1_S | 8.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ1_M.gguf) | i1-IQ1_M | 8.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_S.gguf) | i1-IQ2_S | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_M.gguf) | i1-IQ3_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.1 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"base_model": "jondurbin/airoboros-34b-3.3", "datasets": ["jondurbin/airoboros-3.2", "bluemoon-fandom-1-1-rp-cleaned", "boolq", "jondurbin/gutenberg-dpo-v0.1", "LDJnr/Capybara", "jondurbin/cinematika-v0.1", "glaiveai/glaive-function-calling-v2", "grimulkan/LimaRP-augmented", "piqa", "Vezora/Tested-22k-Python-Alpaca", "mattpscott/airoboros-summarization", "unalignment/toxic-dpo-v0.2"], "language": ["en"], "library_name": "transformers", "license": "other", "license_name": "yi-license", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE", "quantized_by": "mradermacher"}
|
task
|
[
"SUMMARIZATION"
] | 46,822 |
YxBxRyXJx/bge-base-financial-matryoshka
|
YxBxRyXJx
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5600",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-15T10:18:23Z |
2024-11-15T10:19:00+00:00
| 6 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5600
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: The Federal Energy Regulatory Commission (“FERC”) has also taken
steps to enable the participation of energy storage in wholesale energy markets.
sentences:
- What segment-specific regulations apply to CVS Health Corporation's Pharmacy &
Consumer Wellness segment?
- What types of contracts does the company have for its health insurance plans,
and how does premium revenue recognition function under these contracts?
- What federal agency has taken steps to facilitate energy storage participation
in wholesale energy markets?
- source_sentence: Investments in subsidiaries and partnerships which we do not control
but have significant influence are accounted for under the equity method.
sentences:
- How does the company aim to protect the health and well-being of the communities
it operates in?
- What are the key factors affecting the evaluation of the Economic Value of Equity
(EVE) at the Charles Schwab Corporation?
- What accounting method does the company use to account for investments in subsidiaries
and partnerships where it does not control but has significant influence?
- source_sentence: Item 8 of IBM's 2023 Annual Report includes financial statements
and supplementary data spanning pages 44 through 121.
sentences:
- What entities are included among the Guarantors that guarantee each other’s debt
securities as described in Comcast’s 2023 Annual Report?
- What uncertainties exist regarding projections of future cash needs and cash flows?
- How many pages in IBM's 2023 Annual Report to Stockholders are dedicated to financial
statements and supplementary data?
- source_sentence: 'Our compensation philosophy creates the framework for our rewards
strategy, which focuses on five key elements: pay-for-performance, external market-based
research, internal equity, fiscal responsibility, and legal compliance.'
sentences:
- What financial instruments does the company invest in that are sensitive to interest
rates?
- What elements are included in the company's compensation programs?
- What is the expected maximum potential loss from hurricane events for Chubb as
of the end of 2023?
- source_sentence: Outside of the U.S., many countries have established vehicle safety
standards and regulations and are likely to adopt additional, more stringent requirements
in the future.
sentences:
- What percentage of the company's sales categories in fiscal 2023 were failure
and maintenance related?
- What competitive factors influence Chubb International's international operations?
- What changes are occurring with vehicle safety regulations outside of the U.S.?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6885714285714286
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8278571428571428
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8728571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9164285714285715
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6885714285714286
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.275952380952381
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17457142857142854
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09164285714285714
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6885714285714286
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8278571428571428
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8728571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9164285714285715
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8042449175537354
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.768181405895692
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7712863400405022
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.6864285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8292857142857143
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8728571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9135714285714286
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6864285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2764285714285714
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17457142857142854
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09135714285714285
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6864285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8292857142857143
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8728571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9135714285714286
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8024352620004916
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7665753968253971
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7697268174707245
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.68
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.825
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8635714285714285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9042857142857142
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.68
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.275
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1727142857142857
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09042857142857141
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.68
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.825
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8635714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9042857142857142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7955058944909328
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7603066893424041
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7637281364444245
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6621428571428571
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7964285714285714
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8457142857142858
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8907142857142857
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6621428571428571
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2654761904761905
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16914285714285712
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08907142857142857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6621428571428571
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7964285714285714
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8457142857142858
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8907142857142857
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7772894744328753
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7408999433106581
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7449491476160666
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6285714285714286
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7635714285714286
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8057142857142857
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8642857142857143
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6285714285714286
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2545238095238095
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16114285714285712
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08642857142857142
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6285714285714286
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7635714285714286
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8057142857142857
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8642857142857143
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7447153698860624
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7067037981859416
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7112341263725279
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("YxBxRyXJx/bge-base-financial-matryoshka")
# Run inference
sentences = [
'Outside of the U.S., many countries have established vehicle safety standards and regulations and are likely to adopt additional, more stringent requirements in the future.',
'What changes are occurring with vehicle safety regulations outside of the U.S.?',
"What competitive factors influence Chubb International's international operations?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_accuracy@3 | 0.8279 | 0.8293 | 0.825 | 0.7964 | 0.7636 |
| cosine_accuracy@5 | 0.8729 | 0.8729 | 0.8636 | 0.8457 | 0.8057 |
| cosine_accuracy@10 | 0.9164 | 0.9136 | 0.9043 | 0.8907 | 0.8643 |
| cosine_precision@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_precision@3 | 0.276 | 0.2764 | 0.275 | 0.2655 | 0.2545 |
| cosine_precision@5 | 0.1746 | 0.1746 | 0.1727 | 0.1691 | 0.1611 |
| cosine_precision@10 | 0.0916 | 0.0914 | 0.0904 | 0.0891 | 0.0864 |
| cosine_recall@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_recall@3 | 0.8279 | 0.8293 | 0.825 | 0.7964 | 0.7636 |
| cosine_recall@5 | 0.8729 | 0.8729 | 0.8636 | 0.8457 | 0.8057 |
| cosine_recall@10 | 0.9164 | 0.9136 | 0.9043 | 0.8907 | 0.8643 |
| **cosine_ndcg@10** | **0.8042** | **0.8024** | **0.7955** | **0.7773** | **0.7447** |
| cosine_mrr@10 | 0.7682 | 0.7666 | 0.7603 | 0.7409 | 0.7067 |
| cosine_map@100 | 0.7713 | 0.7697 | 0.7637 | 0.7449 | 0.7112 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 5,600 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 44.34 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 20.46 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
| positive | anchor |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Z-net is AutoZone's proprietary electronic catalog and enables AutoZoners to efficiently look up parts that customers need, providing complete job solutions and information based on vehicle specifics. It also tracks inventory availability across different locations.</code> | <code>What is the purpose of Z-net in AutoZone stores?</code> |
| <code>In 2023, the allowance for loan and lease losses was $13.3 billion on total loans and leases of $1,050.2 billion, which excludes loans accounted for under the fair value option.</code> | <code>What was the total amount of loans and leases at Bank of America by the end of 2023, excluding those accounted for under the fair value option?</code> |
| <code>We significantly improved features in Service Manager™, which installers can use from their mobile devices to get service instantly. We continue to provide 24/7 support for installers and Enphase system owners globally across our phone, online chat, and email communications channel. We continue to train our customer service agents with a goal of reducing average customer wait times to under one minute, and we continue to expand our network of field service technicians in the United States, Europe and Australia to provide direct homeowner assistance.</code> | <code>What measures has Enphase Energy, Inc. taken to improve customer service in 2023?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.9143 | 10 | 1.4537 | 0.7992 | 0.7952 | 0.7900 | 0.7703 | 0.7350 |
| **1.8286** | **20** | **0.6857** | **0.8042** | **0.8024** | **0.7955** | **0.7773** | **0.7447** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("YxBxRyXJx/bge-base-financial-matryoshka")
# Run inference
sentences = [
'Outside of the U.S., many countries have established vehicle safety standards and regulations and are likely to adopt additional, more stringent requirements in the future.',
'What changes are occurring with vehicle safety regulations outside of the U.S.?',
"What competitive factors influence Chubb International's international operations?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_accuracy@3 | 0.8279 | 0.8293 | 0.825 | 0.7964 | 0.7636 |
| cosine_accuracy@5 | 0.8729 | 0.8729 | 0.8636 | 0.8457 | 0.8057 |
| cosine_accuracy@10 | 0.9164 | 0.9136 | 0.9043 | 0.8907 | 0.8643 |
| cosine_precision@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_precision@3 | 0.276 | 0.2764 | 0.275 | 0.2655 | 0.2545 |
| cosine_precision@5 | 0.1746 | 0.1746 | 0.1727 | 0.1691 | 0.1611 |
| cosine_precision@10 | 0.0916 | 0.0914 | 0.0904 | 0.0891 | 0.0864 |
| cosine_recall@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_recall@3 | 0.8279 | 0.8293 | 0.825 | 0.7964 | 0.7636 |
| cosine_recall@5 | 0.8729 | 0.8729 | 0.8636 | 0.8457 | 0.8057 |
| cosine_recall@10 | 0.9164 | 0.9136 | 0.9043 | 0.8907 | 0.8643 |
| **cosine_ndcg@10** | **0.8042** | **0.8024** | **0.7955** | **0.7773** | **0.7447** |
| cosine_mrr@10 | 0.7682 | 0.7666 | 0.7603 | 0.7409 | 0.7067 |
| cosine_map@100 | 0.7713 | 0.7697 | 0.7637 | 0.7449 | 0.7112 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 5,600 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 44.34 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 20.46 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
| positive | anchor |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Z-net is AutoZone's proprietary electronic catalog and enables AutoZoners to efficiently look up parts that customers need, providing complete job solutions and information based on vehicle specifics. It also tracks inventory availability across different locations.</code> | <code>What is the purpose of Z-net in AutoZone stores?</code> |
| <code>In 2023, the allowance for loan and lease losses was $13.3 billion on total loans and leases of $1,050.2 billion, which excludes loans accounted for under the fair value option.</code> | <code>What was the total amount of loans and leases at Bank of America by the end of 2023, excluding those accounted for under the fair value option?</code> |
| <code>We significantly improved features in Service Manager™, which installers can use from their mobile devices to get service instantly. We continue to provide 24/7 support for installers and Enphase system owners globally across our phone, online chat, and email communications channel. We continue to train our customer service agents with a goal of reducing average customer wait times to under one minute, and we continue to expand our network of field service technicians in the United States, Europe and Australia to provide direct homeowner assistance.</code> | <code>What measures has Enphase Energy, Inc. taken to improve customer service in 2023?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.9143 | 10 | 1.4537 | 0.7992 | 0.7952 | 0.7900 | 0.7703 | 0.7350 |
| **1.8286** | **20** | **0.6857** | **0.8042** | **0.8024** | **0.7955** | **0.7773** | **0.7447** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-base-en-v1.5", "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5600", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "The Federal Energy Regulatory Commission (“FERC”) has also taken steps to enable the participation of energy storage in wholesale energy markets.", "sentences": ["What segment-specific regulations apply to CVS Health Corporation's Pharmacy & Consumer Wellness segment?", "What types of contracts does the company have for its health insurance plans, and how does premium revenue recognition function under these contracts?", "What federal agency has taken steps to facilitate energy storage participation in wholesale energy markets?"]}, {"source_sentence": "Investments in subsidiaries and partnerships which we do not control but have significant influence are accounted for under the equity method.", "sentences": ["How does the company aim to protect the health and well-being of the communities it operates in?", "What are the key factors affecting the evaluation of the Economic Value of Equity (EVE) at the Charles Schwab Corporation?", "What accounting method does the company use to account for investments in subsidiaries and partnerships where it does not control but has significant influence?"]}, {"source_sentence": "Item 8 of IBM's 2023 Annual Report includes financial statements and supplementary data spanning pages 44 through 121.", "sentences": ["What entities are included among the Guarantors that guarantee each other’s debt securities as described in Comcast’s 2023 Annual Report?", "What uncertainties exist regarding projections of future cash needs and cash flows?", "How many pages in IBM's 2023 Annual Report to Stockholders are dedicated to financial statements and supplementary data?"]}, {"source_sentence": "Our compensation philosophy creates the framework for our rewards strategy, which focuses on five key elements: pay-for-performance, external market-based research, internal equity, fiscal responsibility, and legal compliance.", "sentences": ["What financial instruments does the company invest in that are sensitive to interest rates?", "What elements are included in the company's compensation programs?", "What is the expected maximum potential loss from hurricane events for Chubb as of the end of 2023?"]}, {"source_sentence": "Outside of the U.S., many countries have established vehicle safety standards and regulations and are likely to adopt additional, more stringent requirements in the future.", "sentences": ["What percentage of the company's sales categories in fiscal 2023 were failure and maintenance related?", "What competitive factors influence Chubb International's international operations?", "What changes are occurring with vehicle safety regulations outside of the U.S.?"]}], "model-index": [{"name": "BGE base Financial Matryoshka", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6885714285714286, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8278571428571428, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8728571428571429, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9164285714285715, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6885714285714286, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.275952380952381, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17457142857142854, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09164285714285714, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6885714285714286, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8278571428571428, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8728571428571429, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9164285714285715, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.8042449175537354, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.768181405895692, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7712863400405022, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6864285714285714, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8292857142857143, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8728571428571429, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9135714285714286, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6864285714285714, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2764285714285714, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17457142857142854, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09135714285714285, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6864285714285714, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8292857142857143, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8728571428571429, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9135714285714286, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.8024352620004916, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7665753968253971, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7697268174707245, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.68, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.825, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8635714285714285, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9042857142857142, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.68, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.275, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1727142857142857, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09042857142857141, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.68, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.825, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8635714285714285, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9042857142857142, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7955058944909328, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7603066893424041, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7637281364444245, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6621428571428571, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7964285714285714, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8457142857142858, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8907142857142857, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6621428571428571, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2654761904761905, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.16914285714285712, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08907142857142857, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6621428571428571, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7964285714285714, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8457142857142858, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8907142857142857, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7772894744328753, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7408999433106581, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7449491476160666, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6285714285714286, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7635714285714286, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8057142857142857, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8642857142857143, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6285714285714286, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2545238095238095, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.16114285714285712, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08642857142857142, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6285714285714286, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7635714285714286, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8057142857142857, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8642857142857143, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7447153698860624, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7067037981859416, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7112341263725279, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,823 |
nahyeonkang/ai.keepit
|
nahyeonkang
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:nsmc",
"base_model:beomi/kcbert-base",
"base_model:finetune:beomi/kcbert-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-03T16:21:01Z |
2023-08-03T17:56:35+00:00
| 13 | 0 |
---
base_model: beomi/kcbert-base
datasets:
- nsmc
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: ai.keepit
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: nsmc
type: nsmc
config: default
split: test
args: default
metrics:
- type: accuracy
value: 0.90204
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai.keepit
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the nsmc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3046
- Accuracy: 0.9020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2715 | 1.0 | 9375 | 0.2604 | 0.8957 |
| 0.2137 | 2.0 | 18750 | 0.2677 | 0.9003 |
| 0.1655 | 3.0 | 28125 | 0.3046 | 0.9020 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai.keepit
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the nsmc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3046
- Accuracy: 0.9020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2715 | 1.0 | 9375 | 0.2604 | 0.8957 |
| 0.2137 | 2.0 | 18750 | 0.2677 | 0.9003 |
| 0.1655 | 3.0 | 28125 | 0.3046 | 0.9020 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
{"base_model": "beomi/kcbert-base", "datasets": ["nsmc"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "ai.keepit", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "nsmc", "type": "nsmc", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.90204, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,824 |
google/t5-large-lm-adapt
|
google
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"t5-lm-adapt",
"en",
"dataset:c4",
"arxiv:2002.05202",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-01-24T16:52:08+00:00
| 2,748 | 8 |
---
datasets:
- c4
language: en
license: apache-2.0
tags:
- t5-lm-adapt
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted
## Version 1.1 - LM-Adapted
[T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-large):
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
- no parameter sharing between embedding and classifier layer
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.
and is pretrained on both the denoising and language modeling objective.
More specifically, this checkpoint is initialized from [T5 Version 1.1 - Large](https://huggingface.co/google/https://huggingface.co/google/t5-v1_1-large)
and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf).
This adaptation improves the ability of the model to be used for prompt tuning.
**Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp).
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

| null |
Non_BioNLP
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted
## Version 1.1 - LM-Adapted
[T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-large):
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
- no parameter sharing between embedding and classifier layer
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.
and is pretrained on both the denoising and language modeling objective.
More specifically, this checkpoint is initialized from [T5 Version 1.1 - Large](https://huggingface.co/google/https://huggingface.co/google/t5-v1_1-large)
and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf).
This adaptation improves the ability of the model to be used for prompt tuning.
**Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp).
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

|
{"datasets": ["c4"], "language": "en", "license": "apache-2.0", "tags": ["t5-lm-adapt"]}
|
task
|
[
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 46,825 |
ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa-finetuned-ar
|
ahmeddbahaa
|
summarization
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"Abstractive Summarization",
"ar",
"generated_from_trainer",
"dataset:xlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-06-08T16:23:58Z |
2022-06-08T22:22:19+00:00
| 26 | 1 |
---
datasets:
- xlsum
tags:
- mt5
- summarization
- Abstractive Summarization
- ar
- generated_from_trainer
model-index:
- name: mT5_multilingual_XLSum-finetuned-fa-finetuned-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-fa-finetuned-ar
This model is a fine-tuned version of [ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa](https://huggingface.co/ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6352
- Rouge-1: 28.69
- Rouge-2: 11.6
- Rouge-l: 24.29
- Gen Len: 41.37
- Bertscore: 73.37
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-fa-finetuned-ar
This model is a fine-tuned version of [ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa](https://huggingface.co/ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6352
- Rouge-1: 28.69
- Rouge-2: 11.6
- Rouge-l: 24.29
- Gen Len: 41.37
- Bertscore: 73.37
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
{"datasets": ["xlsum"], "tags": ["mt5", "summarization", "Abstractive Summarization", "ar", "generated_from_trainer"], "model-index": [{"name": "mT5_multilingual_XLSum-finetuned-fa-finetuned-ar", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 46,826 |
gokuls/bert_uncased_L-10_H-768_A-12_emotion
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:google/bert_uncased_L-10_H-768_A-12",
"base_model:finetune:google/bert_uncased_L-10_H-768_A-12",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-06T16:51:50Z |
2023-10-06T16:59:05+00:00
| 7 | 0 |
---
base_model: google/bert_uncased_L-10_H-768_A-12
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_uncased_L-10_H-768_A-12_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.941
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-10_H-768_A-12_emotion
This model is a fine-tuned version of [google/bert_uncased_L-10_H-768_A-12](https://huggingface.co/google/bert_uncased_L-10_H-768_A-12) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2108
- Accuracy: 0.941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4839 | 1.0 | 250 | 0.1626 | 0.9375 |
| 0.1446 | 2.0 | 500 | 0.1273 | 0.938 |
| 0.1018 | 3.0 | 750 | 0.1331 | 0.9375 |
| 0.0835 | 4.0 | 1000 | 0.1562 | 0.9395 |
| 0.0688 | 5.0 | 1250 | 0.1724 | 0.94 |
| 0.0487 | 6.0 | 1500 | 0.2108 | 0.941 |
| 0.0315 | 7.0 | 1750 | 0.2439 | 0.9375 |
| 0.0201 | 8.0 | 2000 | 0.2511 | 0.9395 |
| 0.0128 | 9.0 | 2250 | 0.2772 | 0.934 |
| 0.0086 | 10.0 | 2500 | 0.2811 | 0.939 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-10_H-768_A-12_emotion
This model is a fine-tuned version of [google/bert_uncased_L-10_H-768_A-12](https://huggingface.co/google/bert_uncased_L-10_H-768_A-12) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2108
- Accuracy: 0.941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4839 | 1.0 | 250 | 0.1626 | 0.9375 |
| 0.1446 | 2.0 | 500 | 0.1273 | 0.938 |
| 0.1018 | 3.0 | 750 | 0.1331 | 0.9375 |
| 0.0835 | 4.0 | 1000 | 0.1562 | 0.9395 |
| 0.0688 | 5.0 | 1250 | 0.1724 | 0.94 |
| 0.0487 | 6.0 | 1500 | 0.2108 | 0.941 |
| 0.0315 | 7.0 | 1750 | 0.2439 | 0.9375 |
| 0.0201 | 8.0 | 2000 | 0.2511 | 0.9395 |
| 0.0128 | 9.0 | 2250 | 0.2772 | 0.934 |
| 0.0086 | 10.0 | 2500 | 0.2811 | 0.939 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
{"base_model": "google/bert_uncased_L-10_H-768_A-12", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_uncased_L-10_H-768_A-12_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.941, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,827 |
mspy/twitter-paraphrase-embeddings
|
mspy
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:13063",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-28T12:26:58Z |
2024-07-28T12:29:07+00:00
| 5 | 0 |
---
base_model: sentence-transformers/all-mpnet-base-v2
datasets: []
language: []
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:13063
- loss:CosineSimilarityLoss
widget:
- source_sentence: I cant wait to leave Chicago
sentences:
- This is the shit Chicago needs to be recognized for not Keef
- is candice singing again tonight
- half time Chelsea were losing 10
- source_sentence: Andre miller best lobbing pg in the game
sentences:
- Am I the only one who dont get Amber alert
- Backstrom hurt in warmup Harding could start
- Andre miller is even slower in person
- source_sentence: Bayless couldve dunked that from the free throw
sentences:
- but what great finger roll by Bayless
- Wow Bayless has to make EspnSCTop with that end of 3rd
- i mean calum u didnt follow
- source_sentence: Backstrom Hurt in warmups Harding gets the start
sentences:
- Should I go to Nashville or Chicago for my 17th birthday
- I hate Chelsea possibly more than most
- Of course Backstrom would get injured during warmups
- source_sentence: Calum I love you plz follow me
sentences:
- CALUM PLEASE BE MY FIRST CELEBRITY TO FOLLOW ME
- Walking around downtown Chicago in a dress and listening to the new Iggy Pop
- I think Candice has what it takes to win American Idol AND Angie too
model-index:
- name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.6949485250178733
name: Pearson Cosine
- type: spearman_cosine
value: 0.6626359968437283
name: Spearman Cosine
- type: pearson_manhattan
value: 0.688092975176289
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6630998028133662
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6880277270034267
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6626358741747785
name: Spearman Euclidean
- type: pearson_dot
value: 0.694948520847878
name: Pearson Dot
- type: spearman_dot
value: 0.6626359082695851
name: Spearman Dot
- type: pearson_max
value: 0.6949485250178733
name: Pearson Max
- type: spearman_max
value: 0.6630998028133662
name: Spearman Max
---
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 84f2bcc00d77236f9e89c8a360a00fb1139bf47d -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mspy/twitter-paraphrase-embeddings")
# Run inference
sentences = [
'Calum I love you plz follow me',
'CALUM PLEASE BE MY FIRST CELEBRITY TO FOLLOW ME',
'Walking around downtown Chicago in a dress and listening to the new Iggy Pop',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6949 |
| **spearman_cosine** | **0.6626** |
| pearson_manhattan | 0.6881 |
| spearman_manhattan | 0.6631 |
| pearson_euclidean | 0.688 |
| spearman_euclidean | 0.6626 |
| pearson_dot | 0.6949 |
| spearman_dot | 0.6626 |
| pearson_max | 0.6949 |
| spearman_max | 0.6631 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 13,063 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 7 tokens</li><li>mean: 11.16 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 12.31 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.33</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------|:-------------------------------------------------------------------|:-----------------|
| <code>EJ Manuel the 1st QB to go in this draft</code> | <code>But my bro from the 757 EJ Manuel is the 1st QB gone</code> | <code>1.0</code> |
| <code>EJ Manuel the 1st QB to go in this draft</code> | <code>Can believe EJ Manuel went as the 1st QB in the draft</code> | <code>1.0</code> |
| <code>EJ Manuel the 1st QB to go in this draft</code> | <code>EJ MANUEL IS THE 1ST QB what</code> | <code>0.6</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 4,727 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.04 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 12.22 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.33</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:---------------------------------------------------------------|:------------------------------------------------------------------|:-----------------|
| <code>A Walk to Remember is the definition of true love</code> | <code>A Walk to Remember is on and Im in town and Im upset</code> | <code>0.2</code> |
| <code>A Walk to Remember is the definition of true love</code> | <code>A Walk to Remember is the cutest thing</code> | <code>0.6</code> |
| <code>A Walk to Remember is the definition of true love</code> | <code>A walk to remember is on ABC family youre welcome</code> | <code>0.2</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | spearman_cosine |
|:------:|:----:|:-------------:|:------:|:---------------:|
| 0.1225 | 100 | - | 0.0729 | 0.6058 |
| 0.2449 | 200 | - | 0.0646 | 0.6340 |
| 0.3674 | 300 | - | 0.0627 | 0.6397 |
| 0.4899 | 400 | - | 0.0621 | 0.6472 |
| 0.6124 | 500 | 0.0627 | 0.0626 | 0.6496 |
| 0.7348 | 600 | - | 0.0621 | 0.6446 |
| 0.8573 | 700 | - | 0.0593 | 0.6695 |
| 0.9798 | 800 | - | 0.0636 | 0.6440 |
| 1.1023 | 900 | - | 0.0618 | 0.6525 |
| 1.2247 | 1000 | 0.0383 | 0.0604 | 0.6639 |
| 1.3472 | 1100 | - | 0.0608 | 0.6590 |
| 1.4697 | 1200 | - | 0.0620 | 0.6504 |
| 1.5922 | 1300 | - | 0.0617 | 0.6467 |
| 1.7146 | 1400 | - | 0.0615 | 0.6574 |
| 1.8371 | 1500 | 0.0293 | 0.0622 | 0.6536 |
| 1.9596 | 1600 | - | 0.0609 | 0.6599 |
| 2.0821 | 1700 | - | 0.0605 | 0.6658 |
| 2.2045 | 1800 | - | 0.0615 | 0.6588 |
| 2.3270 | 1900 | - | 0.0615 | 0.6575 |
| 2.4495 | 2000 | 0.0215 | 0.0614 | 0.6598 |
| 2.5720 | 2100 | - | 0.0603 | 0.6681 |
| 2.6944 | 2200 | - | 0.0606 | 0.6669 |
| 2.8169 | 2300 | - | 0.0605 | 0.6642 |
| 2.9394 | 2400 | - | 0.0606 | 0.6630 |
| 3.0618 | 2500 | 0.018 | 0.0611 | 0.6616 |
| 3.1843 | 2600 | - | 0.0611 | 0.6619 |
| 3.3068 | 2700 | - | 0.0611 | 0.6608 |
| 3.4293 | 2800 | - | 0.0608 | 0.6632 |
| 3.5517 | 2900 | - | 0.0608 | 0.6623 |
| 3.6742 | 3000 | 0.014 | 0.0615 | 0.6596 |
| 3.7967 | 3100 | - | 0.0612 | 0.6616 |
| 3.9192 | 3200 | - | 0.0610 | 0.6626 |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.43.3
- PyTorch: 2.4.0+cu121
- Accelerate: 0.33.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 84f2bcc00d77236f9e89c8a360a00fb1139bf47d -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mspy/twitter-paraphrase-embeddings")
# Run inference
sentences = [
'Calum I love you plz follow me',
'CALUM PLEASE BE MY FIRST CELEBRITY TO FOLLOW ME',
'Walking around downtown Chicago in a dress and listening to the new Iggy Pop',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6949 |
| **spearman_cosine** | **0.6626** |
| pearson_manhattan | 0.6881 |
| spearman_manhattan | 0.6631 |
| pearson_euclidean | 0.688 |
| spearman_euclidean | 0.6626 |
| pearson_dot | 0.6949 |
| spearman_dot | 0.6626 |
| pearson_max | 0.6949 |
| spearman_max | 0.6631 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 13,063 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 7 tokens</li><li>mean: 11.16 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 12.31 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.33</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------|:-------------------------------------------------------------------|:-----------------|
| <code>EJ Manuel the 1st QB to go in this draft</code> | <code>But my bro from the 757 EJ Manuel is the 1st QB gone</code> | <code>1.0</code> |
| <code>EJ Manuel the 1st QB to go in this draft</code> | <code>Can believe EJ Manuel went as the 1st QB in the draft</code> | <code>1.0</code> |
| <code>EJ Manuel the 1st QB to go in this draft</code> | <code>EJ MANUEL IS THE 1ST QB what</code> | <code>0.6</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 4,727 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.04 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 12.22 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.33</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:---------------------------------------------------------------|:------------------------------------------------------------------|:-----------------|
| <code>A Walk to Remember is the definition of true love</code> | <code>A Walk to Remember is on and Im in town and Im upset</code> | <code>0.2</code> |
| <code>A Walk to Remember is the definition of true love</code> | <code>A Walk to Remember is the cutest thing</code> | <code>0.6</code> |
| <code>A Walk to Remember is the definition of true love</code> | <code>A walk to remember is on ABC family youre welcome</code> | <code>0.2</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | spearman_cosine |
|:------:|:----:|:-------------:|:------:|:---------------:|
| 0.1225 | 100 | - | 0.0729 | 0.6058 |
| 0.2449 | 200 | - | 0.0646 | 0.6340 |
| 0.3674 | 300 | - | 0.0627 | 0.6397 |
| 0.4899 | 400 | - | 0.0621 | 0.6472 |
| 0.6124 | 500 | 0.0627 | 0.0626 | 0.6496 |
| 0.7348 | 600 | - | 0.0621 | 0.6446 |
| 0.8573 | 700 | - | 0.0593 | 0.6695 |
| 0.9798 | 800 | - | 0.0636 | 0.6440 |
| 1.1023 | 900 | - | 0.0618 | 0.6525 |
| 1.2247 | 1000 | 0.0383 | 0.0604 | 0.6639 |
| 1.3472 | 1100 | - | 0.0608 | 0.6590 |
| 1.4697 | 1200 | - | 0.0620 | 0.6504 |
| 1.5922 | 1300 | - | 0.0617 | 0.6467 |
| 1.7146 | 1400 | - | 0.0615 | 0.6574 |
| 1.8371 | 1500 | 0.0293 | 0.0622 | 0.6536 |
| 1.9596 | 1600 | - | 0.0609 | 0.6599 |
| 2.0821 | 1700 | - | 0.0605 | 0.6658 |
| 2.2045 | 1800 | - | 0.0615 | 0.6588 |
| 2.3270 | 1900 | - | 0.0615 | 0.6575 |
| 2.4495 | 2000 | 0.0215 | 0.0614 | 0.6598 |
| 2.5720 | 2100 | - | 0.0603 | 0.6681 |
| 2.6944 | 2200 | - | 0.0606 | 0.6669 |
| 2.8169 | 2300 | - | 0.0605 | 0.6642 |
| 2.9394 | 2400 | - | 0.0606 | 0.6630 |
| 3.0618 | 2500 | 0.018 | 0.0611 | 0.6616 |
| 3.1843 | 2600 | - | 0.0611 | 0.6619 |
| 3.3068 | 2700 | - | 0.0611 | 0.6608 |
| 3.4293 | 2800 | - | 0.0608 | 0.6632 |
| 3.5517 | 2900 | - | 0.0608 | 0.6623 |
| 3.6742 | 3000 | 0.014 | 0.0615 | 0.6596 |
| 3.7967 | 3100 | - | 0.0612 | 0.6616 |
| 3.9192 | 3200 | - | 0.0610 | 0.6626 |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.43.3
- PyTorch: 2.4.0+cu121
- Accelerate: 0.33.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/all-mpnet-base-v2", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:13063", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "I cant wait to leave Chicago", "sentences": ["This is the shit Chicago needs to be recognized for not Keef", "is candice singing again tonight", "half time Chelsea were losing 10"]}, {"source_sentence": "Andre miller best lobbing pg in the game", "sentences": ["Am I the only one who dont get Amber alert", "Backstrom hurt in warmup Harding could start", "Andre miller is even slower in person"]}, {"source_sentence": "Bayless couldve dunked that from the free throw", "sentences": ["but what great finger roll by Bayless", "Wow Bayless has to make EspnSCTop with that end of 3rd", "i mean calum u didnt follow"]}, {"source_sentence": "Backstrom Hurt in warmups Harding gets the start", "sentences": ["Should I go to Nashville or Chicago for my 17th birthday", "I hate Chelsea possibly more than most", "Of course Backstrom would get injured during warmups"]}, {"source_sentence": "Calum I love you plz follow me", "sentences": ["CALUM PLEASE BE MY FIRST CELEBRITY TO FOLLOW ME", "Walking around downtown Chicago in a dress and listening to the new Iggy Pop", "I think Candice has what it takes to win American Idol AND Angie too"]}], "model-index": [{"name": "SentenceTransformer based on sentence-transformers/all-mpnet-base-v2", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "pearson_cosine", "value": 0.6949485250178733, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.6626359968437283, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.688092975176289, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.6630998028133662, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.6880277270034267, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.6626358741747785, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.694948520847878, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.6626359082695851, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.6949485250178733, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.6630998028133662, "name": "Spearman Max"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | 46,829 |
aroot/eng-mya-wsample.43a
|
aroot
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-06T04:06:12Z |
2023-07-06T04:28:08+00:00
| 12 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: eng-mya-wsample.43a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-wsample.43a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8306
- Bleu: 4.6779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-wsample.43a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8306
- Bleu: 4.6779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-mya-wsample.43a", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 46,830 |
cerebras/Cerebras-GPT-13B
|
cerebras
|
text-generation
|
[
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"causal-lm",
"text-generation",
"en",
"dataset:the_pile",
"arxiv:2304.03208",
"arxiv:2203.15556",
"arxiv:2101.00027",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | 2023-03-20T20:45:54Z |
2023-11-22T21:49:12+00:00
| 2,440 | 647 |
---
datasets:
- the_pile
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- pytorch
- causal-lm
inference: false
---
# Cerebras-GPT 13B
Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)!
## Model Description
The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets and demonstrate the simplicity of and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face.
The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models.
All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal.
These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism.
Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo).
## Model Details
* Developed by: [Cerebras Systems](https://www.cerebras.net/)
* License: Apache 2.0
* Model type: Transformer-based Language Model
* Architecture: GPT-3 style architecture
* Data set: The Pile
* Tokenizer: Byte Pair Encoding
* Vocabulary Size: 50257
* Sequence Length: 2048
* Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models)
* Positional Encoding: Learned
* Language: English
* Learn more: Dense Scaling Laws Paper for training procedure, config files, and details on how to use.
**Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
This is the standard parameterization version of Cerebras-GPT with **13B** parameters
Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt)
<br><br>
| Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) |
|---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------|
| Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K |
| Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K |
| Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K |
| Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M |
| Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 → 1080 | 1.47M → 2.21M |
<br><br>
## Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-13B")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-13B")
text = "Generative AI is "
```
And can be used with Hugging Face Pipelines
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
print(generated_text['generated_text'])
```
or with `model.generate()`
```python
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5,
max_new_tokens=50, early_stopping=True,
no_repeat_ngram_size=2)
text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text_output[0])
```
<br><br>
## Training data
Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther.
We tokenized the data using byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper.
Recent works find significant duplicate data present in the Pile. Eleuther’s Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set.
<br><br>
## Training procedure
We use the GPT-3 style model architecture. All of our layers use full attention as opposed to the GPT-3 style sparse banded attention. The model shapes were selected to either follow aspect ratio 80 or are the same shape as GPT-3 models. Learning rate warmed up for 375M tokens (1500 steps for 111M and 256M models) and 10x cosine decayed. No dropout was used and weight decay was set to 0.1. All models are trained with MSL of 2048.
All models were trained to Chinchilla point: 20 tokens per model parameter. Number of steps was chosen based on optimal batch size (varied by model) and fixed sequence length (2048). See Training Table, below, for details.
<br>
Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops
------------ | -------------- | ---------- | --------------- | ------ | -------------------- | -----
111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18
256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19
590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19
1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20
2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21
6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21
13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22
<br><br>
## Evaluations
We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well.
We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper.
#### 0-shot Evaluation
| Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average |
| ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ |
| Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 |
| Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 |
| Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 |
| Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 |
| Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 |
| Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 |
| Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 |
#### 5-shot Evaluation
| Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA |
| -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- |
| Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 |
| Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 |
| Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 |
| Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 |
| Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 |
| Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 |
| Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 |
<br><br>
## Uses and Limitations
### Intended Use
The primary intended use is to further research into large language models. These models can be used as a foundation model for NLP, applications, ethics, and alignment research. Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely.
You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied beore using the Cerebras-GPT model family in production downstream applications.
Due to financial and compute budgets, Cerebras-GPT models were only trained and evaluated following the approaches described in the paper.
### Out of Scope Use
Cerebras-GPT models are trained on the Pile, with English language only, and are not suitable for machine translation tasks.
Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods.
### Risk, Bias, Ethical Considerations
* **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints such as toxicity analysis, gender bias, pejorative content, racially sensitive content etc. Please refer to Pile dataset references.
* **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
* **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information.
* **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT.
<br><br>
## Acknowledgements
We are thankful to all Cerebras engineers, past and present, that made this work possible.
| null |
Non_BioNLP
|
# Cerebras-GPT 13B
Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)!
## Model Description
The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets and demonstrate the simplicity of and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face.
The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models.
All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal.
These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism.
Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo).
## Model Details
* Developed by: [Cerebras Systems](https://www.cerebras.net/)
* License: Apache 2.0
* Model type: Transformer-based Language Model
* Architecture: GPT-3 style architecture
* Data set: The Pile
* Tokenizer: Byte Pair Encoding
* Vocabulary Size: 50257
* Sequence Length: 2048
* Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models)
* Positional Encoding: Learned
* Language: English
* Learn more: Dense Scaling Laws Paper for training procedure, config files, and details on how to use.
**Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
This is the standard parameterization version of Cerebras-GPT with **13B** parameters
Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt)
<br><br>
| Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) |
|---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------|
| Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K |
| Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K |
| Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K |
| Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M |
| Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 → 1080 | 1.47M → 2.21M |
<br><br>
## Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-13B")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-13B")
text = "Generative AI is "
```
And can be used with Hugging Face Pipelines
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
print(generated_text['generated_text'])
```
or with `model.generate()`
```python
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5,
max_new_tokens=50, early_stopping=True,
no_repeat_ngram_size=2)
text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text_output[0])
```
<br><br>
## Training data
Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther.
We tokenized the data using byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper.
Recent works find significant duplicate data present in the Pile. Eleuther’s Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set.
<br><br>
## Training procedure
We use the GPT-3 style model architecture. All of our layers use full attention as opposed to the GPT-3 style sparse banded attention. The model shapes were selected to either follow aspect ratio 80 or are the same shape as GPT-3 models. Learning rate warmed up for 375M tokens (1500 steps for 111M and 256M models) and 10x cosine decayed. No dropout was used and weight decay was set to 0.1. All models are trained with MSL of 2048.
All models were trained to Chinchilla point: 20 tokens per model parameter. Number of steps was chosen based on optimal batch size (varied by model) and fixed sequence length (2048). See Training Table, below, for details.
<br>
Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops
------------ | -------------- | ---------- | --------------- | ------ | -------------------- | -----
111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18
256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19
590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19
1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20
2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21
6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21
13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22
<br><br>
## Evaluations
We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well.
We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper.
#### 0-shot Evaluation
| Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average |
| ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ |
| Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 |
| Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 |
| Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 |
| Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 |
| Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 |
| Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 |
| Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 |
#### 5-shot Evaluation
| Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA |
| -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- |
| Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 |
| Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 |
| Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 |
| Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 |
| Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 |
| Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 |
| Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 |
<br><br>
## Uses and Limitations
### Intended Use
The primary intended use is to further research into large language models. These models can be used as a foundation model for NLP, applications, ethics, and alignment research. Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely.
You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied beore using the Cerebras-GPT model family in production downstream applications.
Due to financial and compute budgets, Cerebras-GPT models were only trained and evaluated following the approaches described in the paper.
### Out of Scope Use
Cerebras-GPT models are trained on the Pile, with English language only, and are not suitable for machine translation tasks.
Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods.
### Risk, Bias, Ethical Considerations
* **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints such as toxicity analysis, gender bias, pejorative content, racially sensitive content etc. Please refer to Pile dataset references.
* **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
* **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information.
* **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT.
<br><br>
## Acknowledgements
We are thankful to all Cerebras engineers, past and present, that made this work possible.
|
{"datasets": ["the_pile"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["pytorch", "causal-lm"], "inference": false}
|
task
|
[
"TRANSLATION"
] | 46,831 |
gaudi/opus-mt-tr-en-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-17T00:17:05Z |
2024-10-18T22:51:04+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-tr-en --output_dir ./ctranslate2/opus-mt-tr-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-tr-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-tr-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-tr-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-tr-en --output_dir ./ctranslate2/opus-mt-tr-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-tr-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-tr-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-tr-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 46,832 |
Helsinki-NLP/opus-mt-af-es
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:25:22+00:00
| 96 | 0 |
---
language:
- af
- es
license: apache-2.0
tags:
- translation
---
### afr-spa
* source group: Afrikaans
* target group: Spanish
* OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.spa | 49.9 | 0.680 |
### System Info:
- hf_name: afr-spa
- source_languages: afr
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'es']
- src_constituents: {'afr'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: spa
- short_pair: af-es
- chrF2_score: 0.68
- bleu: 49.9
- brevity_penalty: 1.0
- ref_len: 2783.0
- src_name: Afrikaans
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: es
- prefer_old: False
- long_pair: afr-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| null |
Non_BioNLP
|
### afr-spa
* source group: Afrikaans
* target group: Spanish
* OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.spa | 49.9 | 0.680 |
### System Info:
- hf_name: afr-spa
- source_languages: afr
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'es']
- src_constituents: {'afr'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: spa
- short_pair: af-es
- chrF2_score: 0.68
- bleu: 49.9
- brevity_penalty: 1.0
- ref_len: 2783.0
- src_name: Afrikaans
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: es
- prefer_old: False
- long_pair: afr-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
{"language": ["af", "es"], "license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 46,833 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.