id
stringlengths 6
113
| author
stringlengths 2
36
| task_category
stringclasses 42
values | tags
listlengths 1
4.05k
| created_time
timestamp[ns, tz=UTC]date 2022-03-02 23:29:04
2025-04-10 08:38:38
| last_modified
stringdate 2020-05-14 13:13:12
2025-04-19 04:15:39
| downloads
int64 0
118M
| likes
int64 0
4.86k
| README
stringlengths 30
1.01M
| matched_bigbio_names
listlengths 1
8
⌀ | is_bionlp
stringclasses 3
values | model_cards
stringlengths 0
1M
| metadata
stringlengths 2
698k
| source
stringclasses 2
values | matched_task
listlengths 1
10
⌀ | __index_level_0__
int64 0
46.9k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
TheBloke/finance-LLM-GGUF
|
TheBloke
|
text-generation
|
[
"transformers",
"gguf",
"llama",
"finance",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:GAIR/lima",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"arxiv:2309.09530",
"base_model:AdaptLLM/finance-LLM",
"base_model:quantized:AdaptLLM/finance-LLM",
"license:other",
"region:us"
] | 2023-12-24T21:28:55Z |
2023-12-24T21:33:31+00:00
| 757 | 19 |
---
base_model: AdaptLLM/finance-LLM
datasets:
- Open-Orca/OpenOrca
- GAIR/lima
- WizardLM/WizardLM_evol_instruct_V2_196k
language:
- en
license: other
metrics:
- accuracy
model_name: Finance LLM
pipeline_tag: text-generation
tags:
- finance
inference: false
model_creator: AdaptLLM
model_type: llama
prompt_template: '[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Finance LLM - GGUF
- Model creator: [AdaptLLM](https://huggingface.co/AdaptLLM)
- Original model: [Finance LLM](https://huggingface.co/AdaptLLM/finance-LLM)
<!-- description start -->
## Description
This repo contains GGUF format model files for [AdaptLLM's Finance LLM](https://huggingface.co/AdaptLLM/finance-LLM).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/finance-LLM-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/finance-LLM-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/finance-LLM-GGUF)
* [AdaptLLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/AdaptLLM/finance-LLM)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [finance-llm.Q2_K.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [finance-llm.Q3_K_S.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [finance-llm.Q3_K_M.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [finance-llm.Q3_K_L.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [finance-llm.Q4_0.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [finance-llm.Q4_K_S.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [finance-llm.Q4_K_M.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [finance-llm.Q5_0.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [finance-llm.Q5_K_S.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [finance-llm.Q5_K_M.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [finance-llm.Q6_K.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [finance-llm.Q8_0.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/finance-LLM-GGUF and below it, a specific filename to download, such as: finance-llm.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/finance-LLM-GGUF finance-llm.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/finance-LLM-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/finance-LLM-GGUF finance-llm.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m finance-llm.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./finance-llm.Q4_K_M.gguf", # Download the model file first
n_ctx=2048, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./finance-llm.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: AdaptLLM's Finance LLM
# Adapt (Large) Language Models to Domains
This repo contains the domain-specific base model developed from **LLaMA-1-7B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗
**************************** **Updates** ****************************
* 12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/finance-LLM-13B) developed from LLaMA-1-13B.
* 12/8: Released our [chat models](https://huggingface.co/AdaptLLM/finance-chat) developed from LLaMA-2-Chat-7B.
* 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [base models](https://huggingface.co/AdaptLLM/finance-LLM) developed from LLaMA-1-7B.
## Domain-Specific LLaMA-1
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are:
<p align='center'>
<img src="https://hf.fast360.xyz/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>
### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).
## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat)
For example, to chat with the finance model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/finance-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/finance-chat", use_fast=False)
# Put your input here:
user_input = '''Use this fact to answer the question: Title of each class Trading Symbol(s) Name of each exchange on which registered
Common Stock, Par Value $.01 Per Share MMM New York Stock Exchange
MMM Chicago Stock Exchange, Inc.
1.500% Notes due 2026 MMM26 New York Stock Exchange
1.750% Notes due 2030 MMM30 New York Stock Exchange
1.500% Notes due 2031 MMM31 New York Stock Exchange
Which debt securities are registered to trade on a national securities exchange under 3M's name as of Q2 of 2023?'''
# We use the prompt template of LLaMA-2-Chat demo
prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=4096)[0]
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
```
## Domain-Specific Tasks
To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
**Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit for the specific data format required for chat models.
## Citation
If you find our work helpful, please cite us:
```bibtex
@article{adaptllm,
title = {Adapting Large Language Models via Reading Comprehension},
author = {Daixuan Cheng and Shaohan Huang and Furu Wei},
journal = {CoRR},
volume = {abs/2309.09530},
year = {2023}
}
```
<!-- original-model-card end -->
| null |
Non_BioNLP
|
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Finance LLM - GGUF
- Model creator: [AdaptLLM](https://huggingface.co/AdaptLLM)
- Original model: [Finance LLM](https://huggingface.co/AdaptLLM/finance-LLM)
<!-- description start -->
## Description
This repo contains GGUF format model files for [AdaptLLM's Finance LLM](https://huggingface.co/AdaptLLM/finance-LLM).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/finance-LLM-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/finance-LLM-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/finance-LLM-GGUF)
* [AdaptLLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/AdaptLLM/finance-LLM)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [finance-llm.Q2_K.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [finance-llm.Q3_K_S.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [finance-llm.Q3_K_M.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [finance-llm.Q3_K_L.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [finance-llm.Q4_0.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [finance-llm.Q4_K_S.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [finance-llm.Q4_K_M.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [finance-llm.Q5_0.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [finance-llm.Q5_K_S.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [finance-llm.Q5_K_M.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [finance-llm.Q6_K.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [finance-llm.Q8_0.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/finance-LLM-GGUF and below it, a specific filename to download, such as: finance-llm.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/finance-LLM-GGUF finance-llm.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/finance-LLM-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/finance-LLM-GGUF finance-llm.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m finance-llm.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./finance-llm.Q4_K_M.gguf", # Download the model file first
n_ctx=2048, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./finance-llm.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: AdaptLLM's Finance LLM
# Adapt (Large) Language Models to Domains
This repo contains the domain-specific base model developed from **LLaMA-1-7B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗
**************************** **Updates** ****************************
* 12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/finance-LLM-13B) developed from LLaMA-1-13B.
* 12/8: Released our [chat models](https://huggingface.co/AdaptLLM/finance-chat) developed from LLaMA-2-Chat-7B.
* 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [base models](https://huggingface.co/AdaptLLM/finance-LLM) developed from LLaMA-1-7B.
## Domain-Specific LLaMA-1
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are:
<p align='center'>
<img src="https://hf.fast360.xyz/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>
### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).
## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat)
For example, to chat with the finance model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/finance-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/finance-chat", use_fast=False)
# Put your input here:
user_input = '''Use this fact to answer the question: Title of each class Trading Symbol(s) Name of each exchange on which registered
Common Stock, Par Value $.01 Per Share MMM New York Stock Exchange
MMM Chicago Stock Exchange, Inc.
1.500% Notes due 2026 MMM26 New York Stock Exchange
1.750% Notes due 2030 MMM30 New York Stock Exchange
1.500% Notes due 2031 MMM31 New York Stock Exchange
Which debt securities are registered to trade on a national securities exchange under 3M's name as of Q2 of 2023?'''
# We use the prompt template of LLaMA-2-Chat demo
prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=4096)[0]
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
```
## Domain-Specific Tasks
To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
**Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit for the specific data format required for chat models.
## Citation
If you find our work helpful, please cite us:
```bibtex
@article{adaptllm,
title = {Adapting Large Language Models via Reading Comprehension},
author = {Daixuan Cheng and Shaohan Huang and Furu Wei},
journal = {CoRR},
volume = {abs/2309.09530},
year = {2023}
}
```
<!-- original-model-card end -->
|
{"base_model": "AdaptLLM/finance-LLM", "datasets": ["Open-Orca/OpenOrca", "GAIR/lima", "WizardLM/WizardLM_evol_instruct_V2_196k"], "language": ["en"], "license": "other", "metrics": ["accuracy"], "model_name": "Finance LLM", "pipeline_tag": "text-generation", "tags": ["finance"], "inference": false, "model_creator": "AdaptLLM", "model_type": "llama", "prompt_template": "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]\n", "quantized_by": "TheBloke"}
|
task
|
[
"QUESTION_ANSWERING"
] | 46,834 |
anismahmahi/G2_replace_Whata_repetition_with_noPropaganda_SetFit
|
anismahmahi
|
text-classification
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | 2024-01-07T13:33:28Z |
2024-01-07T13:33:55+00:00
| 3 | 0 |
---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Fox News, The Washington Post, NBC News, The Associated Press and the Los
Angeles Times are among the entities that have said they will file amicus briefs
on behalf of CNN.
- text: 'Tommy Robinson is in prison today because he violated a court order demanding
that he not film videos outside the trials of Muslim rape gangs.
'
- text: As I wrote during the presidential campaign, Trump has no idea of Washington
and no idea who to appoint who would support him rather than work against him.
- text: IN MAY 2013, the Washington Post’s Greg Miller reported that the head of the
CIA’s clandestine service was being shifted out of that position as a result of
“a management shake-up” by then-Director John Brennan.
- text: Columbus police are investigating the shootings.
inference: false
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.602089552238806
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 512 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.6021 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("anismahmahi/G2_replace_Whata_repetition_with_noPropaganda_SetFit")
# Run inference
preds = model("Columbus police are investigating the shootings.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 23.1093 | 129 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:--------:|:-------------:|:---------------:|
| 0.0002 | 1 | 0.3592 | - |
| 0.0121 | 50 | 0.2852 | - |
| 0.0243 | 100 | 0.2694 | - |
| 0.0364 | 150 | 0.2182 | - |
| 0.0486 | 200 | 0.2224 | - |
| 0.0607 | 250 | 0.2634 | - |
| 0.0729 | 300 | 0.2431 | - |
| 0.0850 | 350 | 0.2286 | - |
| 0.0971 | 400 | 0.197 | - |
| 0.1093 | 450 | 0.2466 | - |
| 0.1214 | 500 | 0.2374 | - |
| 0.1336 | 550 | 0.2134 | - |
| 0.1457 | 600 | 0.2092 | - |
| 0.1578 | 650 | 0.1987 | - |
| 0.1700 | 700 | 0.2288 | - |
| 0.1821 | 750 | 0.1562 | - |
| 0.1943 | 800 | 0.27 | - |
| 0.2064 | 850 | 0.1314 | - |
| 0.2186 | 900 | 0.2144 | - |
| 0.2307 | 950 | 0.184 | - |
| 0.2428 | 1000 | 0.2069 | - |
| 0.2550 | 1050 | 0.1773 | - |
| 0.2671 | 1100 | 0.0704 | - |
| 0.2793 | 1150 | 0.1139 | - |
| 0.2914 | 1200 | 0.2398 | - |
| 0.3035 | 1250 | 0.0672 | - |
| 0.3157 | 1300 | 0.1321 | - |
| 0.3278 | 1350 | 0.0803 | - |
| 0.3400 | 1400 | 0.0589 | - |
| 0.3521 | 1450 | 0.0428 | - |
| 0.3643 | 1500 | 0.0886 | - |
| 0.3764 | 1550 | 0.0839 | - |
| 0.3885 | 1600 | 0.1843 | - |
| 0.4007 | 1650 | 0.0375 | - |
| 0.4128 | 1700 | 0.114 | - |
| 0.4250 | 1750 | 0.1264 | - |
| 0.4371 | 1800 | 0.0585 | - |
| 0.4492 | 1850 | 0.0586 | - |
| 0.4614 | 1900 | 0.0805 | - |
| 0.4735 | 1950 | 0.0686 | - |
| 0.4857 | 2000 | 0.0684 | - |
| 0.4978 | 2050 | 0.0803 | - |
| 0.5100 | 2100 | 0.076 | - |
| 0.5221 | 2150 | 0.0888 | - |
| 0.5342 | 2200 | 0.1091 | - |
| 0.5464 | 2250 | 0.038 | - |
| 0.5585 | 2300 | 0.0674 | - |
| 0.5707 | 2350 | 0.0562 | - |
| 0.5828 | 2400 | 0.0603 | - |
| 0.5949 | 2450 | 0.0669 | - |
| 0.6071 | 2500 | 0.0829 | - |
| 0.6192 | 2550 | 0.1442 | - |
| 0.6314 | 2600 | 0.0914 | - |
| 0.6435 | 2650 | 0.0357 | - |
| 0.6557 | 2700 | 0.0546 | - |
| 0.6678 | 2750 | 0.0748 | - |
| 0.6799 | 2800 | 0.0149 | - |
| 0.6921 | 2850 | 0.1067 | - |
| 0.7042 | 2900 | 0.0054 | - |
| 0.7164 | 2950 | 0.0878 | - |
| 0.7285 | 3000 | 0.0385 | - |
| 0.7407 | 3050 | 0.036 | - |
| 0.7528 | 3100 | 0.0902 | - |
| 0.7649 | 3150 | 0.0734 | - |
| 0.7771 | 3200 | 0.0369 | - |
| 0.7892 | 3250 | 0.0031 | - |
| 0.8014 | 3300 | 0.0113 | - |
| 0.8135 | 3350 | 0.0862 | - |
| 0.8256 | 3400 | 0.0549 | - |
| 0.8378 | 3450 | 0.0104 | - |
| 0.8499 | 3500 | 0.0072 | - |
| 0.8621 | 3550 | 0.0546 | - |
| 0.8742 | 3600 | 0.0579 | - |
| 0.8864 | 3650 | 0.0789 | - |
| 0.8985 | 3700 | 0.0711 | - |
| 0.9106 | 3750 | 0.0361 | - |
| 0.9228 | 3800 | 0.0292 | - |
| 0.9349 | 3850 | 0.0121 | - |
| 0.9471 | 3900 | 0.0066 | - |
| 0.9592 | 3950 | 0.0091 | - |
| 0.9713 | 4000 | 0.0027 | - |
| 0.9835 | 4050 | 0.0891 | - |
| 0.9956 | 4100 | 0.0186 | - |
| **1.0** | **4118** | **-** | **0.2746** |
| 1.0078 | 4150 | 0.0246 | - |
| 1.0199 | 4200 | 0.0154 | - |
| 1.0321 | 4250 | 0.0056 | - |
| 1.0442 | 4300 | 0.0343 | - |
| 1.0563 | 4350 | 0.0375 | - |
| 1.0685 | 4400 | 0.0106 | - |
| 1.0806 | 4450 | 0.0025 | - |
| 1.0928 | 4500 | 0.0425 | - |
| 1.1049 | 4550 | 0.0019 | - |
| 1.1170 | 4600 | 0.0014 | - |
| 1.1292 | 4650 | 0.0883 | - |
| 1.1413 | 4700 | 0.0176 | - |
| 1.1535 | 4750 | 0.0204 | - |
| 1.1656 | 4800 | 0.0011 | - |
| 1.1778 | 4850 | 0.005 | - |
| 1.1899 | 4900 | 0.0238 | - |
| 1.2020 | 4950 | 0.0362 | - |
| 1.2142 | 5000 | 0.0219 | - |
| 1.2263 | 5050 | 0.0487 | - |
| 1.2385 | 5100 | 0.0609 | - |
| 1.2506 | 5150 | 0.0464 | - |
| 1.2627 | 5200 | 0.0033 | - |
| 1.2749 | 5250 | 0.0087 | - |
| 1.2870 | 5300 | 0.0101 | - |
| 1.2992 | 5350 | 0.0529 | - |
| 1.3113 | 5400 | 0.0243 | - |
| 1.3235 | 5450 | 0.001 | - |
| 1.3356 | 5500 | 0.0102 | - |
| 1.3477 | 5550 | 0.0047 | - |
| 1.3599 | 5600 | 0.0034 | - |
| 1.3720 | 5650 | 0.0118 | - |
| 1.3842 | 5700 | 0.0742 | - |
| 1.3963 | 5750 | 0.0538 | - |
| 1.4085 | 5800 | 0.0162 | - |
| 1.4206 | 5850 | 0.0079 | - |
| 1.4327 | 5900 | 0.0027 | - |
| 1.4449 | 5950 | 0.0035 | - |
| 1.4570 | 6000 | 0.0581 | - |
| 1.4692 | 6050 | 0.0813 | - |
| 1.4813 | 6100 | 0.0339 | - |
| 1.4934 | 6150 | 0.0312 | - |
| 1.5056 | 6200 | 0.0323 | - |
| 1.5177 | 6250 | 0.0521 | - |
| 1.5299 | 6300 | 0.0016 | - |
| 1.5420 | 6350 | 0.0009 | - |
| 1.5542 | 6400 | 0.0967 | - |
| 1.5663 | 6450 | 0.0009 | - |
| 1.5784 | 6500 | 0.031 | - |
| 1.5906 | 6550 | 0.0114 | - |
| 1.6027 | 6600 | 0.0599 | - |
| 1.6149 | 6650 | 0.0416 | - |
| 1.6270 | 6700 | 0.0047 | - |
| 1.6391 | 6750 | 0.0234 | - |
| 1.6513 | 6800 | 0.0609 | - |
| 1.6634 | 6850 | 0.022 | - |
| 1.6756 | 6900 | 0.0042 | - |
| 1.6877 | 6950 | 0.0336 | - |
| 1.6999 | 7000 | 0.0592 | - |
| 1.7120 | 7050 | 0.0536 | - |
| 1.7241 | 7100 | 0.1198 | - |
| 1.7363 | 7150 | 0.1035 | - |
| 1.7484 | 7200 | 0.0549 | - |
| 1.7606 | 7250 | 0.027 | - |
| 1.7727 | 7300 | 0.0251 | - |
| 1.7848 | 7350 | 0.0225 | - |
| 1.7970 | 7400 | 0.0027 | - |
| 1.8091 | 7450 | 0.0309 | - |
| 1.8213 | 7500 | 0.024 | - |
| 1.8334 | 7550 | 0.0355 | - |
| 1.8456 | 7600 | 0.0239 | - |
| 1.8577 | 7650 | 0.0377 | - |
| 1.8698 | 7700 | 0.012 | - |
| 1.8820 | 7750 | 0.0233 | - |
| 1.8941 | 7800 | 0.0184 | - |
| 1.9063 | 7850 | 0.0022 | - |
| 1.9184 | 7900 | 0.0043 | - |
| 1.9305 | 7950 | 0.014 | - |
| 1.9427 | 8000 | 0.0083 | - |
| 1.9548 | 8050 | 0.0084 | - |
| 1.9670 | 8100 | 0.0009 | - |
| 1.9791 | 8150 | 0.002 | - |
| 1.9913 | 8200 | 0.0002 | - |
| 2.0 | 8236 | - | 0.2768 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 512 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.6021 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("anismahmahi/G2_replace_Whata_repetition_with_noPropaganda_SetFit")
# Run inference
preds = model("Columbus police are investigating the shootings.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 23.1093 | 129 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:--------:|:-------------:|:---------------:|
| 0.0002 | 1 | 0.3592 | - |
| 0.0121 | 50 | 0.2852 | - |
| 0.0243 | 100 | 0.2694 | - |
| 0.0364 | 150 | 0.2182 | - |
| 0.0486 | 200 | 0.2224 | - |
| 0.0607 | 250 | 0.2634 | - |
| 0.0729 | 300 | 0.2431 | - |
| 0.0850 | 350 | 0.2286 | - |
| 0.0971 | 400 | 0.197 | - |
| 0.1093 | 450 | 0.2466 | - |
| 0.1214 | 500 | 0.2374 | - |
| 0.1336 | 550 | 0.2134 | - |
| 0.1457 | 600 | 0.2092 | - |
| 0.1578 | 650 | 0.1987 | - |
| 0.1700 | 700 | 0.2288 | - |
| 0.1821 | 750 | 0.1562 | - |
| 0.1943 | 800 | 0.27 | - |
| 0.2064 | 850 | 0.1314 | - |
| 0.2186 | 900 | 0.2144 | - |
| 0.2307 | 950 | 0.184 | - |
| 0.2428 | 1000 | 0.2069 | - |
| 0.2550 | 1050 | 0.1773 | - |
| 0.2671 | 1100 | 0.0704 | - |
| 0.2793 | 1150 | 0.1139 | - |
| 0.2914 | 1200 | 0.2398 | - |
| 0.3035 | 1250 | 0.0672 | - |
| 0.3157 | 1300 | 0.1321 | - |
| 0.3278 | 1350 | 0.0803 | - |
| 0.3400 | 1400 | 0.0589 | - |
| 0.3521 | 1450 | 0.0428 | - |
| 0.3643 | 1500 | 0.0886 | - |
| 0.3764 | 1550 | 0.0839 | - |
| 0.3885 | 1600 | 0.1843 | - |
| 0.4007 | 1650 | 0.0375 | - |
| 0.4128 | 1700 | 0.114 | - |
| 0.4250 | 1750 | 0.1264 | - |
| 0.4371 | 1800 | 0.0585 | - |
| 0.4492 | 1850 | 0.0586 | - |
| 0.4614 | 1900 | 0.0805 | - |
| 0.4735 | 1950 | 0.0686 | - |
| 0.4857 | 2000 | 0.0684 | - |
| 0.4978 | 2050 | 0.0803 | - |
| 0.5100 | 2100 | 0.076 | - |
| 0.5221 | 2150 | 0.0888 | - |
| 0.5342 | 2200 | 0.1091 | - |
| 0.5464 | 2250 | 0.038 | - |
| 0.5585 | 2300 | 0.0674 | - |
| 0.5707 | 2350 | 0.0562 | - |
| 0.5828 | 2400 | 0.0603 | - |
| 0.5949 | 2450 | 0.0669 | - |
| 0.6071 | 2500 | 0.0829 | - |
| 0.6192 | 2550 | 0.1442 | - |
| 0.6314 | 2600 | 0.0914 | - |
| 0.6435 | 2650 | 0.0357 | - |
| 0.6557 | 2700 | 0.0546 | - |
| 0.6678 | 2750 | 0.0748 | - |
| 0.6799 | 2800 | 0.0149 | - |
| 0.6921 | 2850 | 0.1067 | - |
| 0.7042 | 2900 | 0.0054 | - |
| 0.7164 | 2950 | 0.0878 | - |
| 0.7285 | 3000 | 0.0385 | - |
| 0.7407 | 3050 | 0.036 | - |
| 0.7528 | 3100 | 0.0902 | - |
| 0.7649 | 3150 | 0.0734 | - |
| 0.7771 | 3200 | 0.0369 | - |
| 0.7892 | 3250 | 0.0031 | - |
| 0.8014 | 3300 | 0.0113 | - |
| 0.8135 | 3350 | 0.0862 | - |
| 0.8256 | 3400 | 0.0549 | - |
| 0.8378 | 3450 | 0.0104 | - |
| 0.8499 | 3500 | 0.0072 | - |
| 0.8621 | 3550 | 0.0546 | - |
| 0.8742 | 3600 | 0.0579 | - |
| 0.8864 | 3650 | 0.0789 | - |
| 0.8985 | 3700 | 0.0711 | - |
| 0.9106 | 3750 | 0.0361 | - |
| 0.9228 | 3800 | 0.0292 | - |
| 0.9349 | 3850 | 0.0121 | - |
| 0.9471 | 3900 | 0.0066 | - |
| 0.9592 | 3950 | 0.0091 | - |
| 0.9713 | 4000 | 0.0027 | - |
| 0.9835 | 4050 | 0.0891 | - |
| 0.9956 | 4100 | 0.0186 | - |
| **1.0** | **4118** | **-** | **0.2746** |
| 1.0078 | 4150 | 0.0246 | - |
| 1.0199 | 4200 | 0.0154 | - |
| 1.0321 | 4250 | 0.0056 | - |
| 1.0442 | 4300 | 0.0343 | - |
| 1.0563 | 4350 | 0.0375 | - |
| 1.0685 | 4400 | 0.0106 | - |
| 1.0806 | 4450 | 0.0025 | - |
| 1.0928 | 4500 | 0.0425 | - |
| 1.1049 | 4550 | 0.0019 | - |
| 1.1170 | 4600 | 0.0014 | - |
| 1.1292 | 4650 | 0.0883 | - |
| 1.1413 | 4700 | 0.0176 | - |
| 1.1535 | 4750 | 0.0204 | - |
| 1.1656 | 4800 | 0.0011 | - |
| 1.1778 | 4850 | 0.005 | - |
| 1.1899 | 4900 | 0.0238 | - |
| 1.2020 | 4950 | 0.0362 | - |
| 1.2142 | 5000 | 0.0219 | - |
| 1.2263 | 5050 | 0.0487 | - |
| 1.2385 | 5100 | 0.0609 | - |
| 1.2506 | 5150 | 0.0464 | - |
| 1.2627 | 5200 | 0.0033 | - |
| 1.2749 | 5250 | 0.0087 | - |
| 1.2870 | 5300 | 0.0101 | - |
| 1.2992 | 5350 | 0.0529 | - |
| 1.3113 | 5400 | 0.0243 | - |
| 1.3235 | 5450 | 0.001 | - |
| 1.3356 | 5500 | 0.0102 | - |
| 1.3477 | 5550 | 0.0047 | - |
| 1.3599 | 5600 | 0.0034 | - |
| 1.3720 | 5650 | 0.0118 | - |
| 1.3842 | 5700 | 0.0742 | - |
| 1.3963 | 5750 | 0.0538 | - |
| 1.4085 | 5800 | 0.0162 | - |
| 1.4206 | 5850 | 0.0079 | - |
| 1.4327 | 5900 | 0.0027 | - |
| 1.4449 | 5950 | 0.0035 | - |
| 1.4570 | 6000 | 0.0581 | - |
| 1.4692 | 6050 | 0.0813 | - |
| 1.4813 | 6100 | 0.0339 | - |
| 1.4934 | 6150 | 0.0312 | - |
| 1.5056 | 6200 | 0.0323 | - |
| 1.5177 | 6250 | 0.0521 | - |
| 1.5299 | 6300 | 0.0016 | - |
| 1.5420 | 6350 | 0.0009 | - |
| 1.5542 | 6400 | 0.0967 | - |
| 1.5663 | 6450 | 0.0009 | - |
| 1.5784 | 6500 | 0.031 | - |
| 1.5906 | 6550 | 0.0114 | - |
| 1.6027 | 6600 | 0.0599 | - |
| 1.6149 | 6650 | 0.0416 | - |
| 1.6270 | 6700 | 0.0047 | - |
| 1.6391 | 6750 | 0.0234 | - |
| 1.6513 | 6800 | 0.0609 | - |
| 1.6634 | 6850 | 0.022 | - |
| 1.6756 | 6900 | 0.0042 | - |
| 1.6877 | 6950 | 0.0336 | - |
| 1.6999 | 7000 | 0.0592 | - |
| 1.7120 | 7050 | 0.0536 | - |
| 1.7241 | 7100 | 0.1198 | - |
| 1.7363 | 7150 | 0.1035 | - |
| 1.7484 | 7200 | 0.0549 | - |
| 1.7606 | 7250 | 0.027 | - |
| 1.7727 | 7300 | 0.0251 | - |
| 1.7848 | 7350 | 0.0225 | - |
| 1.7970 | 7400 | 0.0027 | - |
| 1.8091 | 7450 | 0.0309 | - |
| 1.8213 | 7500 | 0.024 | - |
| 1.8334 | 7550 | 0.0355 | - |
| 1.8456 | 7600 | 0.0239 | - |
| 1.8577 | 7650 | 0.0377 | - |
| 1.8698 | 7700 | 0.012 | - |
| 1.8820 | 7750 | 0.0233 | - |
| 1.8941 | 7800 | 0.0184 | - |
| 1.9063 | 7850 | 0.0022 | - |
| 1.9184 | 7900 | 0.0043 | - |
| 1.9305 | 7950 | 0.014 | - |
| 1.9427 | 8000 | 0.0083 | - |
| 1.9548 | 8050 | 0.0084 | - |
| 1.9670 | 8100 | 0.0009 | - |
| 1.9791 | 8150 | 0.002 | - |
| 1.9913 | 8200 | 0.0002 | - |
| 2.0 | 8236 | - | 0.2768 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Fox News, The Washington Post, NBC News, The Associated Press and the Los Angeles Times are among the entities that have said they will file amicus briefs on behalf of CNN."}, {"text": "Tommy Robinson is in prison today because he violated a court order demanding that he not film videos outside the trials of Muslim rape gangs.\n"}, {"text": "As I wrote during the presidential campaign, Trump has no idea of Washington and no idea who to appoint who would support him rather than work against him."}, {"text": "IN MAY 2013, the Washington Post’s Greg Miller reported that the head of the CIA’s clandestine service was being shifted out of that position as a result of “a management shake-up” by then-Director John Brennan."}, {"text": "Columbus police are investigating the shootings."}], "inference": false, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.602089552238806, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,835 |
MultiBertGunjanPatrick/multiberts-seed-4-100k
|
MultiBertGunjanPatrick
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-10-04T05:10:05+00:00
| 111 | 0 |
---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
- multiberts
- multiberts-seed-4
---
# MultiBERTs Seed 4 Checkpoint 100k (uncased)
Seed 4 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-100k')
model = BertModel.from_pretrained("multiberts-seed-4-100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| null |
Non_BioNLP
|
# MultiBERTs Seed 4 Checkpoint 100k (uncased)
Seed 4 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-100k')
model = BertModel.from_pretrained("multiberts-seed-4-100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-4"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 46,836 |
DOSaAI/albanian-gpt2-large-120m-instruct-v0.1
|
DOSaAI
|
text-generation
|
[
"transformers",
"text-generation",
"sq",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-03-31T19:27:33Z |
2024-03-31T19:29:56+00:00
| 0 | 1 |
---
language:
- sq
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Albanian GPT-2
## Model Description
This model is a fine-tuned version of the GPT-2 model by [OpenAI](https://openai.com/) for Albanian text generation tasks. GPT-2 is a state-of-the-art natural language processing model developed by OpenAI. It is a variant of the GPT (Generative Pre-trained Transformer) model, pre-trained on a large corpus of English text data. This fine-tuned version has been trained on a custom dataset of Albanian text data and can generate coherent and contextually relevant text in Albanian.
## Intended Use
The model is intended for text generation tasks in Albanian and English. It can be used for various natural language processing tasks such as text completion, text summarization, dialogue generation, and more. It is particularly useful for generating creative and contextually relevant text in both Albanian and English.
## Training Data
The model has been fine-tuned on a custom dataset consisting of Albanian text data. The dataset used for fine-tuning includes a diverse range of text sources in Albanian to ensure the model's proficiency in generating high-quality text across different domains.
## Limitations and Biases
As with any machine learning model, this model may exhibit biases present in the training data. Additionally, while the model performs well on a wide range of text generation tasks in Albanian and English, it may not always produce contextually appropriate or grammatically correct output. Users should review and evaluate the generated text to ensure it meets their quality standards.
## Acknowledgments
- This model is based on the GPT-2 architecture developed by OpenAI.
- The fine-tuning process for this model was facilitated by the Hugging Face Transformers library.
## Contact Information
For any questions, feedback, or inquiries related to the model, please contact the model developer:
- Name: DOSaAI
- Email: [email protected]
| null |
Non_BioNLP
|
# Albanian GPT-2
## Model Description
This model is a fine-tuned version of the GPT-2 model by [OpenAI](https://openai.com/) for Albanian text generation tasks. GPT-2 is a state-of-the-art natural language processing model developed by OpenAI. It is a variant of the GPT (Generative Pre-trained Transformer) model, pre-trained on a large corpus of English text data. This fine-tuned version has been trained on a custom dataset of Albanian text data and can generate coherent and contextually relevant text in Albanian.
## Intended Use
The model is intended for text generation tasks in Albanian and English. It can be used for various natural language processing tasks such as text completion, text summarization, dialogue generation, and more. It is particularly useful for generating creative and contextually relevant text in both Albanian and English.
## Training Data
The model has been fine-tuned on a custom dataset consisting of Albanian text data. The dataset used for fine-tuning includes a diverse range of text sources in Albanian to ensure the model's proficiency in generating high-quality text across different domains.
## Limitations and Biases
As with any machine learning model, this model may exhibit biases present in the training data. Additionally, while the model performs well on a wide range of text generation tasks in Albanian and English, it may not always produce contextually appropriate or grammatically correct output. Users should review and evaluate the generated text to ensure it meets their quality standards.
## Acknowledgments
- This model is based on the GPT-2 architecture developed by OpenAI.
- The fine-tuning process for this model was facilitated by the Hugging Face Transformers library.
## Contact Information
For any questions, feedback, or inquiries related to the model, please contact the model developer:
- Name: DOSaAI
- Email: [email protected]
|
{"language": ["sq", "en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation"}
|
task
|
[
"SUMMARIZATION"
] | 46,837 |
Lvxue/distilled-mt5-small-1-0.5
|
Lvxue
|
text2text-generation
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-12T02:06:37Z |
2022-08-12T03:22:00+00:00
| 11 | 0 |
---
datasets:
- wmt16
language:
- en
- ro
license: apache-2.0
metrics:
- bleu
tags:
- generated_from_trainer
model-index:
- name: distilled-mt5-small-1-0.5
results:
- task:
type: translation
name: Translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- type: bleu
value: 5.3917
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-1-0.5
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8410
- Bleu: 5.3917
- Gen Len: 40.6103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-1-0.5
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8410
- Bleu: 5.3917
- Gen Len: 40.6103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
{"datasets": ["wmt16"], "language": ["en", "ro"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilled-mt5-small-1-0.5", "results": [{"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "wmt16 ro-en", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 5.3917, "name": "Bleu"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 46,838 |
aroot/wsample.49
|
aroot
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-04T23:03:25Z |
2023-07-05T00:41:23+00:00
| 8 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: wsample.49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsample.49
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2266
- Bleu: 3.0080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsample.49
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2266
- Bleu: 3.0080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "wsample.49", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 46,839 |
ronaldseoh/long-t5-local-base
|
ronaldseoh
| null |
[
"pytorch",
"jax",
"longt5",
"en",
"arxiv:2112.07916",
"arxiv:1912.08777",
"arxiv:1910.10683",
"license:apache-2.0",
"region:us"
] | 2024-09-20T02:08:58Z |
2023-01-24T17:08:34+00:00
| 9 | 0 |
---
language: en
license: apache-2.0
---
# LongT5 (local attention, base-sized model)
LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x).
Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). LongT5 model is an extension of [T5 model](https://arxiv.org/pdf/1910.10683.pdf), and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. The usage of attention sparsity patterns allows the model to efficiently handle input sequence.
LongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens).
## Intended uses & limitations
The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you.
### How to use
```python
from transformers import AutoTokenizer, LongT5Model
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base")
model = LongT5Model.from_pretrained("google/long-t5-local-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
```bibtex
@article{guo2021longt5,
title={LongT5: Efficient Text-To-Text Transformer for Long Sequences},
author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei},
journal={arXiv preprint arXiv:2112.07916},
year={2021}
}
```
| null |
Non_BioNLP
|
# LongT5 (local attention, base-sized model)
LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x).
Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). LongT5 model is an extension of [T5 model](https://arxiv.org/pdf/1910.10683.pdf), and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. The usage of attention sparsity patterns allows the model to efficiently handle input sequence.
LongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens).
## Intended uses & limitations
The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you.
### How to use
```python
from transformers import AutoTokenizer, LongT5Model
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base")
model = LongT5Model.from_pretrained("google/long-t5-local-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
```bibtex
@article{guo2021longt5,
title={LongT5: Efficient Text-To-Text Transformer for Long Sequences},
author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei},
journal={arXiv preprint arXiv:2112.07916},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 46,840 |
marbogusz/bert-multi-cased-squad_sv
|
marbogusz
|
question-answering
|
[
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-05-19T23:00:13+00:00
| 103 | 0 |
---
{}
---
Swedish bert multilingual model trained on a machine translated (MS neural translation) SQUAD 1.1 dataset
| null |
Non_BioNLP
|
Swedish bert multilingual model trained on a machine translated (MS neural translation) SQUAD 1.1 dataset
|
{}
|
task
|
[
"TRANSLATION"
] | 46,841 |
maastrichtlawtech/wizardlm-7b-v1.0-lleqa
|
maastrichtlawtech
|
text-generation
|
[
"peft",
"legal",
"text-generation",
"fr",
"dataset:maastrichtlawtech/lleqa",
"arxiv:2309.17050",
"license:apache-2.0",
"region:us"
] | 2023-09-28T16:04:51Z |
2023-10-03T09:44:44+00:00
| 4 | 3 |
---
datasets:
- maastrichtlawtech/lleqa
language:
- fr
library_name: peft
license: apache-2.0
metrics:
- rouge
- meteor
pipeline_tag: text-generation
tags:
- legal
inference: false
---
# wizardLM-7b-v1.0-lleqa
This is a [wizardlm-7b-v1.0](https://huggingface.co/WizardLM/WizardLM-7B-V1.0) model fine-tuned with [QLoRA](https://github.com/artidoro/qlora) for long-form legal question answering in **French**.
## Usage
```python
[...]
```
## Training
#### Data
We use the [Long-form Legal Question Answering (LLeQA)](https://huggingface.co/datasets/maastrichtlawtech/lleqa) dataset to fine-tune the model. LLeQA is a French native dataset for studying legal information retrieval and question answering. It consists of a knowledge corpus of 27,941 statutory articles collected from the Belgian legislation, and 1,868 legal questions posed by Belgian citizens and labeled by experienced jurists with a comprehensive answer rooted in relevant articles from the corpus.
#### Hyperparameters
We fine-tune the model through 4-bit QLoRA finetuning with an effective batch size of 8 for 10 epochs (i.e., 1.1K steps) using paged AdamW optimizer with default momentum parameters and constant learning rate schedule of 2e-4. We employ NormalFloat4 with double quantization for the base models and add LoRA adapters on all linear layers by setting r=16, alpha=32 while utilizing float16 as computation datatype. Additionally, we perform NTK-aware scaling of RoPE to extend the context window to 4096 tokens. Training takes around 7.5 hours to complete on a single Tesla V100 GPU with 32GBs of memory. More details can be found in this [paper](https://arxiv.org/abs/) and [repository](https://github.com/maastrichtlawtech/lleqa).
## Citation
```bibtex
@article{louis2023interpretable,
author = {Louis, Antoine and van Dijck, Gijs and Spanakis, Gerasimos},
title = {Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models},
journal = {CoRR},
volume = {abs/2309.17050},
year = {2023},
url = {https://arxiv.org/abs/2309.17050},
eprinttype = {arXiv},
eprint = {2309.17050},
}
```
| null |
Non_BioNLP
|
# wizardLM-7b-v1.0-lleqa
This is a [wizardlm-7b-v1.0](https://huggingface.co/WizardLM/WizardLM-7B-V1.0) model fine-tuned with [QLoRA](https://github.com/artidoro/qlora) for long-form legal question answering in **French**.
## Usage
```python
[...]
```
## Training
#### Data
We use the [Long-form Legal Question Answering (LLeQA)](https://huggingface.co/datasets/maastrichtlawtech/lleqa) dataset to fine-tune the model. LLeQA is a French native dataset for studying legal information retrieval and question answering. It consists of a knowledge corpus of 27,941 statutory articles collected from the Belgian legislation, and 1,868 legal questions posed by Belgian citizens and labeled by experienced jurists with a comprehensive answer rooted in relevant articles from the corpus.
#### Hyperparameters
We fine-tune the model through 4-bit QLoRA finetuning with an effective batch size of 8 for 10 epochs (i.e., 1.1K steps) using paged AdamW optimizer with default momentum parameters and constant learning rate schedule of 2e-4. We employ NormalFloat4 with double quantization for the base models and add LoRA adapters on all linear layers by setting r=16, alpha=32 while utilizing float16 as computation datatype. Additionally, we perform NTK-aware scaling of RoPE to extend the context window to 4096 tokens. Training takes around 7.5 hours to complete on a single Tesla V100 GPU with 32GBs of memory. More details can be found in this [paper](https://arxiv.org/abs/) and [repository](https://github.com/maastrichtlawtech/lleqa).
## Citation
```bibtex
@article{louis2023interpretable,
author = {Louis, Antoine and van Dijck, Gijs and Spanakis, Gerasimos},
title = {Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models},
journal = {CoRR},
volume = {abs/2309.17050},
year = {2023},
url = {https://arxiv.org/abs/2309.17050},
eprinttype = {arXiv},
eprint = {2309.17050},
}
```
|
{"datasets": ["maastrichtlawtech/lleqa"], "language": ["fr"], "library_name": "peft", "license": "apache-2.0", "metrics": ["rouge", "meteor"], "pipeline_tag": "text-generation", "tags": ["legal"], "inference": false}
|
task
|
[
"QUESTION_ANSWERING"
] | 46,842 |
tmnam20/mdeberta-v3-base-vsfc-1
|
tmnam20
|
text-classification
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-16T08:44:54Z |
2024-01-16T08:47:32+00:00
| 4 | 0 |
---
base_model: microsoft/mdeberta-v3-base
datasets:
- tmnam20/VieGLUE
language:
- en
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: mdeberta-v3-base-vsfc-1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tmnam20/VieGLUE/VSFC
type: tmnam20/VieGLUE
config: vsfc
split: validation
args: vsfc
metrics:
- type: accuracy
value: 0.950726468730259
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vsfc-1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VSFC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2229
- Accuracy: 0.9507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1695 | 1.4 | 500 | 0.2297 | 0.9425 |
| 0.1095 | 2.79 | 1000 | 0.2185 | 0.9482 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vsfc-1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VSFC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2229
- Accuracy: 0.9507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1695 | 1.4 | 500 | 0.2297 | 0.9425 |
| 0.1095 | 2.79 | 1000 | 0.2185 | 0.9482 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
{"base_model": "microsoft/mdeberta-v3-base", "datasets": ["tmnam20/VieGLUE"], "language": ["en"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mdeberta-v3-base-vsfc-1", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tmnam20/VieGLUE/VSFC", "type": "tmnam20/VieGLUE", "config": "vsfc", "split": "validation", "args": "vsfc"}, "metrics": [{"type": "accuracy", "value": 0.950726468730259, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,843 |
Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF
|
Triangle104
|
text-generation
|
[
"transformers",
"gguf",
"language",
"granite-3.2",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:ibm-granite/granite-3.2-2b-instruct",
"base_model:quantized:ibm-granite/granite-3.2-2b-instruct",
"license:apache-2.0",
"region:us",
"conversational"
] | 2025-02-28T13:19:41Z |
2025-02-28T13:21:09+00:00
| 18 | 0 |
---
base_model: ibm-granite/granite-3.2-2b-instruct
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- language
- granite-3.2
- llama-cpp
- gguf-my-repo
inference: false
---
# Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3.2-2b-instruct`](https://huggingface.co/ibm-granite/granite-3.2-2b-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.2-2b-instruct) for more details on the model.
---
Model Summary:
-
Granite-3.2-2B-Instruct is an 2-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of Granite-3.1-2B-Instruct,
it has been trained using a mix of permissively licensed open-source
datasets and internally generated synthetic data designed for reasoning
tasks. The model allows controllability of its thinking capability,
ensuring it is applied only when required.
Developers: Granite Team, IBM
Website: Granite Docs
Release Date: February 26th, 2025
License: Apache 2.0
Supported Languages:
-
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech,
Italian, Korean, Dutch, and Chinese. However, users may finetune this
Granite model for languages beyond these 12 languages.
Intended Use:
-
This model is designed to handle general instruction-following tasks and
can be integrated into AI assistants across various domains, including
business applications.
Capabilities
-
Thinking
Summarization
Text classification
Text extraction
Question-answering
Retrieval Augmented Generation (RAG)
Code related tasks
Function-calling tasks
Multilingual dialog use cases
Long-context tasks including long document/meeting summarization, long document QA, etc.
Generation:
-
This is a simple example of how to use Granite-3.2-2B-Instruct model.
Install the following libraries:
-
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
Then, copy the snippet from the section that is relevant for your use case.
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch
model_path="ibm-granite/granite-3.2-2b-instruct"
device="cuda"
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map=device,
torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
model_path
)
conv = [{"role": "user", "content":"You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}]
input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)
set_seed(42)
output = model.generate(
**input_ids,
max_new_tokens=8192,
)
prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
Example Outputs
thinking=True
Here is my thought process:
1. First, let's understand the composition of the initial 10 liters of 30% acid solution:
- Acid content = 30% of 10 liters = 0.30 * 10 = 3 liters
2. We want to end up with a 50% acid solution. Let's denote the volume of the 70% acid solution to be added as 'x' liters.
3. The amount of acid in the final mixture (50% solution) should equal the initial acid content plus the acid from the added 70% solution.
- Total acid in final mixture = 0.50 * (10 + x)
4. The equation representing the total acid content is:
3 (from initial solution) + 0.70 * x (from added solution) = 0.50 * (10 + x)
5. Now, we solve for 'x':
3 + 0.70x = 5 + 0.50x
0.70x - 0.50x = 5 - 3
0.20x = 2
x = 2 / 0.20
x = 10 liters
6. Therefore, 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution to achieve a 50% acid mixture.
Here is my response:
To solve this problem, we need to determine how much of a 70% acid solution must be added to a 10-liter solution that is already 30% acid to create a new mixture that is 50% acid. Here's how to calculate it:
1. **Initial Acid Content**:
- The initial 10-liter solution contains 3 liters of acid (since 30% of 10 liters).
2. **Desired Final Acid Content**:
- We want the final mixture to be 50% acid.
3. **Equation Setup**:
- Let 'x' be the volume (in liters) of the 70% acid solution to be added.
- The equation representing the total acid content in the final mixture (which is 10 liters from the initial solution plus 'x' liters of the added solution) should equal 50% of the total volume (10 + x).
- This gives us: 3 (acid from initial solution) + 0.70x (acid from added solution) = 0.50(10 + x)
4. **Solving the Equation**:
- First, simplify the right side of the equation: 0.50(10 + x) = 5 + 0.50x
- Now, the equation is: 3 + 0.70x = 5 + 0.50x
- Subtract 0.50x from both sides: 0.20x = 2
- Divide both sides by 0.20: x = 2 / 0.20
- x = 10 liters
5. **Conclusion**:
- To achieve a 50% acid mixture from the initial 30% solution, you need to add 10 liters of a 70% acid solution.
So, the final answer is that 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution.
thinking=False
To solve this problem, we'll use the concept of volume and concentration. Let's denote the amount of 70% acid solution we need to add as "x" liters.
First, let's find out how much acid is in the initial 10-liter solution:
Acid in initial solution = 30% of 10 liters = 0.30 * 10 = 3 liters
Now, we want to end up with a 50% acid solution in a total volume of (10 + x) liters. Let's denote the final volume as V.
Final acid concentration = 50%
Final acid amount = 50% of V = 0.50 * V
We know the initial acid amount and the final acid amount, so we can set up an equation:
Initial acid amount + Acid from added solution = Final acid amount
3 liters + (70% of x) = 0.50 * (10 + x)
Now, let's solve for x:
0.70x + 3 = 0.50 * 10 + 0.50x
0.70x - 0.50x = 0.50 * 10 - 3
0.20x = 5 - 3
0.20x = 2
x = 2 / 0.20
x = 10 liters
So, you need to add 10 liters of a 70% acid solution to the initial 10-liter 30% acid solution to achieve a 50% acid mixture.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -c 2048
```
| null |
Non_BioNLP
|
# Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3.2-2b-instruct`](https://huggingface.co/ibm-granite/granite-3.2-2b-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.2-2b-instruct) for more details on the model.
---
Model Summary:
-
Granite-3.2-2B-Instruct is an 2-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of Granite-3.1-2B-Instruct,
it has been trained using a mix of permissively licensed open-source
datasets and internally generated synthetic data designed for reasoning
tasks. The model allows controllability of its thinking capability,
ensuring it is applied only when required.
Developers: Granite Team, IBM
Website: Granite Docs
Release Date: February 26th, 2025
License: Apache 2.0
Supported Languages:
-
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech,
Italian, Korean, Dutch, and Chinese. However, users may finetune this
Granite model for languages beyond these 12 languages.
Intended Use:
-
This model is designed to handle general instruction-following tasks and
can be integrated into AI assistants across various domains, including
business applications.
Capabilities
-
Thinking
Summarization
Text classification
Text extraction
Question-answering
Retrieval Augmented Generation (RAG)
Code related tasks
Function-calling tasks
Multilingual dialog use cases
Long-context tasks including long document/meeting summarization, long document QA, etc.
Generation:
-
This is a simple example of how to use Granite-3.2-2B-Instruct model.
Install the following libraries:
-
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
Then, copy the snippet from the section that is relevant for your use case.
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch
model_path="ibm-granite/granite-3.2-2b-instruct"
device="cuda"
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map=device,
torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
model_path
)
conv = [{"role": "user", "content":"You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}]
input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)
set_seed(42)
output = model.generate(
**input_ids,
max_new_tokens=8192,
)
prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
Example Outputs
thinking=True
Here is my thought process:
1. First, let's understand the composition of the initial 10 liters of 30% acid solution:
- Acid content = 30% of 10 liters = 0.30 * 10 = 3 liters
2. We want to end up with a 50% acid solution. Let's denote the volume of the 70% acid solution to be added as 'x' liters.
3. The amount of acid in the final mixture (50% solution) should equal the initial acid content plus the acid from the added 70% solution.
- Total acid in final mixture = 0.50 * (10 + x)
4. The equation representing the total acid content is:
3 (from initial solution) + 0.70 * x (from added solution) = 0.50 * (10 + x)
5. Now, we solve for 'x':
3 + 0.70x = 5 + 0.50x
0.70x - 0.50x = 5 - 3
0.20x = 2
x = 2 / 0.20
x = 10 liters
6. Therefore, 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution to achieve a 50% acid mixture.
Here is my response:
To solve this problem, we need to determine how much of a 70% acid solution must be added to a 10-liter solution that is already 30% acid to create a new mixture that is 50% acid. Here's how to calculate it:
1. **Initial Acid Content**:
- The initial 10-liter solution contains 3 liters of acid (since 30% of 10 liters).
2. **Desired Final Acid Content**:
- We want the final mixture to be 50% acid.
3. **Equation Setup**:
- Let 'x' be the volume (in liters) of the 70% acid solution to be added.
- The equation representing the total acid content in the final mixture (which is 10 liters from the initial solution plus 'x' liters of the added solution) should equal 50% of the total volume (10 + x).
- This gives us: 3 (acid from initial solution) + 0.70x (acid from added solution) = 0.50(10 + x)
4. **Solving the Equation**:
- First, simplify the right side of the equation: 0.50(10 + x) = 5 + 0.50x
- Now, the equation is: 3 + 0.70x = 5 + 0.50x
- Subtract 0.50x from both sides: 0.20x = 2
- Divide both sides by 0.20: x = 2 / 0.20
- x = 10 liters
5. **Conclusion**:
- To achieve a 50% acid mixture from the initial 30% solution, you need to add 10 liters of a 70% acid solution.
So, the final answer is that 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution.
thinking=False
To solve this problem, we'll use the concept of volume and concentration. Let's denote the amount of 70% acid solution we need to add as "x" liters.
First, let's find out how much acid is in the initial 10-liter solution:
Acid in initial solution = 30% of 10 liters = 0.30 * 10 = 3 liters
Now, we want to end up with a 50% acid solution in a total volume of (10 + x) liters. Let's denote the final volume as V.
Final acid concentration = 50%
Final acid amount = 50% of V = 0.50 * V
We know the initial acid amount and the final acid amount, so we can set up an equation:
Initial acid amount + Acid from added solution = Final acid amount
3 liters + (70% of x) = 0.50 * (10 + x)
Now, let's solve for x:
0.70x + 3 = 0.50 * 10 + 0.50x
0.70x - 0.50x = 0.50 * 10 - 3
0.20x = 5 - 3
0.20x = 2
x = 2 / 0.20
x = 10 liters
So, you need to add 10 liters of a 70% acid solution to the initial 10-liter 30% acid solution to achieve a 50% acid mixture.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -c 2048
```
|
{"base_model": "ibm-granite/granite-3.2-2b-instruct", "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["language", "granite-3.2", "llama-cpp", "gguf-my-repo"], "inference": false}
|
task
|
[
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | 46,844 |
tcepi/sts_bertimbau
|
tcepi
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-10-23T13:36:44Z |
2024-10-23T13:37:17+00:00
| 7 | 0 |
---
base_model: neuralmind/bert-base-portuguese-cased
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer based on neuralmind/bert-base-portuguese-cased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) <!-- at revision 94d69c95f98f7d5b2a8700c420230ae10def0baa -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tcepi/sts_bertimbau")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.8.10
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on neuralmind/bert-base-portuguese-cased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) <!-- at revision 94d69c95f98f7d5b2a8700c420230ae10def0baa -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tcepi/sts_bertimbau")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.8.10
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "neuralmind/bert-base-portuguese-cased", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,845 |
proxectonos/Nos_MT-OpenNMT-es-gl
|
proxectonos
| null |
[
"gl",
"license:mit",
"region:us"
] | 2023-02-16T09:27:38Z |
2025-04-11T11:14:47+00:00
| 0 | 1 |
---
language:
- gl
license: mit
metrics:
- bleu (Gold1): 79.6
- bleu (Gold2): 43.3
- bleu (Flores): 21.8
- bleu (Test-suite): 74.3
---
**English text [here](https://huggingface.co/proxectonos/NOS-MT-OpenNMT-es-gl/blob/main/README_English.md)**
**Descrición do Modelo**
Modelo feito con OpenNMT-py 3.2 para o par español-galego utilizando unha arquitectura transformer. O modelo foi transformado para o formato da ctranslate2.
**Como traducir con este Modelo**
+ Instalar o [Python 3.9](https://www.python.org/downloads/release/python-390/)
+ Instalar o [ctranslate 3.2](https://github.com/OpenNMT/CTranslate2)
+ Traducir un input_text utilizando o modelo NOS-MT-es-gl co seguinte comando:
```bash
perl tokenizer.perl < input.txt > input.tok
```
```bash
subword_nmt.apply_bpe -c ./bpe/es.bpe < input.tok > input.bpe
```
```bash
python3 translate.py ./ct2-es-gl_12L input.bpe > output.txt
```
```bash
sed -i 's/@@ //g' output.txt
```
**Adestramento**
No adestramento, utilizamos córpora auténticos e sintéticos do [ProxectoNós](https://github.com/proxectonos/corpora). Os primeiros son córpora de traducións feitas directamente por tradutores humanos. É importante salientar que a pesar destes textos seren feitos por humanos, non están libres de erros lingüísticos. Os segundos son córpora de traducións español-portugués, que convertemos en español-galego a través da tradución automática portugués-galego con Opentrad/Apertium e transliteración para palabras fóra de vocabulario.
**Procedemento de adestramento**
+ Tokenización dos datasets feita co tokenizador (tokenizer.pl) de [linguakit](https://github.com/citiususc/Linguakit) que foi modificado para evitar o salto de liña por token do ficheiro orixinal.
+ O vocabulario BPE para os modelos foi xerado a través do script [learn_bpe.py](https://github.com/OpenNMT/OpenNMT-py/blob/master/tools/learn_bpe.py) da OpenNMT
+ Utilizando o .yaml deste repositorio pode replicar o proceso de adestramento. É preciso modificar os paths do ficheiro .yaml para a Open NMT saber onde ir buscar os textos. Após facer isto, pode do seguinte xeito comezar o proceso:
```bash
onmt_build_vocab -config bpe-es-gl_emb.yaml -n_sample 40000
onmt_train -config bpe-es-gl_emb.yaml
```
**Hiperparámetros**
Os parámetros usados para o desenvolvemento do modelo poden ser consultados directamente no mesmo ficheiro .yaml bpe-es-gl_emb.yaml
**Avaliación**
A avaliación BLEU dos modelos é feita cunha mistura de tests desenvolvidos internamente (gold1, gold2, test-suite) con outros datasets disponíbeis en galego (Flores).
| GOLD 1 | GOLD 2 | FLORES | TEST-SUITE|
| ------------- |:-------------:| -------:|----------:|
| 79.5 | 43.5 | 21.4 | 73.4 |
**Licenzas do Modelo**
MIT License
Copyright (c) 2023 Proxecto Nós
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
**Financiamento**
Esta investigación foi financiada polo proxecto "Nós: o galego na sociedade e economía da intelixencia artificial", resultado dun acordo entre a Xunta de Galicia e a Universidade de Santiago de Compostela, o que resultou no subsidio ED431G2019/04 da Consellaría de Educación, Universidade e Formación Profesional da Galiza, e polo Fondo Europeo de Desenvolvemento Rexional (programa ERDF/FEDER), e Grupos de Referencia: ED431C 2020/21.
**Citar este traballo**
Se utilizar este modelo no seu traballo, cite por favor así:
Daniel Bardanca Outeirinho, Pablo Gamallo Otero, Iria de-Dios-Flores, and José Ramom Pichel Campos. 2024.
Exploring the effects of vocabulary size in neural machine translation: Galician as a target language.
In Proceedings of the 16th International Conference on Computational Processing of Portuguese, pages 600–604,
Santiago de Compostela, Galiza. Association for Computational Lingustics.
| null |
Non_BioNLP
|
**English text [here](https://huggingface.co/proxectonos/NOS-MT-OpenNMT-es-gl/blob/main/README_English.md)**
**Descrición do Modelo**
Modelo feito con OpenNMT-py 3.2 para o par español-galego utilizando unha arquitectura transformer. O modelo foi transformado para o formato da ctranslate2.
**Como traducir con este Modelo**
+ Instalar o [Python 3.9](https://www.python.org/downloads/release/python-390/)
+ Instalar o [ctranslate 3.2](https://github.com/OpenNMT/CTranslate2)
+ Traducir un input_text utilizando o modelo NOS-MT-es-gl co seguinte comando:
```bash
perl tokenizer.perl < input.txt > input.tok
```
```bash
subword_nmt.apply_bpe -c ./bpe/es.bpe < input.tok > input.bpe
```
```bash
python3 translate.py ./ct2-es-gl_12L input.bpe > output.txt
```
```bash
sed -i 's/@@ //g' output.txt
```
**Adestramento**
No adestramento, utilizamos córpora auténticos e sintéticos do [ProxectoNós](https://github.com/proxectonos/corpora). Os primeiros son córpora de traducións feitas directamente por tradutores humanos. É importante salientar que a pesar destes textos seren feitos por humanos, non están libres de erros lingüísticos. Os segundos son córpora de traducións español-portugués, que convertemos en español-galego a través da tradución automática portugués-galego con Opentrad/Apertium e transliteración para palabras fóra de vocabulario.
**Procedemento de adestramento**
+ Tokenización dos datasets feita co tokenizador (tokenizer.pl) de [linguakit](https://github.com/citiususc/Linguakit) que foi modificado para evitar o salto de liña por token do ficheiro orixinal.
+ O vocabulario BPE para os modelos foi xerado a través do script [learn_bpe.py](https://github.com/OpenNMT/OpenNMT-py/blob/master/tools/learn_bpe.py) da OpenNMT
+ Utilizando o .yaml deste repositorio pode replicar o proceso de adestramento. É preciso modificar os paths do ficheiro .yaml para a Open NMT saber onde ir buscar os textos. Após facer isto, pode do seguinte xeito comezar o proceso:
```bash
onmt_build_vocab -config bpe-es-gl_emb.yaml -n_sample 40000
onmt_train -config bpe-es-gl_emb.yaml
```
**Hiperparámetros**
Os parámetros usados para o desenvolvemento do modelo poden ser consultados directamente no mesmo ficheiro .yaml bpe-es-gl_emb.yaml
**Avaliación**
A avaliación BLEU dos modelos é feita cunha mistura de tests desenvolvidos internamente (gold1, gold2, test-suite) con outros datasets disponíbeis en galego (Flores).
| GOLD 1 | GOLD 2 | FLORES | TEST-SUITE|
| ------------- |:-------------:| -------:|----------:|
| 79.5 | 43.5 | 21.4 | 73.4 |
**Licenzas do Modelo**
MIT License
Copyright (c) 2023 Proxecto Nós
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
**Financiamento**
This model was developed within the Nós Project, funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the [project ILENIA] (https://proyectoilenia.es/) with reference 2022/TL22/00215336.
**Citar este traballo**
Se utilizar este modelo no seu traballo, cite por favor así:
Daniel Bardanca Outeirinho, Pablo Gamallo Otero, Iria de-Dios-Flores, and José Ramom Pichel Campos. 2024.
Exploring the effects of vocabulary size in neural machine translation: Galician as a target language.
In Proceedings of the 16th International Conference on Computational Processing of Portuguese, pages 600–604,
Santiago de Compostela, Galiza. Association for Computational Lingustics.
|
{"language": ["gl"], "license": "mit", "metrics": [{"bleu (Gold1)": 79.6}, {"bleu (Gold2)": 43.3}, {"bleu (Flores)": 21.8}, {"bleu (Test-suite)": 74.3}]}
|
task
|
[
"TRANSLATION"
] | 46,846 |
chunwoolee0/seqcls_mrpc_bert_base_uncased_model
|
chunwoolee0
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-14T23:27:51Z |
2023-07-14T23:32:36+00:00
| 8 | 0 |
---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: seqcls_mrpc_bert_base_uncased_model
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- type: accuracy
value: 0.8014705882352942
name: Accuracy
- type: f1
value: 0.8669950738916257
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# seqcls_mrpc_bert_base_uncased_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4621
- Accuracy: 0.8015
- F1: 0.8670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 58 | 0.5442 | 0.7108 | 0.8228 |
| No log | 2.0 | 116 | 0.5079 | 0.7745 | 0.8558 |
| No log | 3.0 | 174 | 0.4621 | 0.8015 | 0.8670 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# seqcls_mrpc_bert_base_uncased_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4621
- Accuracy: 0.8015
- F1: 0.8670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 58 | 0.5442 | 0.7108 | 0.8228 |
| No log | 2.0 | 116 | 0.5079 | 0.7745 | 0.8558 |
| No log | 3.0 | 174 | 0.4621 | 0.8015 | 0.8670 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "seqcls_mrpc_bert_base_uncased_model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "mrpc", "split": "validation", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8014705882352942, "name": "Accuracy"}, {"type": "f1", "value": 0.8669950738916257, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,847 |
pierreguillou/bert-large-cased-squad-v1.1-portuguese
|
pierreguillou
|
question-answering
|
[
"transformers",
"pytorch",
"tf",
"bert",
"question-answering",
"bert-large",
"pt",
"dataset:brWaC",
"dataset:squad",
"dataset:squad_v1_pt",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2022-01-04T09:57:00+00:00
| 777 | 45 |
---
datasets:
- brWaC
- squad
- squad_v1_pt
language: pt
license: mit
metrics:
- squad
tags:
- question-answering
- bert
- bert-large
- pytorch
widget:
- text: Quando começou a pandemia de Covid-19 no mundo?
context: A pandemia de COVID-19, também conhecida como pandemia de coronavírus,
é uma pandemia em curso de COVID-19, uma doença respiratória causada pelo coronavírus
da síndrome respiratória aguda grave 2 (SARS-CoV-2). O vírus tem origem zoonótica
e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na
China.
- text: Onde foi descoberta a Covid-19?
context: A pandemia de COVID-19, também conhecida como pandemia de coronavírus,
é uma pandemia em curso de COVID-19, uma doença respiratória causada pelo coronavírus
da síndrome respiratória aguda grave 2 (SARS-CoV-2). O vírus tem origem zoonótica
e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na
China.
---
# Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1

## Introduction
The model was trained on the dataset SQUAD v1.1 in portuguese from the [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/).
The language model used is the [BERTimbau Large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) (aka "bert-large-portuguese-cased") from [Neuralmind.ai](https://neuralmind.ai/): BERTimbau is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
## Informations on the method used
All the informations are in the blog post : [NLP | Como treinar um modelo de Question Answering em qualquer linguagem baseado no BERT large, melhorando o desempenho do modelo utilizando o BERT base? (estudo de caso em português)](https://medium.com/@pierre_guillou/nlp-como-treinar-um-modelo-de-question-answering-em-qualquer-linguagem-baseado-no-bert-large-1c899262dd96)
## Notebook in GitHub
[question_answering_BERT_large_cased_squad_v11_pt.ipynb](https://github.com/piegu/language-models/blob/master/question_answering_BERT_large_cased_squad_v11_pt.ipynb) ([nbviewer version](https://nbviewer.jupyter.org/github/piegu/language-models/blob/master/question_answering_BERT_large_cased_squad_v11_pt.ipynb))
## Performance
The results obtained are the following:
```
f1 = 84.43 (against 82.50 for the base model)
exact match = 72.68 (against 70.49 for the base model)
```
## How to use the model... with Pipeline
```python
import transformers
from transformers import pipeline
# source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19
context = r"""
A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19,
uma doença respiratória causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2).
O vírus tem origem zoonótica e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na China.
Em 20 de janeiro de 2020, a Organização Mundial da Saúde (OMS) classificou o surto
como Emergência de Saúde Pública de Âmbito Internacional e, em 11 de março de 2020, como pandemia.
Em 18 de junho de 2021, 177 349 274 casos foram confirmados em 192 países e territórios,
com 3 840 181 mortes atribuídas à doença, tornando-se uma das pandemias mais mortais da história.
Os sintomas de COVID-19 são altamente variáveis, variando de nenhum a doenças com risco de morte.
O vírus se espalha principalmente pelo ar quando as pessoas estão perto umas das outras.
Ele deixa uma pessoa infectada quando ela respira, tosse, espirra ou fala e entra em outra pessoa pela boca, nariz ou olhos.
Ele também pode se espalhar através de superfícies contaminadas.
As pessoas permanecem contagiosas por até duas semanas e podem espalhar o vírus mesmo se forem assintomáticas.
"""
model_name = 'pierreguillou/bert-large-cased-squad-v1.1-portuguese'
nlp = pipeline("question-answering", model=model_name)
question = "Quando começou a pandemia de Covid-19 no mundo?"
result = nlp(question=question, context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
# Answer: 'dezembro de 2019', score: 0.5087, start: 290, end: 306
```
## How to use the model... with the Auto classes
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-large-cased-squad-v1.1-portuguese")
model = AutoModelForQuestionAnswering.from_pretrained("pierreguillou/bert-large-cased-squad-v1.1-portuguese")
```
Or just clone the model repo:
```python
git lfs install
git clone https://huggingface.co/pierreguillou/bert-large-cased-squad-v1.1-portuguese
# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1
```
## Limitations and bias
The training data used for this model come from Portuguese SQUAD. It could contain a lot of unfiltered content, which is far from neutral, and biases.
## Author
Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1 was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the Open Source code, platforms and advices of many organizations ([link to the list](https://medium.com/@pierre_guillou/nlp-como-treinar-um-modelo-de-question-answering-em-qualquer-linguagem-baseado-no-bert-large-1c899262dd96#c2f5)). In particular: [Hugging Face](https://huggingface.co/), [Neuralmind.ai](https://neuralmind.ai/), [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) and [AI Lab](https://ailab.unb.br/).
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{pierreguillou2021bertlargecasedsquadv11portuguese,
title={Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1},
author={Pierre Guillou},
year={2021}
}
```
| null |
TBD
|
# Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1

## Introduction
The model was trained on the dataset SQUAD v1.1 in portuguese from the [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/).
The language model used is the [BERTimbau Large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) (aka "bert-large-portuguese-cased") from [Neuralmind.ai](https://neuralmind.ai/): BERTimbau is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
## Informations on the method used
All the informations are in the blog post : [NLP | Como treinar um modelo de Question Answering em qualquer linguagem baseado no BERT large, melhorando o desempenho do modelo utilizando o BERT base? (estudo de caso em português)](https://medium.com/@pierre_guillou/nlp-como-treinar-um-modelo-de-question-answering-em-qualquer-linguagem-baseado-no-bert-large-1c899262dd96)
## Notebook in GitHub
[question_answering_BERT_large_cased_squad_v11_pt.ipynb](https://github.com/piegu/language-models/blob/master/question_answering_BERT_large_cased_squad_v11_pt.ipynb) ([nbviewer version](https://nbviewer.jupyter.org/github/piegu/language-models/blob/master/question_answering_BERT_large_cased_squad_v11_pt.ipynb))
## Performance
The results obtained are the following:
```
f1 = 84.43 (against 82.50 for the base model)
exact match = 72.68 (against 70.49 for the base model)
```
## How to use the model... with Pipeline
```python
import transformers
from transformers import pipeline
# source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19
context = r"""
A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19,
uma doença respiratória causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2).
O vírus tem origem zoonótica e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na China.
Em 20 de janeiro de 2020, a Organização Mundial da Saúde (OMS) classificou o surto
como Emergência de Saúde Pública de Âmbito Internacional e, em 11 de março de 2020, como pandemia.
Em 18 de junho de 2021, 177 349 274 casos foram confirmados em 192 países e territórios,
com 3 840 181 mortes atribuídas à doença, tornando-se uma das pandemias mais mortais da história.
Os sintomas de COVID-19 são altamente variáveis, variando de nenhum a doenças com risco de morte.
O vírus se espalha principalmente pelo ar quando as pessoas estão perto umas das outras.
Ele deixa uma pessoa infectada quando ela respira, tosse, espirra ou fala e entra em outra pessoa pela boca, nariz ou olhos.
Ele também pode se espalhar através de superfícies contaminadas.
As pessoas permanecem contagiosas por até duas semanas e podem espalhar o vírus mesmo se forem assintomáticas.
"""
model_name = 'pierreguillou/bert-large-cased-squad-v1.1-portuguese'
nlp = pipeline("question-answering", model=model_name)
question = "Quando começou a pandemia de Covid-19 no mundo?"
result = nlp(question=question, context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
# Answer: 'dezembro de 2019', score: 0.5087, start: 290, end: 306
```
## How to use the model... with the Auto classes
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-large-cased-squad-v1.1-portuguese")
model = AutoModelForQuestionAnswering.from_pretrained("pierreguillou/bert-large-cased-squad-v1.1-portuguese")
```
Or just clone the model repo:
```python
git lfs install
git clone https://huggingface.co/pierreguillou/bert-large-cased-squad-v1.1-portuguese
# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1
```
## Limitations and bias
The training data used for this model come from Portuguese SQUAD. It could contain a lot of unfiltered content, which is far from neutral, and biases.
## Author
Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1 was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the Open Source code, platforms and advices of many organizations ([link to the list](https://medium.com/@pierre_guillou/nlp-como-treinar-um-modelo-de-question-answering-em-qualquer-linguagem-baseado-no-bert-large-1c899262dd96#c2f5)). In particular: [Hugging Face](https://huggingface.co/), [Neuralmind.ai](https://neuralmind.ai/), [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) and [AI Lab](https://ailab.unb.br/).
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{pierreguillou2021bertlargecasedsquadv11portuguese,
title={Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1},
author={Pierre Guillou},
year={2021}
}
```
|
{"datasets": ["brWaC", "squad", "squad_v1_pt"], "language": "pt", "license": "mit", "metrics": ["squad"], "tags": ["question-answering", "bert", "bert-large", "pytorch"], "widget": [{"text": "Quando começou a pandemia de Covid-19 no mundo?", "context": "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). O vírus tem origem zoonótica e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na China."}, {"text": "Onde foi descoberta a Covid-19?", "context": "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). O vírus tem origem zoonótica e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na China."}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION",
"QUESTION_ANSWERING",
"TEXTUAL_ENTAILMENT"
] | 46,848 |
kunalr63/my_awesome_model
|
kunalr63
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-16T13:00:33Z |
2023-04-16T13:33:32+00:00
| 14 | 0 |
---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: my_awesome_model
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.92768
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1887
- Accuracy: 0.9277
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2233 | 1.0 | 1563 | 0.1887 | 0.9277 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1887
- Accuracy: 0.9277
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2233 | 1.0 | 1563 | 0.1887 | 0.9277 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "my_awesome_model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.92768, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,849 |
gaudi/opus-mt-fr-ht-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-22T15:57:36Z |
2024-10-19T04:26:33+00:00
| 9 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ht)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ht).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fr-ht --output_dir ./ctranslate2/opus-mt-fr-ht-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-fr-ht-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-fr-ht-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-fr-ht-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ht) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ht)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ht).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fr-ht --output_dir ./ctranslate2/opus-mt-fr-ht-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-fr-ht-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-fr-ht-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-fr-ht-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ht) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 46,850 |
RichardErkhov/EmergentMethods_-_Phi-3-mini-128k-instruct-graph-4bits
|
RichardErkhov
| null |
[
"safetensors",
"phi3",
"custom_code",
"4-bit",
"bitsandbytes",
"region:us"
] | 2025-01-18T08:48:37Z |
2025-01-18T08:50:48+00:00
| 29 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3-mini-128k-instruct-graph - bnb 4bits
- Model creator: https://huggingface.co/EmergentMethods/
- Original model: https://huggingface.co/EmergentMethods/Phi-3-mini-128k-instruct-graph/
Original model description:
---
license: cc-by-nc-sa-4.0
datasets:
- EmergentMethods/AskNews-GraphER-v0
tags:
- entity-relationship-extraction
- news-analysis
language:
- en
pipeline_tag: text-generation
---
# Model Card for Phi-3-mini-128k-instruct-graph
This model is a fine-tuned version of Microsoft's Phi-3-mini-128k-instruct, specifically tailored for entity relationship extraction from general text data. It aims to match the quality and accuracy of GPT-4 in generating entity relationship graphs while offering improved efficiency for large-scale processing.
## Model Details
- **Developed by:** [Emergent Methods](https://emergentmethods.ai/)
- **Funded by:** [Emergent Methods](https://emergentmethods.ai/)
- **Shared by:** [Emergent Methods](https://emergentmethods.ai/)
- **Model type:** microsoft/phi-3-mini-128k-instruct (fine-tuned)
- **Language(s):** English
- **License:** Creative Commons Attribution Non Commercial Share Alike 4.0
- **Finetuned from model:** [microsoft/phi-3-mini-128k-instruct](https://huggingface.co/microsoft/phi-3-mini-128k-instruct)
For more information, see our blog post.:
📰 [Blog](https://emergentmethods.medium.com/outperforming-claude-3-5-sonnet-with-phi-3-mini-4k-for-graph-entity-relationship-extraction-tasks-7c8f6c1ebd79) <br>
## Uses
This model is designed for generating structured JSON data representing entity relationships in general text data. It can be used for:
- Enhanced information retrieval in various text databases
- Exploration of temporal relationships and evolving narratives across different types of documents
- Advanced predictive modeling for trend analysis in diverse textual sources
The model is particularly useful for applications requiring high-throughput processing of large volumes of text data, such as content aggregation platforms, research databases, and comprehensive text analysis systems.
## Bias, Risks, and Limitations
Although the goal of the dataset is to reduce bias, and improve diversity, it is still biased to western languages and countries. This limitation originates from the abilities of Llama2 for the translation and summary generations. Further, any bias originating in Llama2 training data will also be present in this dataset, since Llama2 was used to summarize the open-web articles. Further, any biases present in Microsoft Phi-3 will be present in the present dataset.
## Training Details
- **Training Data:** Over 7,000 stories and updates from AskNews, curated to avoid topic overlap
- **Training Procedure:** Fine-tuned using the Transformers library, SFTTrainer, PEFT, and QLoRA
## Evaluation Results
Compared to GPT-4o (ground truth), Claude Sonnet 3.5, and the base Phi-3 model:
| Metric | Phi-3 Fine-tuned | Claude Sonnet 3.5 | Phi-3 (base) |
|:--------|:------------------|:-------------------|:--------------|
| Nodes Similarity | 0.78 | 0.64 | 0.64 |
| Edges Similarity | 0.49 | 0.41 | 0.30 |
| JSON Consistency | 0.99 | 0.97 | 0.96 |
| JSON Similarity | 0.75 | 0.67 | 0.63 |
## Environmental Impact
- **Hardware Type:** 1x A100 SXM
- **Hours used:** 3 hours
- **Carbon Emitted:** 0.44 kg (According to Machine Learning Impact calculator)
## How to Get Started with the Model (Sample inference code)
This code snippets show how to get quickly started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"EmergentMethods/Phi-3-mini-128k-instruct-graph",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("EmergentMethods/Phi-3-mini-128k-instruct-graph")
messages = [
{"role": "system", "content": """
A chat between a curious user and an artificial intelligence Assistant. The Assistant is an expert at identifying entities and relationships in text. The Assistant responds in JSON output only.
The User provides text in the format:
-------Text begin-------
<User provided text>
-------Text end-------
The Assistant follows the following steps before replying to the User:
1. **identify the most important entities** The Assistant identifies the most important entities in the text. These entities are listed in the JSON output under the key "nodes", they follow the structure of a list of dictionaries where each dict is:
"nodes":[{"id": <entity N>, "type": <type>, "detailed_type": <detailed type>}, ...]
where "type": <type> is a broad categorization of the entity. "detailed type": <detailed_type> is a very descriptive categorization of the entity.
2. **determine relationships** The Assistant uses the text between -------Text begin------- and -------Text end------- to determine the relationships between the entities identified in the "nodes" list defined above. These relationships are called "edges" and they follow the structure of:
"edges":[{"from": <entity 1>, "to": <entity 2>, "label": <relationship>}, ...]
The <entity N> must correspond to the "id" of an entity in the "nodes" list.
The Assistant never repeats the same node twice. The Assistant never repeats the same edge twice.
The Assistant responds to the User in JSON only, according to the following JSON schema:
{"type":"object","properties":{"nodes":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string"},"type":{"type":"string"},"detailed_type":{"type":"string"}},"required":["id","type","detailed_type"],"additionalProperties":false}},"edges":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"},"label":{"type":"string"}},"required":["from","to","label"],"additionalProperties":false}}},"required":["nodes","edges"],"additionalProperties":false}
"""},
{"role": "user", "content": """
-------Text begin-------
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its mission is to develop "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work".[4] As a leading organization in the ongoing AI boom,[5] OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora.[6][7] Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.
-------Text end-------
"""}
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
# Output:
# {
# "nodes": [
# {
# "id": "OpenAI",
# "type": "organization",
# "detailed_type": "ai research organization"
# },
# {
# "id": "GPT family",
# "type": "technology",
# "detailed_type": "large language models"
# },
# {
# "id": "DALL-E series",
# "type": "technology",
# "detailed_type": "text-to-image models"
# },
# {
# "id": "Sora",
# "type": "technology",
# "detailed_type": "text-to-video model"
# },
# {
# "id": "ChatGPT",
# "type": "technology",
# "detailed_type": "generative ai"
# },
# {
# "id": "San Francisco",
# "type": "location",
# "detailed_type": "city"
# },
# {
# "id": "California",
# "type": "location",
# "detailed_type": "state"
# },
# {
# "id": "December 2015",
# "type": "date",
# "detailed_type": "foundation date"
# },
# {
# "id": "November 2022",
# "type": "date",
# "detailed_type": "release date"
# }
# ],
# "edges": [
# {
# "from": "OpenAI",
# "to": "San Francisco",
# "label": "headquartered in"
# },
# {
# "from": "San Francisco",
# "to": "California",
# "label": "located in"
# },
# {
# "from": "OpenAI",
# "to": "December 2015",
# "label": "founded in"
# },
# {
# "from": "OpenAI",
# "to": "GPT family",
# "label": "developed"
# },
# {
# "from": "OpenAI",
# "to": "DALL-E series",
# "label": "developed"
# },
# {
# "from": "OpenAI",
# "to": "Sora",
# "label": "developed"
# },
# {
# "from": "OpenAI",
# "to": "ChatGPT",
# "label": "released"
# },
# {
# "from": "ChatGPT",
# "to": "November 2022",
# "label": "released in"
# }
# ]
# }
```
## Ethical Considerations
Users should be aware that this model is designed for entity relationship extraction from general text data and may not be suitable for other domains without further fine-tuning. The model's outputs should be reviewed and validated, especially when used in applications that may impact decision-making or public information dissemination.
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3-mini-128k-instruct-graph - bnb 4bits
- Model creator: https://huggingface.co/EmergentMethods/
- Original model: https://huggingface.co/EmergentMethods/Phi-3-mini-128k-instruct-graph/
Original model description:
---
license: cc-by-nc-sa-4.0
datasets:
- EmergentMethods/AskNews-GraphER-v0
tags:
- entity-relationship-extraction
- news-analysis
language:
- en
pipeline_tag: text-generation
---
# Model Card for Phi-3-mini-128k-instruct-graph
This model is a fine-tuned version of Microsoft's Phi-3-mini-128k-instruct, specifically tailored for entity relationship extraction from general text data. It aims to match the quality and accuracy of GPT-4 in generating entity relationship graphs while offering improved efficiency for large-scale processing.
## Model Details
- **Developed by:** [Emergent Methods](https://emergentmethods.ai/)
- **Funded by:** [Emergent Methods](https://emergentmethods.ai/)
- **Shared by:** [Emergent Methods](https://emergentmethods.ai/)
- **Model type:** microsoft/phi-3-mini-128k-instruct (fine-tuned)
- **Language(s):** English
- **License:** Creative Commons Attribution Non Commercial Share Alike 4.0
- **Finetuned from model:** [microsoft/phi-3-mini-128k-instruct](https://huggingface.co/microsoft/phi-3-mini-128k-instruct)
For more information, see our blog post.:
📰 [Blog](https://emergentmethods.medium.com/outperforming-claude-3-5-sonnet-with-phi-3-mini-4k-for-graph-entity-relationship-extraction-tasks-7c8f6c1ebd79) <br>
## Uses
This model is designed for generating structured JSON data representing entity relationships in general text data. It can be used for:
- Enhanced information retrieval in various text databases
- Exploration of temporal relationships and evolving narratives across different types of documents
- Advanced predictive modeling for trend analysis in diverse textual sources
The model is particularly useful for applications requiring high-throughput processing of large volumes of text data, such as content aggregation platforms, research databases, and comprehensive text analysis systems.
## Bias, Risks, and Limitations
Although the goal of the dataset is to reduce bias, and improve diversity, it is still biased to western languages and countries. This limitation originates from the abilities of Llama2 for the translation and summary generations. Further, any bias originating in Llama2 training data will also be present in this dataset, since Llama2 was used to summarize the open-web articles. Further, any biases present in Microsoft Phi-3 will be present in the present dataset.
## Training Details
- **Training Data:** Over 7,000 stories and updates from AskNews, curated to avoid topic overlap
- **Training Procedure:** Fine-tuned using the Transformers library, SFTTrainer, PEFT, and QLoRA
## Evaluation Results
Compared to GPT-4o (ground truth), Claude Sonnet 3.5, and the base Phi-3 model:
| Metric | Phi-3 Fine-tuned | Claude Sonnet 3.5 | Phi-3 (base) |
|:--------|:------------------|:-------------------|:--------------|
| Nodes Similarity | 0.78 | 0.64 | 0.64 |
| Edges Similarity | 0.49 | 0.41 | 0.30 |
| JSON Consistency | 0.99 | 0.97 | 0.96 |
| JSON Similarity | 0.75 | 0.67 | 0.63 |
## Environmental Impact
- **Hardware Type:** 1x A100 SXM
- **Hours used:** 3 hours
- **Carbon Emitted:** 0.44 kg (According to Machine Learning Impact calculator)
## How to Get Started with the Model (Sample inference code)
This code snippets show how to get quickly started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"EmergentMethods/Phi-3-mini-128k-instruct-graph",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("EmergentMethods/Phi-3-mini-128k-instruct-graph")
messages = [
{"role": "system", "content": """
A chat between a curious user and an artificial intelligence Assistant. The Assistant is an expert at identifying entities and relationships in text. The Assistant responds in JSON output only.
The User provides text in the format:
-------Text begin-------
<User provided text>
-------Text end-------
The Assistant follows the following steps before replying to the User:
1. **identify the most important entities** The Assistant identifies the most important entities in the text. These entities are listed in the JSON output under the key "nodes", they follow the structure of a list of dictionaries where each dict is:
"nodes":[{"id": <entity N>, "type": <type>, "detailed_type": <detailed type>}, ...]
where "type": <type> is a broad categorization of the entity. "detailed type": <detailed_type> is a very descriptive categorization of the entity.
2. **determine relationships** The Assistant uses the text between -------Text begin------- and -------Text end------- to determine the relationships between the entities identified in the "nodes" list defined above. These relationships are called "edges" and they follow the structure of:
"edges":[{"from": <entity 1>, "to": <entity 2>, "label": <relationship>}, ...]
The <entity N> must correspond to the "id" of an entity in the "nodes" list.
The Assistant never repeats the same node twice. The Assistant never repeats the same edge twice.
The Assistant responds to the User in JSON only, according to the following JSON schema:
{"type":"object","properties":{"nodes":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string"},"type":{"type":"string"},"detailed_type":{"type":"string"}},"required":["id","type","detailed_type"],"additionalProperties":false}},"edges":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"},"label":{"type":"string"}},"required":["from","to","label"],"additionalProperties":false}}},"required":["nodes","edges"],"additionalProperties":false}
"""},
{"role": "user", "content": """
-------Text begin-------
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its mission is to develop "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work".[4] As a leading organization in the ongoing AI boom,[5] OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora.[6][7] Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.
-------Text end-------
"""}
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
# Output:
# {
# "nodes": [
# {
# "id": "OpenAI",
# "type": "organization",
# "detailed_type": "ai research organization"
# },
# {
# "id": "GPT family",
# "type": "technology",
# "detailed_type": "large language models"
# },
# {
# "id": "DALL-E series",
# "type": "technology",
# "detailed_type": "text-to-image models"
# },
# {
# "id": "Sora",
# "type": "technology",
# "detailed_type": "text-to-video model"
# },
# {
# "id": "ChatGPT",
# "type": "technology",
# "detailed_type": "generative ai"
# },
# {
# "id": "San Francisco",
# "type": "location",
# "detailed_type": "city"
# },
# {
# "id": "California",
# "type": "location",
# "detailed_type": "state"
# },
# {
# "id": "December 2015",
# "type": "date",
# "detailed_type": "foundation date"
# },
# {
# "id": "November 2022",
# "type": "date",
# "detailed_type": "release date"
# }
# ],
# "edges": [
# {
# "from": "OpenAI",
# "to": "San Francisco",
# "label": "headquartered in"
# },
# {
# "from": "San Francisco",
# "to": "California",
# "label": "located in"
# },
# {
# "from": "OpenAI",
# "to": "December 2015",
# "label": "founded in"
# },
# {
# "from": "OpenAI",
# "to": "GPT family",
# "label": "developed"
# },
# {
# "from": "OpenAI",
# "to": "DALL-E series",
# "label": "developed"
# },
# {
# "from": "OpenAI",
# "to": "Sora",
# "label": "developed"
# },
# {
# "from": "OpenAI",
# "to": "ChatGPT",
# "label": "released"
# },
# {
# "from": "ChatGPT",
# "to": "November 2022",
# "label": "released in"
# }
# ]
# }
```
## Ethical Considerations
Users should be aware that this model is designed for entity relationship extraction from general text data and may not be suitable for other domains without further fine-tuning. The model's outputs should be reviewed and validated, especially when used in applications that may impact decision-making or public information dissemination.
|
{}
|
task
|
[
"TRANSLATION"
] | 46,851 |
HusseinEid/bert-finetuned-ner
|
HusseinEid
|
token-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"en",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-18T15:16:47Z |
2024-05-18T15:35:40+00:00
| 9 | 0 |
---
base_model: bert-base-cased
datasets:
- conll2003
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- type: precision
value: 0.9342824035755669
name: Precision
- type: recall
value: 0.9498485358465163
name: Recall
- type: f1
value: 0.9420011683217892
name: F1
- type: accuracy
value: 0.9861217401542356
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0636
- Precision: 0.9343
- Recall: 0.9498
- F1: 0.9420
- Accuracy: 0.9861
## Model description
This is a model for Named entity recognition NER
## Intended uses & limitations
Open source
## Training and evaluation data
The conll2003 dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0757 | 1.0 | 1756 | 0.0638 | 0.9215 | 0.9362 | 0.9288 | 0.9833 |
| 0.0352 | 2.0 | 3512 | 0.0667 | 0.9360 | 0.9482 | 0.9421 | 0.9858 |
| 0.0215 | 3.0 | 5268 | 0.0636 | 0.9343 | 0.9498 | 0.9420 | 0.9861 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0636
- Precision: 0.9343
- Recall: 0.9498
- F1: 0.9420
- Accuracy: 0.9861
## Model description
This is a model for Named entity recognition NER
## Intended uses & limitations
Open source
## Training and evaluation data
The conll2003 dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0757 | 1.0 | 1756 | 0.0638 | 0.9215 | 0.9362 | 0.9288 | 0.9833 |
| 0.0352 | 2.0 | 3512 | 0.0667 | 0.9360 | 0.9482 | 0.9421 | 0.9858 |
| 0.0215 | 3.0 | 5268 | 0.0636 | 0.9343 | 0.9498 | 0.9420 | 0.9861 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
{"base_model": "bert-base-cased", "datasets": ["conll2003"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["precision", "recall", "f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "config": "conll2003", "split": "validation", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9342824035755669, "name": "Precision"}, {"type": "recall", "value": 0.9498485358465163, "name": "Recall"}, {"type": "f1", "value": 0.9420011683217892, "name": "F1"}, {"type": "accuracy", "value": 0.9861217401542356, "name": "Accuracy"}]}]}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 46,852 |
Tasm/autotrain-esdxq-2v2zh
|
Tasm
|
text-classification
|
[
"tensorboard",
"safetensors",
"bert",
"autotrain",
"text-classification",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"region:us"
] | 2024-11-19T17:14:37Z |
2024-11-19T17:26:01+00:00
| 5 | 0 |
---
base_model: google-bert/bert-base-multilingual-cased
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.0839352235198021
f1: 0.8888888888888888
precision: 1.0
recall: 0.8
auc: 0.8300000000000001
accuracy: 0.9846153846153847
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.0839352235198021
f1: 0.8888888888888888
precision: 1.0
recall: 0.8
auc: 0.8300000000000001
accuracy: 0.9846153846153847
|
{"base_model": "google-bert/bert-base-multilingual-cased", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,853 |
ns0911/klue-roberta-base-klue-sts
|
ns0911
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10501",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-13T00:27:58Z |
2025-01-13T00:28:18+00:00
| 6 | 0 |
---
base_model: klue/roberta-base
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10501
- loss:CosineSimilarityLoss
widget:
- source_sentence: 차로 15분 거리에 베네치아 로마광장에 도착할 수 있습니다.
sentences:
- 베니스 로마 광장까지 차로 15분 걸립니다.
- 정말 너무나도 깨끗하고 편안한 숙소입니다.
- 처음에 집찾기가 조금 힘들었지만 집이 있는 골목까지는 가기 쉬워요!
- source_sentence: 다음번 대만 방문시 꼭 다시 오겠습니다.
sentences:
- 오늘 날씨는 비가 올까? 아니면 맑을까?
- 몇몇 분야가 아니라 전 산업 분야가 위기 상황입니다.
- 다음에 대만에 가면 꼭 돌아올게요.
- source_sentence: 경기내륙에는 얼마나 비가 올 예정이야?
sentences:
- 특히 시험장 입구는 시험장 출입구가 통일되어 있으며, 시험장 출입구마다 손 소독 및 발열 검사를 실시하고 있습니다.
- 좋은 파도가 들어오는 때는 다른 것 말고 서핑합시다.
- 오늘 조치들은 소상공인 등이 가장 긴급하게 요청하는 금융 지원 대책들입니다.
- source_sentence: 학교 성적표는 메일로 받는게 우편보다 편하고 좋을거야.
sentences:
- 학교 성적표를 기존 우편 대신 메일로 받아보세요.
- 청산리 봉고동 전투 100주년을 기념합니다
- 시몬 역에서 잘 걸어요.
- source_sentence: 주요 대책으로는 출산율 제고를 위한 주거·출산·보육 등 종합적 지원과 고령자 계속고용 활성화 및 고령자 맞춤형 일자리
지원, 노인복지 확대 등 고령화대응 정책노력의 지속이다.
sentences:
- 이중 ‘40대 일자리 대책’은 ▲직업훈련·교육 및 생계비 지원 ▲고용 지원 ▲창업 지원 ▲산업·지역 지원 등 40대 맞춤형 지원 방안이 담길
예정이다.
- 사람이 많이 붐빌때는 조금 불편함은 감안해야 합니다.
- 위치, 숙소 청결도, 호스트 모두 최고였어요.
model-index:
- name: SentenceTransformer based on klue/roberta-base
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.9618565828583842
name: Pearson Cosine
- type: spearman_cosine
value: 0.9203242816571715
name: Spearman Cosine
---
# SentenceTransformer based on klue/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [klue/roberta-base](https://huggingface.co/klue/roberta-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [klue/roberta-base](https://huggingface.co/klue/roberta-base) <!-- at revision 02f94ba5e3fcb7e2a58a390b8639b0fac974a8da -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'주요 대책으로는 출산율 제고를 위한 주거·출산·보육 등 종합적 지원과 고령자 계속고용 활성화 및 고령자 맞춤형 일자리 지원, 노인복지 확대 등 고령화대응 정책노력의 지속이다.',
'이중 ‘40대 일자리 대책’은 ▲직업훈련·교육 및 생계비 지원 ▲고용 지원 ▲창업 지원 ▲산업·지역 지원 등 40대 맞춤형 지원 방안이 담길 예정이다.',
'사람이 많이 붐빌때는 조금 불편함은 감안해야 합니다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.9619 |
| **spearman_cosine** | **0.9203** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,501 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 19.9 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 19.25 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:----------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------|
| <code>그리고 대만의 번화가인 시먼을 즐기기에 위치는 너무 좋았습니다.</code> | <code>그리고 대만 시내 중심가인 시몬을 즐기기에 좋은 장소였습니다.</code> | <code>0.7</code> |
| <code>또 최근 1주일 간 해외 유입 확진자는 140명, 일평균 20명으로 전 주 대비 일평균 2명 늘었다.</code> | <code>게다가, 해외로의 확인된 유입의 수는 전주에 비해 하루 평균 2개 증가하여 140개 그리고 하루 평균 20개가 되었습니다.</code> | <code>0.58</code> |
| <code>한국인으로서 신경쓰이는 점 적겠습니다.</code> | <code>저희 방의 경우 바닥이 삐그덕 거리는 부준이 조금더 신경쓰이는 부분이었습니다.</code> | <code>0.16</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|
| 0.7610 | 500 | 0.0277 | - |
| 1.0 | 657 | - | 0.9102 |
| 1.5221 | 1000 | 0.0082 | 0.9137 |
| 2.0 | 1314 | - | 0.9174 |
| 2.2831 | 1500 | 0.0052 | - |
| 3.0 | 1971 | - | 0.9199 |
| 3.0441 | 2000 | 0.0034 | 0.9192 |
| 3.8052 | 2500 | 0.0026 | - |
| 4.0 | 2628 | - | 0.9203 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on klue/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [klue/roberta-base](https://huggingface.co/klue/roberta-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [klue/roberta-base](https://huggingface.co/klue/roberta-base) <!-- at revision 02f94ba5e3fcb7e2a58a390b8639b0fac974a8da -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'주요 대책으로는 출산율 제고를 위한 주거·출산·보육 등 종합적 지원과 고령자 계속고용 활성화 및 고령자 맞춤형 일자리 지원, 노인복지 확대 등 고령화대응 정책노력의 지속이다.',
'이중 ‘40대 일자리 대책’은 ▲직업훈련·교육 및 생계비 지원 ▲고용 지원 ▲창업 지원 ▲산업·지역 지원 등 40대 맞춤형 지원 방안이 담길 예정이다.',
'사람이 많이 붐빌때는 조금 불편함은 감안해야 합니다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.9619 |
| **spearman_cosine** | **0.9203** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,501 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 19.9 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 19.25 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:----------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------|
| <code>그리고 대만의 번화가인 시먼을 즐기기에 위치는 너무 좋았습니다.</code> | <code>그리고 대만 시내 중심가인 시몬을 즐기기에 좋은 장소였습니다.</code> | <code>0.7</code> |
| <code>또 최근 1주일 간 해외 유입 확진자는 140명, 일평균 20명으로 전 주 대비 일평균 2명 늘었다.</code> | <code>게다가, 해외로의 확인된 유입의 수는 전주에 비해 하루 평균 2개 증가하여 140개 그리고 하루 평균 20개가 되었습니다.</code> | <code>0.58</code> |
| <code>한국인으로서 신경쓰이는 점 적겠습니다.</code> | <code>저희 방의 경우 바닥이 삐그덕 거리는 부준이 조금더 신경쓰이는 부분이었습니다.</code> | <code>0.16</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|
| 0.7610 | 500 | 0.0277 | - |
| 1.0 | 657 | - | 0.9102 |
| 1.5221 | 1000 | 0.0082 | 0.9137 |
| 2.0 | 1314 | - | 0.9174 |
| 2.2831 | 1500 | 0.0052 | - |
| 3.0 | 1971 | - | 0.9199 |
| 3.0441 | 2000 | 0.0034 | 0.9192 |
| 3.8052 | 2500 | 0.0026 | - |
| 4.0 | 2628 | - | 0.9203 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "klue/roberta-base", "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:10501", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "차로 15분 거리에 베네치아 로마광장에 도착할 수 있습니다.", "sentences": ["베니스 로마 광장까지 차로 15분 걸립니다.", "정말 너무나도 깨끗하고 편안한 숙소입니다.", "처음에 집찾기가 조금 힘들었지만 집이 있는 골목까지는 가기 쉬워요!"]}, {"source_sentence": "다음번 대만 방문시 꼭 다시 오겠습니다.", "sentences": ["오늘 날씨는 비가 올까? 아니면 맑을까?", "몇몇 분야가 아니라 전 산업 분야가 위기 상황입니다.", "다음에 대만에 가면 꼭 돌아올게요."]}, {"source_sentence": "경기내륙에는 얼마나 비가 올 예정이야?", "sentences": ["특히 시험장 입구는 시험장 출입구가 통일되어 있으며, 시험장 출입구마다 손 소독 및 발열 검사를 실시하고 있습니다.", "좋은 파도가 들어오는 때는 다른 것 말고 서핑합시다.", "오늘 조치들은 소상공인 등이 가장 긴급하게 요청하는 금융 지원 대책들입니다."]}, {"source_sentence": "학교 성적표는 메일로 받는게 우편보다 편하고 좋을거야.", "sentences": ["학교 성적표를 기존 우편 대신 메일로 받아보세요.", "청산리 봉고동 전투 100주년을 기념합니다", "시몬 역에서 잘 걸어요."]}, {"source_sentence": "주요 대책으로는 출산율 제고를 위한 주거·출산·보육 등 종합적 지원과 고령자 계속고용 활성화 및 고령자 맞춤형 일자리 지원, 노인복지 확대 등 고령화대응 정책노력의 지속이다.", "sentences": ["이중 ‘40대 일자리 대책’은 ▲직업훈련·교육 및 생계비 지원 ▲고용 지원 ▲창업 지원 ▲산업·지역 지원 등 40대 맞춤형 지원 방안이 담길 예정이다.", "사람이 많이 붐빌때는 조금 불편함은 감안해야 합니다.", "위치, 숙소 청결도, 호스트 모두 최고였어요."]}], "model-index": [{"name": "SentenceTransformer based on klue/roberta-base", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "pearson_cosine", "value": 0.9618565828583842, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.9203242816571715, "name": "Spearman Cosine"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | 46,854 |
fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184
|
fine-tuned
|
feature-extraction
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"custom_code",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-02T15:16:45Z |
2024-05-02T15:17:00+00:00
| 6 | 0 |
---
{}
---
# fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184
## Model Description
fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184 is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
This the dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184).
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
llm_name = "fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)
tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
| null |
Non_BioNLP
|
# fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184
## Model Description
fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184 is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
This the dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184).
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
llm_name = "fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)
tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
|
{}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,855 |
jeff-RQ/new-test-model
|
jeff-RQ
|
image-to-text
|
[
"transformers",
"pytorch",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-07-04T14:52:07Z |
2023-07-05T15:01:24+00:00
| 144 | 0 |
---
language: en
license: mit
pipeline_tag: image-to-text
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
duplicated_from: Salesforce/blip2-opt-2.7b
---
# BLIP-2, OPT-2.7b, pre-trained only
BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In 8-bit precision (`int8`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
| null |
Non_BioNLP
|
# BLIP-2, OPT-2.7b, pre-trained only
BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In 8-bit precision (`int8`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
|
{"language": "en", "license": "mit", "pipeline_tag": "image-to-text", "tags": ["vision", "image-to-text", "image-captioning", "visual-question-answering"], "duplicated_from": "Salesforce/blip2-opt-2.7b"}
|
task
|
[
"QUESTION_ANSWERING"
] | 46,856 |
irusl/05newa1
|
irusl
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"merges",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:finetune:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-15T09:01:46Z |
2024-07-15T09:04:58+00:00
| 6 | 0 |
---
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
datasets:
- teknium/OpenHermes-2.5
language:
- en
license: apache-2.0
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- merges
widget:
- example_title: Hermes 2 Pro Llama-3 Instruct Merge
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
results: []
---
# - Hermes-2 Θ Llama-3 8B

## Model Description
Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model to form a new model, Hermes-2 Θ, combining the best of both worlds of each model.
## Example Outputs
### Create New Mythos:

### Chat with a Meta-Cognitive Entity

### Ask for a structured JSON output:

# Prompt Format
Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.
# Benchmarks

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5529|± |0.0145|
| | |acc_norm|0.5870|± |0.0144|
|arc_easy | 0|acc |0.8371|± |0.0076|
| | |acc_norm|0.8144|± |0.0080|
|boolq | 1|acc |0.8599|± |0.0061|
|hellaswag | 0|acc |0.6133|± |0.0049|
| | |acc_norm|0.7989|± |0.0040|
|openbookqa | 0|acc |0.3940|± |0.0219|
| | |acc_norm|0.4680|± |0.0223|
|piqa | 0|acc |0.8063|± |0.0092|
| | |acc_norm|0.8156|± |0.0090|
|winogrande | 0|acc |0.7372|± |0.0124|
```
Average: 72.59
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2441|± |0.0270|
| | |acc_norm|0.2441|± |0.0270|
|agieval_logiqa_en | 0|acc |0.3687|± |0.0189|
| | |acc_norm|0.3840|± |0.0191|
|agieval_lsat_ar | 0|acc |0.2304|± |0.0278|
| | |acc_norm|0.2174|± |0.0273|
|agieval_lsat_lr | 0|acc |0.5471|± |0.0221|
| | |acc_norm|0.5373|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.7670|± |0.0295|
| | |acc_norm|0.7379|± |0.0307|
|agieval_sat_en_without_passage| 0|acc |0.4417|± |0.0347|
| | |acc_norm|0.4223|± |0.0345|
|agieval_sat_math | 0|acc |0.4000|± |0.0331|
| | |acc_norm|0.3455|± |0.0321|
```
Average: 44.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6000|± |0.0356|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6585|± |0.0247|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2340|± |0.0224|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2980|± |0.0205|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2057|± |0.0153|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5367|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4040|± |0.0220|
|bigbench_navigate | 0|multiple_choice_grade|0.4970|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7075|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4821|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2295|± |0.0133|
|bigbench_snarks | 0|multiple_choice_grade|0.6906|± |0.0345|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5375|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.6270|± |0.0153|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2216|± |0.0118|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1594|± |0.0088|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5367|± |0.0288|
```
Average: 44.13
**IFEval**: 72.64
**MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
# How to cite:
```bibtext
@misc{Hermes-2-Theta-Llama-3-8B,
url={[https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B][NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B))},
title={Hermes-2-Theta-Llama-3-8B},
author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
| null |
Non_BioNLP
|
# - Hermes-2 Θ Llama-3 8B

## Model Description
Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model to form a new model, Hermes-2 Θ, combining the best of both worlds of each model.
## Example Outputs
### Create New Mythos:

### Chat with a Meta-Cognitive Entity

### Ask for a structured JSON output:

# Prompt Format
Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.
# Benchmarks

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5529|± |0.0145|
| | |acc_norm|0.5870|± |0.0144|
|arc_easy | 0|acc |0.8371|± |0.0076|
| | |acc_norm|0.8144|± |0.0080|
|boolq | 1|acc |0.8599|± |0.0061|
|hellaswag | 0|acc |0.6133|± |0.0049|
| | |acc_norm|0.7989|± |0.0040|
|openbookqa | 0|acc |0.3940|± |0.0219|
| | |acc_norm|0.4680|± |0.0223|
|piqa | 0|acc |0.8063|± |0.0092|
| | |acc_norm|0.8156|± |0.0090|
|winogrande | 0|acc |0.7372|± |0.0124|
```
Average: 72.59
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2441|± |0.0270|
| | |acc_norm|0.2441|± |0.0270|
|agieval_logiqa_en | 0|acc |0.3687|± |0.0189|
| | |acc_norm|0.3840|± |0.0191|
|agieval_lsat_ar | 0|acc |0.2304|± |0.0278|
| | |acc_norm|0.2174|± |0.0273|
|agieval_lsat_lr | 0|acc |0.5471|± |0.0221|
| | |acc_norm|0.5373|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.7670|± |0.0295|
| | |acc_norm|0.7379|± |0.0307|
|agieval_sat_en_without_passage| 0|acc |0.4417|± |0.0347|
| | |acc_norm|0.4223|± |0.0345|
|agieval_sat_math | 0|acc |0.4000|± |0.0331|
| | |acc_norm|0.3455|± |0.0321|
```
Average: 44.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6000|± |0.0356|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6585|± |0.0247|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2340|± |0.0224|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2980|± |0.0205|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2057|± |0.0153|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5367|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4040|± |0.0220|
|bigbench_navigate | 0|multiple_choice_grade|0.4970|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7075|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4821|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2295|± |0.0133|
|bigbench_snarks | 0|multiple_choice_grade|0.6906|± |0.0345|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5375|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.6270|± |0.0153|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2216|± |0.0118|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1594|± |0.0088|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5367|± |0.0288|
```
Average: 44.13
**IFEval**: 72.64
**MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
# How to cite:
```bibtext
@misc{Hermes-2-Theta-Llama-3-8B,
url={[https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B][NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B))},
title={Hermes-2-Theta-Llama-3-8B},
author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
|
{"base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B", "datasets": ["teknium/OpenHermes-2.5"], "language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges"], "widget": [{"example_title": "Hermes 2 Pro Llama-3 Instruct Merge", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "model-index": [{"name": "Hermes-2-Pro-Llama-3-Instruct-8B-Merge", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 46,857 |
muhtasham/finetuned-mlm_mini
|
muhtasham
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-12-03T01:33:36Z |
2022-12-03T01:52:06+00:00
| 11 | 0 |
---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuned-mlm_mini
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- type: accuracy
value: 0.86176
name: Accuracy
- type: f1
value: 0.925747679614988
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-mlm_mini
This model is a fine-tuned version of [muhtasham/bert-mini-mlm-finetuned-emotion](https://huggingface.co/muhtasham/bert-mini-mlm-finetuned-emotion) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6338
- Accuracy: 0.8618
- F1: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3398 | 2.55 | 500 | 0.2366 | 0.9032 | 0.9491 |
| 0.2028 | 5.1 | 1000 | 0.2750 | 0.8956 | 0.9449 |
| 0.1382 | 7.65 | 1500 | 0.2798 | 0.9034 | 0.9492 |
| 0.0899 | 10.2 | 2000 | 0.4193 | 0.882 | 0.9373 |
| 0.0624 | 12.76 | 2500 | 0.5203 | 0.864 | 0.9270 |
| 0.0417 | 15.31 | 3000 | 0.6338 | 0.8618 | 0.9257 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-mlm_mini
This model is a fine-tuned version of [muhtasham/bert-mini-mlm-finetuned-emotion](https://huggingface.co/muhtasham/bert-mini-mlm-finetuned-emotion) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6338
- Accuracy: 0.8618
- F1: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3398 | 2.55 | 500 | 0.2366 | 0.9032 | 0.9491 |
| 0.2028 | 5.1 | 1000 | 0.2750 | 0.8956 | 0.9449 |
| 0.1382 | 7.65 | 1500 | 0.2798 | 0.9034 | 0.9492 |
| 0.0899 | 10.2 | 2000 | 0.4193 | 0.882 | 0.9373 |
| 0.0624 | 12.76 | 2500 | 0.5203 | 0.864 | 0.9270 |
| 0.0417 | 15.31 | 3000 | 0.6338 | 0.8618 | 0.9257 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuned-mlm_mini", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "train", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.86176, "name": "Accuracy"}, {"type": "f1", "value": 0.925747679614988, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,858 |
dascim/greekbart
|
dascim
|
fill-mask
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"bart",
"fill-mask",
"gr",
"arxiv:2304.00869",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-14T12:03:48Z |
2024-10-15T07:49:37+00:00
| 20 | 0 |
---
language:
- gr
library_name: transformers
license: mit
pipeline_tag: fill-mask
tags:
- summarization
- bart
---
# GreekBART: The First Pretrained Greek Sequence-to-Sequence Model
## Introduction
GreekBART is a Greek sequence to sequence pretrained model based on [BART](https://huggingface.co/facebook/bart-large).
GreekBART is pretrained by learning to reconstruct a corrupted input sentence. A corpus of 76.9GB of Greek raw text is used to carry out the pretraining.
Unlike already existing BERT-based Greek language model (GreekBERT), GreekBART is particularly well-suited for generative tasks (such as abstractive summarization), since not only its encoder but also its decoder is pretrained.
In addition to base GreekBART that is pretrained from scratch on the reconstruction, we finetune it as wll on three tasks: `greekbart-news24-abstract` that can generate an abstract given a Greek news article, `greekbart-news24-title` that can generate an title given a Greek news article, and `greekbart-sentiment-classification` finetuned on a binary sentinement classification task.
| Model | Architecture | #layers | #params |
| ------------- |:-------------:| :-----:|:-----:|
| [GreekBART](https://huggingface.co/dascim/greekbart) | BASE | 12 | 165M |
| [GreekBART Abstract](https://huggingface.co/dascim/greekbart-news24-abstract) | BASE | 12 | 165M |
| [GreekBART Title](https://huggingface.co/dascim/greekbart-news24-title) | BASE | 12 | 165M |
| [GreekBART Sentiment Classification](https://huggingface.co/dascim/greekbart-sentiment-classification) | BASE | 12 | 165M |
<br>
paper: https://arxiv.org/pdf/2304.00869 \
github: https://github.com/iakovosevdaimon/GreekBART
## Usage
### Mask Prediction
```python
from transformers import pipeline
greekbart_fill_mask = pipeline("fill-mask", model="dascim/greekbart", tokenizer="dascim/greekbart")
results = greekbart_fill_mask("Η πρωτεύουσα της Ελλάδας είναι η <mask>")
results[0]
# {'score': 0.597200870513916, 'token': 7062, 'token_str': 'Αθήνα', 'sequence': 'Η πρωτεύουσα της Ελλάδας είναι η Αθήνα'},
```
### Abstract Generation
```python
text_sentence = 'Στην κατάθεση νοσηλεύτριας του Καραμανδάνειου Νοσοκομείου Πάτρας Παναγιώτας Τσεντούρου, η οποία εργαζόταν όταν εισήχθη στις 8 Απριλίου 2021 η Τζωρτζίνα, προχώρησε η διαδικασία ενώπιον του ΜΟΔ που δικάζει τη Ρούλα Πισπιρίγκου. Η νοσηλεύτρια κατέθεσε πως κατά την εισαγωγή του παιδιού "μου ανέφεραν πως είναι ένα παιδάκι που έχει χάσει τα αδελφάκια του και ότι είναι ιδιαίτερη περίπτωση" και εξιστόρησε τα γεγονότα της ημέρας εισαγωγής και της επομένης που η ίδια είχε βάρδια στην παιδιατρική κλινική.'
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM
)
tokenizer = AutoTokenizer.from_pretrained("dascim/greekbart-news24-abstract")
model = AutoModelForSeq2SeqLM.from_pretrained("dascim/greekbart-news24-abstract")
input_ids = tokenizer.encode(text_sentence, add_special_tokens=True, return_tensors='pt')
model.eval()
predict = model.generate(input_ids, max_length=100)[0]
tokenizer.decode(predict, skip_special_tokens=True)
#'Η νοσηλεύτρια κατέθεσε πως κατά την εισαγωγή του παιδιού "μου ανέφεραν πως είναι ένα παιδάκι που έχει χάσει τα αδελφάκια του και ότι είναι ιδιαίτερη περίπτωση".'
```
### Title Generation
```python
text_sentence = 'Στην κατάθεση νοσηλεύτριας του Καραμανδάνειου Νοσοκομείου Πάτρας Παναγιώτας Τσεντούρου, η οποία εργαζόταν όταν εισήχθη στις 8 Απριλίου 2021 η Τζωρτζίνα, προχώρησε η διαδικασία ενώπιον του ΜΟΔ που δικάζει τη Ρούλα Πισπιρίγκου. Η νοσηλεύτρια κατέθεσε πως κατά την εισαγωγή του παιδιού "μου ανέφεραν πως είναι ένα παιδάκι που έχει χάσει τα αδελφάκια του και ότι είναι ιδιαίτερη περίπτωση" και εξιστόρησε τα γεγονότα της ημέρας εισαγωγής και της επομένης που η ίδια είχε βάρδια στην παιδιατρική κλινική.'
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM
)
tokenizer = AutoTokenizer.from_pretrained("dascim/greekbart-news24-title")
model = AutoModelForSeq2SeqLM.from_pretrained("dascim/greekbart-news24-title")
input_ids = tokenizer.encode(text_sentence, add_special_tokens=True, return_tensors='pt')
model.eval()
predict = model.generate(input_ids, max_length=100)[0]
tokenizer.decode(predict, skip_special_tokens=True)
# 'Πάτρα: Κατάθεση νοσηλεύτριας για την εισαγωγή της Τζωρτζίνας στο νοσοκομείο'
```
### Sentiment Prediction
```python
text_sentence = "Ο ελληνικός πολιτισμός είναι ένας από τους πιο πλούσιους και αναγνωρισμένους πολιτισμούς."
from transformers import (
AutoTokenizer,
AutoModelForSequenceClassification
)
tokenizer = AutoTokenizer.from_pretrained("dascim/greekbart-sentiment-classification")
model = AutoModelForSequenceClassification.from_pretrained("dascim/greekbart-sentiment-classification")
input_ids = tokenizer.encode(text_sentence, add_special_tokens=True, return_tensors='pt')
model.eval()
predict = model(input_ids)[0]
print("negative" if predict.argmax(dim=-1).item()==1 else "positive")
# positive
```
## Authors
GreekBART was trained and evaluated ar École Polytechnique by Iakovos Evdaimon, Hadi Abdine, Christos Xypolopoulos, Stamatis Outsios, Michalis Vazirgiannis and Giorgos Stamou.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{evdaimon-etal-2024-greekbart,
title = "{G}reek{BART}: The First Pretrained {G}reek Sequence-to-Sequence Model",
author = "Evdaimon, Iakovos and
Abdine, Hadi and
Xypolopoulos, Christos and
Outsios, Stamatis and
Vazirgiannis, Michalis and
Stamou, Giorgos",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.700",
pages = "7949--7962",
}
```
| null |
Non_BioNLP
|
# GreekBART: The First Pretrained Greek Sequence-to-Sequence Model
## Introduction
GreekBART is a Greek sequence to sequence pretrained model based on [BART](https://huggingface.co/facebook/bart-large).
GreekBART is pretrained by learning to reconstruct a corrupted input sentence. A corpus of 76.9GB of Greek raw text is used to carry out the pretraining.
Unlike already existing BERT-based Greek language model (GreekBERT), GreekBART is particularly well-suited for generative tasks (such as abstractive summarization), since not only its encoder but also its decoder is pretrained.
In addition to base GreekBART that is pretrained from scratch on the reconstruction, we finetune it as wll on three tasks: `greekbart-news24-abstract` that can generate an abstract given a Greek news article, `greekbart-news24-title` that can generate an title given a Greek news article, and `greekbart-sentiment-classification` finetuned on a binary sentinement classification task.
| Model | Architecture | #layers | #params |
| ------------- |:-------------:| :-----:|:-----:|
| [GreekBART](https://huggingface.co/dascim/greekbart) | BASE | 12 | 165M |
| [GreekBART Abstract](https://huggingface.co/dascim/greekbart-news24-abstract) | BASE | 12 | 165M |
| [GreekBART Title](https://huggingface.co/dascim/greekbart-news24-title) | BASE | 12 | 165M |
| [GreekBART Sentiment Classification](https://huggingface.co/dascim/greekbart-sentiment-classification) | BASE | 12 | 165M |
<br>
paper: https://arxiv.org/pdf/2304.00869 \
github: https://github.com/iakovosevdaimon/GreekBART
## Usage
### Mask Prediction
```python
from transformers import pipeline
greekbart_fill_mask = pipeline("fill-mask", model="dascim/greekbart", tokenizer="dascim/greekbart")
results = greekbart_fill_mask("Η πρωτεύουσα της Ελλάδας είναι η <mask>")
results[0]
# {'score': 0.597200870513916, 'token': 7062, 'token_str': 'Αθήνα', 'sequence': 'Η πρωτεύουσα της Ελλάδας είναι η Αθήνα'},
```
### Abstract Generation
```python
text_sentence = 'Στην κατάθεση νοσηλεύτριας του Καραμανδάνειου Νοσοκομείου Πάτρας Παναγιώτας Τσεντούρου, η οποία εργαζόταν όταν εισήχθη στις 8 Απριλίου 2021 η Τζωρτζίνα, προχώρησε η διαδικασία ενώπιον του ΜΟΔ που δικάζει τη Ρούλα Πισπιρίγκου. Η νοσηλεύτρια κατέθεσε πως κατά την εισαγωγή του παιδιού "μου ανέφεραν πως είναι ένα παιδάκι που έχει χάσει τα αδελφάκια του και ότι είναι ιδιαίτερη περίπτωση" και εξιστόρησε τα γεγονότα της ημέρας εισαγωγής και της επομένης που η ίδια είχε βάρδια στην παιδιατρική κλινική.'
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM
)
tokenizer = AutoTokenizer.from_pretrained("dascim/greekbart-news24-abstract")
model = AutoModelForSeq2SeqLM.from_pretrained("dascim/greekbart-news24-abstract")
input_ids = tokenizer.encode(text_sentence, add_special_tokens=True, return_tensors='pt')
model.eval()
predict = model.generate(input_ids, max_length=100)[0]
tokenizer.decode(predict, skip_special_tokens=True)
#'Η νοσηλεύτρια κατέθεσε πως κατά την εισαγωγή του παιδιού "μου ανέφεραν πως είναι ένα παιδάκι που έχει χάσει τα αδελφάκια του και ότι είναι ιδιαίτερη περίπτωση".'
```
### Title Generation
```python
text_sentence = 'Στην κατάθεση νοσηλεύτριας του Καραμανδάνειου Νοσοκομείου Πάτρας Παναγιώτας Τσεντούρου, η οποία εργαζόταν όταν εισήχθη στις 8 Απριλίου 2021 η Τζωρτζίνα, προχώρησε η διαδικασία ενώπιον του ΜΟΔ που δικάζει τη Ρούλα Πισπιρίγκου. Η νοσηλεύτρια κατέθεσε πως κατά την εισαγωγή του παιδιού "μου ανέφεραν πως είναι ένα παιδάκι που έχει χάσει τα αδελφάκια του και ότι είναι ιδιαίτερη περίπτωση" και εξιστόρησε τα γεγονότα της ημέρας εισαγωγής και της επομένης που η ίδια είχε βάρδια στην παιδιατρική κλινική.'
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM
)
tokenizer = AutoTokenizer.from_pretrained("dascim/greekbart-news24-title")
model = AutoModelForSeq2SeqLM.from_pretrained("dascim/greekbart-news24-title")
input_ids = tokenizer.encode(text_sentence, add_special_tokens=True, return_tensors='pt')
model.eval()
predict = model.generate(input_ids, max_length=100)[0]
tokenizer.decode(predict, skip_special_tokens=True)
# 'Πάτρα: Κατάθεση νοσηλεύτριας για την εισαγωγή της Τζωρτζίνας στο νοσοκομείο'
```
### Sentiment Prediction
```python
text_sentence = "Ο ελληνικός πολιτισμός είναι ένας από τους πιο πλούσιους και αναγνωρισμένους πολιτισμούς."
from transformers import (
AutoTokenizer,
AutoModelForSequenceClassification
)
tokenizer = AutoTokenizer.from_pretrained("dascim/greekbart-sentiment-classification")
model = AutoModelForSequenceClassification.from_pretrained("dascim/greekbart-sentiment-classification")
input_ids = tokenizer.encode(text_sentence, add_special_tokens=True, return_tensors='pt')
model.eval()
predict = model(input_ids)[0]
print("negative" if predict.argmax(dim=-1).item()==1 else "positive")
# positive
```
## Authors
GreekBART was trained and evaluated ar École Polytechnique by Iakovos Evdaimon, Hadi Abdine, Christos Xypolopoulos, Stamatis Outsios, Michalis Vazirgiannis and Giorgos Stamou.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{evdaimon-etal-2024-greekbart,
title = "{G}reek{BART}: The First Pretrained {G}reek Sequence-to-Sequence Model",
author = "Evdaimon, Iakovos and
Abdine, Hadi and
Xypolopoulos, Christos and
Outsios, Stamatis and
Vazirgiannis, Michalis and
Stamou, Giorgos",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.700",
pages = "7949--7962",
}
```
|
{"language": ["gr"], "library_name": "transformers", "license": "mit", "pipeline_tag": "fill-mask", "tags": ["summarization", "bart"]}
|
task
|
[
"SUMMARIZATION"
] | 46,859 |
Volavion/bert-base-multilingual-uncased-temperature-cls
|
Volavion
| null |
[
"safetensors",
"bert",
"en",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:mit",
"region:us"
] | 2025-01-15T10:27:50Z |
2025-01-15T11:01:31+00:00
| 18 | 1 |
---
base_model:
- google-bert/bert-base-multilingual-uncased
language:
- en
license: mit
---
# BERT-Based Classification Model for Optimal Temperature Selection
This model leverages a BERT-based classification model to analyze input prompts and identify the most suitable generation temperature, enhancing text generation quality and relevance from our paper related to temperature.
## Overview
The model classifies input text into six distinct abilities, providing a probability distribution for each:
- **Causal Reasoning**
- **Creativity**
- **In-Context Learning**
- **Instruction Following**
- **Machine Translation**
- **Summarization**
## Features
- **Pre-trained Model**: Uses the multilingual BERT model: `Volavion/bert-base-multilingual-uncased-Temperature-CLS`.
- **Tokenization**: Processes text inputs into numerical formats compatible with the model.
- **Classification Output**: Provides probabilities for each class, allowing precise evaluation of the prompt's capabilities.
## Installation
1. Clone the repository if necessary:
```bash
git clone https://huggingface.co/Volavion/bert-base-multilingual-uncased-temperature-cls
cd bert-base-multilingual-uncased-temperature-cls
```
2. Install the required Python libraries:
```bash
pip install transformers torch numpy
```
## Usage
1. Load the tokenizer and model:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "Volavion/bert-base-multilingual-uncased-Temperature-CLS"
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
2. Tokenize your input text:
```python
input_text = "Your input prompt here."
encoded_dict = tokenizer.encode_plus(
input_text,
add_special_tokens=True,
max_length=512,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors="pt"
)
```
3. Perform inference:
```python
import torch
import numpy as np
input_ids = encoded_dict["input_ids"].to(device)
attention_mask = encoded_dict["attention_mask"].to(device)
model.eval()
with torch.no_grad():
outputs = model(input_ids, attention_mask=attention_mask)
logits = outputs.logits.cpu().numpy()
probabilities = np.exp(logits - np.max(logits, axis=1, keepdims=True))
probabilities /= np.sum(probabilities, axis=1, keepdims=True)
```
4. Map probabilities to abilities:
```python
ability_mapping = {0: "Causal Reasoning", 1: "Creativity", 2: "In-Context Learning",
3: "Instruction Following", 4: "Machine Translation", 5: "Summarization"}
for prob, ability in zip(probabilities[0], ability_mapping.values()):
print(f"{ability}: {prob*100:.2f}%")
```
## Example Output
```plaintext
Ability Classification Probabilities:
Causal Reasoning: 15.30%
Creativity: 20.45%
In-Context Learning: 18.22%
Instruction Following: 12.78%
Machine Translation: 21.09%
Summarization: 12.16%
```
## Device Compatibility
The model supports GPU acceleration for faster inference. It will automatically detect and utilize a GPU if available; otherwise, it defaults to CPU.
## Contributing
Contributions are welcome! Feel free to fork the repository, create a branch, and submit a pull request.
## License
This project is licensed under the [MIT License](LICENSE).
| null |
Non_BioNLP
|
# BERT-Based Classification Model for Optimal Temperature Selection
This model leverages a BERT-based classification model to analyze input prompts and identify the most suitable generation temperature, enhancing text generation quality and relevance from our paper related to temperature.
## Overview
The model classifies input text into six distinct abilities, providing a probability distribution for each:
- **Causal Reasoning**
- **Creativity**
- **In-Context Learning**
- **Instruction Following**
- **Machine Translation**
- **Summarization**
## Features
- **Pre-trained Model**: Uses the multilingual BERT model: `Volavion/bert-base-multilingual-uncased-Temperature-CLS`.
- **Tokenization**: Processes text inputs into numerical formats compatible with the model.
- **Classification Output**: Provides probabilities for each class, allowing precise evaluation of the prompt's capabilities.
## Installation
1. Clone the repository if necessary:
```bash
git clone https://huggingface.co/Volavion/bert-base-multilingual-uncased-temperature-cls
cd bert-base-multilingual-uncased-temperature-cls
```
2. Install the required Python libraries:
```bash
pip install transformers torch numpy
```
## Usage
1. Load the tokenizer and model:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "Volavion/bert-base-multilingual-uncased-Temperature-CLS"
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
2. Tokenize your input text:
```python
input_text = "Your input prompt here."
encoded_dict = tokenizer.encode_plus(
input_text,
add_special_tokens=True,
max_length=512,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors="pt"
)
```
3. Perform inference:
```python
import torch
import numpy as np
input_ids = encoded_dict["input_ids"].to(device)
attention_mask = encoded_dict["attention_mask"].to(device)
model.eval()
with torch.no_grad():
outputs = model(input_ids, attention_mask=attention_mask)
logits = outputs.logits.cpu().numpy()
probabilities = np.exp(logits - np.max(logits, axis=1, keepdims=True))
probabilities /= np.sum(probabilities, axis=1, keepdims=True)
```
4. Map probabilities to abilities:
```python
ability_mapping = {0: "Causal Reasoning", 1: "Creativity", 2: "In-Context Learning",
3: "Instruction Following", 4: "Machine Translation", 5: "Summarization"}
for prob, ability in zip(probabilities[0], ability_mapping.values()):
print(f"{ability}: {prob*100:.2f}%")
```
## Example Output
```plaintext
Ability Classification Probabilities:
Causal Reasoning: 15.30%
Creativity: 20.45%
In-Context Learning: 18.22%
Instruction Following: 12.78%
Machine Translation: 21.09%
Summarization: 12.16%
```
## Device Compatibility
The model supports GPU acceleration for faster inference. It will automatically detect and utilize a GPU if available; otherwise, it defaults to CPU.
## Contributing
Contributions are welcome! Feel free to fork the repository, create a branch, and submit a pull request.
## License
This project is licensed under the [MIT License](LICENSE).
|
{"base_model": ["google-bert/bert-base-multilingual-uncased"], "language": ["en"], "license": "mit"}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 46,860 |
r4ghu/distilbert-base-uncased-finetuned-clinc
|
r4ghu
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-12T05:42:37Z |
2023-09-13T01:19:35+00:00
| 12 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.9196774193548387
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7651
- Accuracy: 0.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2939 | 1.0 | 318 | 3.2756 | 0.7310 |
| 2.6165 | 2.0 | 636 | 1.8607 | 0.8365 |
| 1.5359 | 3.0 | 954 | 1.1427 | 0.8965 |
| 1.0074 | 4.0 | 1272 | 0.8470 | 0.9132 |
| 0.7906 | 5.0 | 1590 | 0.7651 | 0.9197 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7651
- Accuracy: 0.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2939 | 1.0 | 318 | 3.2756 | 0.7310 |
| 2.6165 | 2.0 | 636 | 1.8607 | 0.8365 |
| 1.5359 | 3.0 | 954 | 1.1427 | 0.8965 |
| 1.0074 | 4.0 | 1272 | 0.8470 | 0.9132 |
| 0.7906 | 5.0 | 1590 | 0.7651 | 0.9197 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
{"base_model": "distilbert-base-uncased", "datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9196774193548387, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,861 |
RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf
|
RichardErkhov
| null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | 2024-08-02T17:14:03Z |
2024-08-03T00:32:52+00:00
| 25 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Giraffe-13b-32k-v3 - GGUF
- Model creator: https://huggingface.co/abacusai/
- Original model: https://huggingface.co/abacusai/Giraffe-13b-32k-v3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Giraffe-13b-32k-v3.Q2_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q2_K.gguf) | Q2_K | 4.52GB |
| [Giraffe-13b-32k-v3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [Giraffe-13b-32k-v3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [Giraffe-13b-32k-v3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [Giraffe-13b-32k-v3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [Giraffe-13b-32k-v3.Q3_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K.gguf) | Q3_K | 5.9GB |
| [Giraffe-13b-32k-v3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [Giraffe-13b-32k-v3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [Giraffe-13b-32k-v3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [Giraffe-13b-32k-v3.Q4_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_0.gguf) | Q4_0 | 6.86GB |
| [Giraffe-13b-32k-v3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [Giraffe-13b-32k-v3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [Giraffe-13b-32k-v3.Q4_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_K.gguf) | Q4_K | 7.33GB |
| [Giraffe-13b-32k-v3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [Giraffe-13b-32k-v3.Q4_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_1.gguf) | Q4_1 | 7.61GB |
| [Giraffe-13b-32k-v3.Q5_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_0.gguf) | Q5_0 | 8.36GB |
| [Giraffe-13b-32k-v3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [Giraffe-13b-32k-v3.Q5_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_K.gguf) | Q5_K | 8.6GB |
| [Giraffe-13b-32k-v3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [Giraffe-13b-32k-v3.Q5_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_1.gguf) | Q5_1 | 9.1GB |
| [Giraffe-13b-32k-v3.Q6_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q6_K.gguf) | Q6_K | 9.95GB |
| [Giraffe-13b-32k-v3.Q8_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q8_0.gguf) | Q8_0 | 12.88GB |
Original model description:
---
datasets:
- togethercomputer/RedPajama-Data-1T
- shahules786/orca-chat
- anon8231489123/ShareGPT_Vicuna_unfiltered
tags:
- llama2
license: llama2
---

# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Llama2 13B finetuned to process longer (32K) contexts using interpolation and then further instruct finetuned with ShareGPT and Orca-Chat.
## Model Details
### Model Description
This is next version of the model trained and evaluated as part of the experiments described in the repo
[http://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context). This version
was trained with a scaling factor of 8 and shows better reasoning and math abilites on State of the Art benchmarks
- **Developed by:** [Abacus.AI](https://abacus.ai)
- **Model type:** Transformer based autoregressive causal language model
- **License:** Llama 2 Community License: https://github.com/facebookresearch/llama/blob/main/LICENSE
- **Finetuned from model:** Llama V2 13B
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [http://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context)
### Direct Use
Since the model is instruct finetuned it can also be directly used for various prompted tasks. We have tested
it on open book question answering using the long context to supply search results.
## Bias, Risks, and Limitations
The model has not been evaluated for safety and is only intended for research and experiments.
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Giraffe-13b-32k-v3 - GGUF
- Model creator: https://huggingface.co/abacusai/
- Original model: https://huggingface.co/abacusai/Giraffe-13b-32k-v3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Giraffe-13b-32k-v3.Q2_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q2_K.gguf) | Q2_K | 4.52GB |
| [Giraffe-13b-32k-v3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [Giraffe-13b-32k-v3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [Giraffe-13b-32k-v3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [Giraffe-13b-32k-v3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [Giraffe-13b-32k-v3.Q3_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K.gguf) | Q3_K | 5.9GB |
| [Giraffe-13b-32k-v3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [Giraffe-13b-32k-v3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [Giraffe-13b-32k-v3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [Giraffe-13b-32k-v3.Q4_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_0.gguf) | Q4_0 | 6.86GB |
| [Giraffe-13b-32k-v3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [Giraffe-13b-32k-v3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [Giraffe-13b-32k-v3.Q4_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_K.gguf) | Q4_K | 7.33GB |
| [Giraffe-13b-32k-v3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [Giraffe-13b-32k-v3.Q4_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_1.gguf) | Q4_1 | 7.61GB |
| [Giraffe-13b-32k-v3.Q5_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_0.gguf) | Q5_0 | 8.36GB |
| [Giraffe-13b-32k-v3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [Giraffe-13b-32k-v3.Q5_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_K.gguf) | Q5_K | 8.6GB |
| [Giraffe-13b-32k-v3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [Giraffe-13b-32k-v3.Q5_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_1.gguf) | Q5_1 | 9.1GB |
| [Giraffe-13b-32k-v3.Q6_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q6_K.gguf) | Q6_K | 9.95GB |
| [Giraffe-13b-32k-v3.Q8_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q8_0.gguf) | Q8_0 | 12.88GB |
Original model description:
---
datasets:
- togethercomputer/RedPajama-Data-1T
- shahules786/orca-chat
- anon8231489123/ShareGPT_Vicuna_unfiltered
tags:
- llama2
license: llama2
---

# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Llama2 13B finetuned to process longer (32K) contexts using interpolation and then further instruct finetuned with ShareGPT and Orca-Chat.
## Model Details
### Model Description
This is next version of the model trained and evaluated as part of the experiments described in the repo
[http://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context). This version
was trained with a scaling factor of 8 and shows better reasoning and math abilites on State of the Art benchmarks
- **Developed by:** [Abacus.AI](https://abacus.ai)
- **Model type:** Transformer based autoregressive causal language model
- **License:** Llama 2 Community License: https://github.com/facebookresearch/llama/blob/main/LICENSE
- **Finetuned from model:** Llama V2 13B
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [http://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context)
### Direct Use
Since the model is instruct finetuned it can also be directly used for various prompted tasks. We have tested
it on open book question answering using the long context to supply search results.
## Bias, Risks, and Limitations
The model has not been evaluated for safety and is only intended for research and experiments.
|
{}
|
task
|
[
"QUESTION_ANSWERING"
] | 46,862 |
gokuls/mobilebert_sa_GLUE_Experiment_data_aug_wnli_128
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-03T16:11:58Z |
2023-02-03T16:40:16+00:00
| 129 | 0 |
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: mobilebert_sa_GLUE_Experiment_data_aug_wnli_128
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- type: accuracy
value: 0.14084507042253522
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_data_aug_wnli_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9522
- Accuracy: 0.1408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6679 | 1.0 | 435 | 1.9522 | 0.1408 |
| 0.4943 | 2.0 | 870 | 3.6522 | 0.1127 |
| 0.4405 | 3.0 | 1305 | 4.5118 | 0.1268 |
| 0.4086 | 4.0 | 1740 | 6.0437 | 0.1127 |
| 0.3818 | 5.0 | 2175 | 9.5931 | 0.1127 |
| 0.3492 | 6.0 | 2610 | 6.9665 | 0.1408 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_data_aug_wnli_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9522
- Accuracy: 0.1408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6679 | 1.0 | 435 | 1.9522 | 0.1408 |
| 0.4943 | 2.0 | 870 | 3.6522 | 0.1127 |
| 0.4405 | 3.0 | 1305 | 4.5118 | 0.1268 |
| 0.4086 | 4.0 | 1740 | 6.0437 | 0.1127 |
| 0.3818 | 5.0 | 2175 | 9.5931 | 0.1127 |
| 0.3492 | 6.0 | 2610 | 6.9665 | 0.1408 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mobilebert_sa_GLUE_Experiment_data_aug_wnli_128", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE WNLI", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.14084507042253522, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,863 |
RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf
|
RichardErkhov
| null |
[
"gguf",
"arxiv:2402.17733",
"endpoints_compatible",
"region:us"
] | 2024-05-11T10:07:33Z |
2024-05-11T23:15:22+00:00
| 102 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TowerBase-7B-v0.1 - GGUF
- Model creator: https://huggingface.co/Unbabel/
- Original model: https://huggingface.co/Unbabel/TowerBase-7B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TowerBase-7B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q2_K.gguf) | Q2_K | 2.36GB |
| [TowerBase-7B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [TowerBase-7B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [TowerBase-7B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [TowerBase-7B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [TowerBase-7B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K.gguf) | Q3_K | 3.07GB |
| [TowerBase-7B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [TowerBase-7B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [TowerBase-7B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [TowerBase-7B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_0.gguf) | Q4_0 | 3.56GB |
| [TowerBase-7B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [TowerBase-7B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [TowerBase-7B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_K.gguf) | Q4_K | 3.8GB |
| [TowerBase-7B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [TowerBase-7B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_1.gguf) | Q4_1 | 3.95GB |
| [TowerBase-7B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_0.gguf) | Q5_0 | 4.33GB |
| [TowerBase-7B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [TowerBase-7B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_K.gguf) | Q5_K | 4.45GB |
| [TowerBase-7B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [TowerBase-7B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_1.gguf) | Q5_1 | 4.72GB |
| [TowerBase-7B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q6_K.gguf) | Q6_K | 5.15GB |
Original model description:
---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
model-index:
- name: TowerBase-7B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 51.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.29
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 13.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
---
# Model Card for TowerBase-7B-v0.1
## Model Details
### Model Description
TowerBase-7B is a language model that results from continuing the pretraining of Llama 2 on a mix of 20 billion tokens of monolingual data in ten different languages — English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian — and bilingual data. TowerBase-7B-v0.1 is the first model in the series.
The resulting model shows improved performance on the supported languages, while maintaining Llama 2's capabilities on English. It is particularly well-suited for fine-tuning on translation and related tasks: check out [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1).
We will release more details in the upcoming technical report.
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model built on top of Llama 2 by continuing pretraining on multilingual data.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Intended uses & limitations
The model is intended for research purposes in the 10 languages it supports.
The model is able to perform well on translation and related tasks (e.g., APE, GEC) on a few-shot regime.
It can also be fine-tuned to perform these tasks in a zero-shot fashion (see [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1), as well as other multilingual tasks.
### Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports.
## Bias, Risks, and Limitations
TowerBase-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Unbabel/TowerBase-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "English: My name is TowerBase.\nPortuguese:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training Data
Filtered versions of [mc4](https://huggingface.co/datasets/mc4) and bilingual data from various sources (e.g., [OPUS](https://opus.nlpl.eu/)).
## Citation
```bibtex
@misc{tower_llm_2024,
title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
year={2024},
eprint={2402.17733},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TowerBase-7B-v0.1 - GGUF
- Model creator: https://huggingface.co/Unbabel/
- Original model: https://huggingface.co/Unbabel/TowerBase-7B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TowerBase-7B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q2_K.gguf) | Q2_K | 2.36GB |
| [TowerBase-7B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [TowerBase-7B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [TowerBase-7B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [TowerBase-7B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [TowerBase-7B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K.gguf) | Q3_K | 3.07GB |
| [TowerBase-7B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [TowerBase-7B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [TowerBase-7B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [TowerBase-7B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_0.gguf) | Q4_0 | 3.56GB |
| [TowerBase-7B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [TowerBase-7B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [TowerBase-7B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_K.gguf) | Q4_K | 3.8GB |
| [TowerBase-7B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [TowerBase-7B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_1.gguf) | Q4_1 | 3.95GB |
| [TowerBase-7B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_0.gguf) | Q5_0 | 4.33GB |
| [TowerBase-7B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [TowerBase-7B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_K.gguf) | Q5_K | 4.45GB |
| [TowerBase-7B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [TowerBase-7B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_1.gguf) | Q5_1 | 4.72GB |
| [TowerBase-7B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q6_K.gguf) | Q6_K | 5.15GB |
Original model description:
---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
model-index:
- name: TowerBase-7B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 51.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.29
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 13.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
---
# Model Card for TowerBase-7B-v0.1
## Model Details
### Model Description
TowerBase-7B is a language model that results from continuing the pretraining of Llama 2 on a mix of 20 billion tokens of monolingual data in ten different languages — English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian — and bilingual data. TowerBase-7B-v0.1 is the first model in the series.
The resulting model shows improved performance on the supported languages, while maintaining Llama 2's capabilities on English. It is particularly well-suited for fine-tuning on translation and related tasks: check out [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1).
We will release more details in the upcoming technical report.
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model built on top of Llama 2 by continuing pretraining on multilingual data.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Intended uses & limitations
The model is intended for research purposes in the 10 languages it supports.
The model is able to perform well on translation and related tasks (e.g., APE, GEC) on a few-shot regime.
It can also be fine-tuned to perform these tasks in a zero-shot fashion (see [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1), as well as other multilingual tasks.
### Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports.
## Bias, Risks, and Limitations
TowerBase-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Unbabel/TowerBase-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "English: My name is TowerBase.\nPortuguese:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training Data
Filtered versions of [mc4](https://huggingface.co/datasets/mc4) and bilingual data from various sources (e.g., [OPUS](https://opus.nlpl.eu/)).
## Citation
```bibtex
@misc{tower_llm_2024,
title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
year={2024},
eprint={2402.17733},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{}
|
task
|
[
"TRANSLATION"
] | 46,864 |
naksu/distilbert-base-uncased-finetuned-sst2
|
naksu
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-23T06:33:51Z |
2023-01-23T18:15:34+00:00
| 114 | 0 |
---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: sst2
split: train
args: sst2
metrics:
- type: accuracy
value: 0.8623529411764705
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4654
- Accuracy: 0.8624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 27 | 0.4003 | 0.8224 |
| No log | 2.0 | 54 | 0.3516 | 0.8459 |
| No log | 3.0 | 81 | 0.4151 | 0.8376 |
| No log | 4.0 | 108 | 0.4419 | 0.8635 |
| No log | 5.0 | 135 | 0.4654 | 0.8624 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4654
- Accuracy: 0.8624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 27 | 0.4003 | 0.8224 |
| No log | 2.0 | 54 | 0.3516 | 0.8459 |
| No log | 3.0 | 81 | 0.4151 | 0.8376 |
| No log | 4.0 | 108 | 0.4419 | 0.8635 |
| No log | 5.0 | 135 | 0.4654 | 0.8624 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "sst2", "split": "train", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.8623529411764705, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,865 |
fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315
|
fine-tuned
|
feature-extraction
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-24T15:37:03Z |
2024-05-24T15:37:35+00:00
| 9 | 0 |
---
datasets:
- fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
custom
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| null |
Non_BioNLP
|
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
custom
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
{"datasets": ["fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,866 |
KarelDO/lstm.CEBaB_confounding.observational.absa.5-class.seed_43
|
KarelDO
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:OpenTable",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-10-14T04:31:04Z |
2022-10-14T04:32:12+00:00
| 20 | 0 |
---
datasets:
- OpenTable
language:
- en
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: lstm.CEBaB_confounding.observational.absa.5-class.seed_43
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: OpenTable OPENTABLE-ABSA
type: OpenTable
args: opentable-absa
metrics:
- type: accuracy
value: 0.6611587107303142
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lstm.CEBaB_confounding.observational.absa.5-class.seed_43
This model is a fine-tuned version of [lstm](https://huggingface.co/lstm) on the OpenTable OPENTABLE-ABSA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8687
- Accuracy: 0.6612
- Macro-f1: 0.6553
- Weighted-macro-f1: 0.6627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lstm.CEBaB_confounding.observational.absa.5-class.seed_43
This model is a fine-tuned version of [lstm](https://huggingface.co/lstm) on the OpenTable OPENTABLE-ABSA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8687
- Accuracy: 0.6612
- Macro-f1: 0.6553
- Weighted-macro-f1: 0.6627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
|
{"datasets": ["OpenTable"], "language": ["en"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "lstm.CEBaB_confounding.observational.absa.5-class.seed_43", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "OpenTable OPENTABLE-ABSA", "type": "OpenTable", "args": "opentable-absa"}, "metrics": [{"type": "accuracy", "value": 0.6611587107303142, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,867 |
mini1013/master_cate_top_bt5_4
|
mini1013
|
text-classification
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"model-index",
"region:us"
] | 2024-12-29T14:28:52Z |
2024-12-29T14:29:14+00:00
| 8 | 0 |
---
base_model: klue/roberta-base
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: '[시세이도] NEW 싱크로 스킨 래디언트 리프팅 파운데이션 SPF30/PA++++ 30ml 130 오팔 (#M)홈>메이크업>베이스메이크업
HMALL > 뷰티 > 메이크업 > 베이스메이크업'
- text: 어뮤즈 메타 픽싱 비건 쿠션 리필 (3종 택 1) 02 누드 (#M)홈>화장품/미용>베이스메이크업>파운데이션>쿠션형 Naverstore
> 화장품/미용 > 베이스메이크업 > 파운데이션 > 쿠션형
- text: 에스쁘아 프로 테일러 파운데이션 비 글로우 30ml MinSellAmount (#M)화장품/향수>베이스메이크업>파운데이션 Gmarket
> 뷰티 > 화장품/향수 > 베이스메이크업 > 파운데이션
- text: (현대백화점) 톰 포드 뷰티 셰이드 앤 일루미네이트 소프트 래디언스 파운데이션 SPF50/PA++++ 0.4 로즈 (#M)화장품/향수>베이스메이크업>파운데이션
Gmarket > 뷰티 > 화장품/향수 > 베이스메이크업 > 파운데이션
- text: '[정샘물] 마스터클래스 래디언트 쿠션(리필포함)(+코렉팅 베이스5mlx3개)(강남점) N1아이보리 (#M)11st>메이크업>페이스메이크업>파운데이션
11st > 뷰티 > 메이크업 > 페이스메이크업 > 파운데이션'
inference: true
model-index:
- name: SetFit with klue/roberta-base
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9475307038057129
name: Accuracy
---
# SetFit with klue/roberta-base
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [klue/roberta-base](https://huggingface.co/klue/roberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [klue/roberta-base](https://huggingface.co/klue/roberta-base)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 5 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'에스쁘아 프로테일러 비글로우 스틱 파운데이션 13g 23호베이지 (#M)홈>화장품/미용>베이스메이크업>파운데이션>스틱형 Naverstore > 화장품/미용 > 베이스메이크업 > 파운데이션 > 스틱형'</li><li>'그라펜 에어커버 스틱 파운데이션 23호 베이지 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 파운데이션 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 파운데이션'</li><li>'바비 브라운 스킨 파운데이션 스틱-2.5 원 샌드 9g (#M)화장품/미용>베이스메이크업>파운데이션>크림형 Naverstore > 화장품/미용 > 베이스메이크업 > 파운데이션 > 크림형'</li></ul> |
| 1 | <ul><li>'정샘물 스킨 세팅 톤 코렉팅 베이스 40ml 글로잉 베이스 (#M)11st>메이크업>페이스메이크업>메이크업베이스 11st > 뷰티 > 메이크업 > 페이스메이크업 > 메이크업베이스'</li><li>'아이오페 퍼펙트 커버 메이크업베이스 35ml 2호 라이트퍼플 × 3개 (#M)쿠팡 홈>뷰티>메이크업>베이스 메이크업>베이스/프라이머 Coupang > 뷰티 > 메이크업 > 베이스 메이크업 > 베이스/프라이머'</li><li>'아이오페 퍼펙트 커버 베이스 35ml 2호-퍼플 (#M)홈>화장품/미용>베이스메이크업>메이크업베이스 Naverstore > 화장품/미용 > 베이스메이크업 > 메이크업베이스'</li></ul> |
| 0 | <ul><li>'헤라 글로우 래스팅 파운데이션 17C1 페탈 아이보리 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 베이스/프라이머 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 베이스/프라이머'</li><li>'[에스티 로더] 더블웨어 파운데이션 30ml SPF 10/PA++ (+프라이머 정품 ) 1W0 웜 포슬린 홈>기획 세트;홈>더블웨어;홈>더블 웨어;화장품/미용>베이스메이크업>파운데이션>리퀴드형;(#M)홈>전체상품 Naverstore > 베이스메이크업 > 파운데이션'</li><li>'에스쁘아 프로테일러 파운데이션 비 글로우 10ml 4호 베이지 × 1개 (#M)쿠팡 홈>뷰티>메이크업>베이스 메이크업>파운데이션 Coupang > 뷰티 > 로드샵 > 메이크업 > 베이스 메이크업 > 파운데이션'</li></ul> |
| 4 | <ul><li>'시세이도 스포츠 커버 파운데이션 20g S101 (#M)홈>화장품/미용>베이스메이크업>파운데이션>크림형 Naverstore > 화장품/미용 > 베이스메이크업 > 파운데이션 > 크림형'</li><li>'시세이도 스포츠 커버 파운데이션 20g S100 × 1개 Coupang > 뷰티 > 메이크업 > 베이스 메이크업 > 파운데이션;(#M)쿠팡 홈>뷰티>메이크업>베이스 메이크업>파운데이션 Coupang > 뷰티 > 메이크업 > 베이스 메이크업 > 파운데이션'</li><li>'에이지투웨니스 오리지날 샤이닝드롭 케이스+리필3개 (+커피쿠폰+폼20ml) 샤이닝드롭(화이트)23호케이스+리필3개_폼20ml (#M)화장품/미용>베이스메이크업>파운데이션>쿠션형 AD > Naverstore > 화장품/미용 > 베이스메이크업 > 파운데이션 > 크림형'</li></ul> |
| 3 | <ul><li>'매트 벨벳 스킨 컴팩트 스폰지 단품없음 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품'</li><li>'[BF적립] 엉크르 드 뽀 쿠션&리필 세트(+스탠딩 미러+5천LPOINT) 20호_15호 LOREAL > DepartmentLotteOn > 입생로랑 > Branded > 입생로랑 LOREAL > DepartmentLotteOn > 입생로랑 > Branded > 입생로랑'</li><li>'코튼 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9475 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_top_bt5_4")
# Run inference
preds = model("[시세이도] NEW 싱크로 스킨 래디언트 리프팅 파운데이션 SPF30/PA++++ 30ml 130 오팔 (#M)홈>메이크업>베이스메이크업 HMALL > 뷰티 > 메이크업 > 베이스메이크업")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 12 | 22.928 | 52 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 50 |
| 1 | 50 |
| 2 | 50 |
| 3 | 50 |
| 4 | 50 |
### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 100
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-----:|:-------------:|:---------------:|
| 0.0026 | 1 | 0.521 | - |
| 0.1279 | 50 | 0.4636 | - |
| 0.2558 | 100 | 0.42 | - |
| 0.3836 | 150 | 0.292 | - |
| 0.5115 | 200 | 0.1539 | - |
| 0.6394 | 250 | 0.0626 | - |
| 0.7673 | 300 | 0.0343 | - |
| 0.8951 | 350 | 0.0071 | - |
| 1.0230 | 400 | 0.0023 | - |
| 1.1509 | 450 | 0.0005 | - |
| 1.2788 | 500 | 0.0006 | - |
| 1.4066 | 550 | 0.0003 | - |
| 1.5345 | 600 | 0.0002 | - |
| 1.6624 | 650 | 0.0001 | - |
| 1.7903 | 700 | 0.0002 | - |
| 1.9182 | 750 | 0.0006 | - |
| 2.0460 | 800 | 0.0002 | - |
| 2.1739 | 850 | 0.0001 | - |
| 2.3018 | 900 | 0.0 | - |
| 2.4297 | 950 | 0.0 | - |
| 2.5575 | 1000 | 0.0 | - |
| 2.6854 | 1050 | 0.0 | - |
| 2.8133 | 1100 | 0.0 | - |
| 2.9412 | 1150 | 0.0 | - |
| 3.0691 | 1200 | 0.0 | - |
| 3.1969 | 1250 | 0.0 | - |
| 3.3248 | 1300 | 0.0 | - |
| 3.4527 | 1350 | 0.0007 | - |
| 3.5806 | 1400 | 0.0005 | - |
| 3.7084 | 1450 | 0.0009 | - |
| 3.8363 | 1500 | 0.0008 | - |
| 3.9642 | 1550 | 0.0003 | - |
| 4.0921 | 1600 | 0.0002 | - |
| 4.2199 | 1650 | 0.0 | - |
| 4.3478 | 1700 | 0.0 | - |
| 4.4757 | 1750 | 0.0 | - |
| 4.6036 | 1800 | 0.0 | - |
| 4.7315 | 1850 | 0.0 | - |
| 4.8593 | 1900 | 0.0 | - |
| 4.9872 | 1950 | 0.0 | - |
| 5.1151 | 2000 | 0.0 | - |
| 5.2430 | 2050 | 0.0 | - |
| 5.3708 | 2100 | 0.0 | - |
| 5.4987 | 2150 | 0.0 | - |
| 5.6266 | 2200 | 0.0 | - |
| 5.7545 | 2250 | 0.0 | - |
| 5.8824 | 2300 | 0.0 | - |
| 6.0102 | 2350 | 0.0001 | - |
| 6.1381 | 2400 | 0.0006 | - |
| 6.2660 | 2450 | 0.0 | - |
| 6.3939 | 2500 | 0.0 | - |
| 6.5217 | 2550 | 0.0 | - |
| 6.6496 | 2600 | 0.0 | - |
| 6.7775 | 2650 | 0.0 | - |
| 6.9054 | 2700 | 0.0 | - |
| 7.0332 | 2750 | 0.0 | - |
| 7.1611 | 2800 | 0.0 | - |
| 7.2890 | 2850 | 0.0 | - |
| 7.4169 | 2900 | 0.0 | - |
| 7.5448 | 2950 | 0.0 | - |
| 7.6726 | 3000 | 0.0 | - |
| 7.8005 | 3050 | 0.0 | - |
| 7.9284 | 3100 | 0.0 | - |
| 8.0563 | 3150 | 0.0 | - |
| 8.1841 | 3200 | 0.0 | - |
| 8.3120 | 3250 | 0.0 | - |
| 8.4399 | 3300 | 0.0 | - |
| 8.5678 | 3350 | 0.0 | - |
| 8.6957 | 3400 | 0.0 | - |
| 8.8235 | 3450 | 0.0 | - |
| 8.9514 | 3500 | 0.0 | - |
| 9.0793 | 3550 | 0.0 | - |
| 9.2072 | 3600 | 0.0 | - |
| 9.3350 | 3650 | 0.0 | - |
| 9.4629 | 3700 | 0.0 | - |
| 9.5908 | 3750 | 0.0 | - |
| 9.7187 | 3800 | 0.0 | - |
| 9.8465 | 3850 | 0.0 | - |
| 9.9744 | 3900 | 0.0 | - |
| 10.1023 | 3950 | 0.0 | - |
| 10.2302 | 4000 | 0.0 | - |
| 10.3581 | 4050 | 0.0 | - |
| 10.4859 | 4100 | 0.0 | - |
| 10.6138 | 4150 | 0.0 | - |
| 10.7417 | 4200 | 0.0 | - |
| 10.8696 | 4250 | 0.0 | - |
| 10.9974 | 4300 | 0.0 | - |
| 11.1253 | 4350 | 0.0 | - |
| 11.2532 | 4400 | 0.0 | - |
| 11.3811 | 4450 | 0.0 | - |
| 11.5090 | 4500 | 0.0 | - |
| 11.6368 | 4550 | 0.0 | - |
| 11.7647 | 4600 | 0.0 | - |
| 11.8926 | 4650 | 0.0 | - |
| 12.0205 | 4700 | 0.0 | - |
| 12.1483 | 4750 | 0.0 | - |
| 12.2762 | 4800 | 0.0 | - |
| 12.4041 | 4850 | 0.0 | - |
| 12.5320 | 4900 | 0.0 | - |
| 12.6598 | 4950 | 0.0 | - |
| 12.7877 | 5000 | 0.0 | - |
| 12.9156 | 5050 | 0.0 | - |
| 13.0435 | 5100 | 0.0 | - |
| 13.1714 | 5150 | 0.0 | - |
| 13.2992 | 5200 | 0.0 | - |
| 13.4271 | 5250 | 0.0 | - |
| 13.5550 | 5300 | 0.0 | - |
| 13.6829 | 5350 | 0.0 | - |
| 13.8107 | 5400 | 0.0 | - |
| 13.9386 | 5450 | 0.0 | - |
| 14.0665 | 5500 | 0.0 | - |
| 14.1944 | 5550 | 0.0 | - |
| 14.3223 | 5600 | 0.0 | - |
| 14.4501 | 5650 | 0.0 | - |
| 14.5780 | 5700 | 0.0 | - |
| 14.7059 | 5750 | 0.0 | - |
| 14.8338 | 5800 | 0.0 | - |
| 14.9616 | 5850 | 0.0 | - |
| 15.0895 | 5900 | 0.0 | - |
| 15.2174 | 5950 | 0.0 | - |
| 15.3453 | 6000 | 0.0 | - |
| 15.4731 | 6050 | 0.0 | - |
| 15.6010 | 6100 | 0.0 | - |
| 15.7289 | 6150 | 0.0 | - |
| 15.8568 | 6200 | 0.0 | - |
| 15.9847 | 6250 | 0.0 | - |
| 16.1125 | 6300 | 0.0 | - |
| 16.2404 | 6350 | 0.0 | - |
| 16.3683 | 6400 | 0.0 | - |
| 16.4962 | 6450 | 0.0 | - |
| 16.6240 | 6500 | 0.0 | - |
| 16.7519 | 6550 | 0.0 | - |
| 16.8798 | 6600 | 0.0 | - |
| 17.0077 | 6650 | 0.0 | - |
| 17.1355 | 6700 | 0.0 | - |
| 17.2634 | 6750 | 0.0 | - |
| 17.3913 | 6800 | 0.0 | - |
| 17.5192 | 6850 | 0.0 | - |
| 17.6471 | 6900 | 0.0 | - |
| 17.7749 | 6950 | 0.0 | - |
| 17.9028 | 7000 | 0.0 | - |
| 18.0307 | 7050 | 0.0 | - |
| 18.1586 | 7100 | 0.0 | - |
| 18.2864 | 7150 | 0.0 | - |
| 18.4143 | 7200 | 0.0 | - |
| 18.5422 | 7250 | 0.0 | - |
| 18.6701 | 7300 | 0.0 | - |
| 18.7980 | 7350 | 0.0 | - |
| 18.9258 | 7400 | 0.0 | - |
| 19.0537 | 7450 | 0.0 | - |
| 19.1816 | 7500 | 0.0 | - |
| 19.3095 | 7550 | 0.0004 | - |
| 19.4373 | 7600 | 0.0028 | - |
| 19.5652 | 7650 | 0.0003 | - |
| 19.6931 | 7700 | 0.0002 | - |
| 19.8210 | 7750 | 0.0 | - |
| 19.9488 | 7800 | 0.0 | - |
| 20.0767 | 7850 | 0.0 | - |
| 20.2046 | 7900 | 0.0 | - |
| 20.3325 | 7950 | 0.0 | - |
| 20.4604 | 8000 | 0.0 | - |
| 20.5882 | 8050 | 0.0 | - |
| 20.7161 | 8100 | 0.0 | - |
| 20.8440 | 8150 | 0.0 | - |
| 20.9719 | 8200 | 0.0 | - |
| 21.0997 | 8250 | 0.0 | - |
| 21.2276 | 8300 | 0.0 | - |
| 21.3555 | 8350 | 0.0 | - |
| 21.4834 | 8400 | 0.0 | - |
| 21.6113 | 8450 | 0.0 | - |
| 21.7391 | 8500 | 0.0 | - |
| 21.8670 | 8550 | 0.0 | - |
| 21.9949 | 8600 | 0.0 | - |
| 22.1228 | 8650 | 0.0 | - |
| 22.2506 | 8700 | 0.0 | - |
| 22.3785 | 8750 | 0.0 | - |
| 22.5064 | 8800 | 0.0 | - |
| 22.6343 | 8850 | 0.0 | - |
| 22.7621 | 8900 | 0.0 | - |
| 22.8900 | 8950 | 0.0 | - |
| 23.0179 | 9000 | 0.0 | - |
| 23.1458 | 9050 | 0.0 | - |
| 23.2737 | 9100 | 0.0 | - |
| 23.4015 | 9150 | 0.0 | - |
| 23.5294 | 9200 | 0.0 | - |
| 23.6573 | 9250 | 0.0 | - |
| 23.7852 | 9300 | 0.0 | - |
| 23.9130 | 9350 | 0.0 | - |
| 24.0409 | 9400 | 0.0 | - |
| 24.1688 | 9450 | 0.0 | - |
| 24.2967 | 9500 | 0.0 | - |
| 24.4246 | 9550 | 0.0 | - |
| 24.5524 | 9600 | 0.0 | - |
| 24.6803 | 9650 | 0.0 | - |
| 24.8082 | 9700 | 0.0 | - |
| 24.9361 | 9750 | 0.0 | - |
| 25.0639 | 9800 | 0.0 | - |
| 25.1918 | 9850 | 0.0 | - |
| 25.3197 | 9900 | 0.0 | - |
| 25.4476 | 9950 | 0.0 | - |
| 25.5754 | 10000 | 0.0 | - |
| 25.7033 | 10050 | 0.0 | - |
| 25.8312 | 10100 | 0.0 | - |
| 25.9591 | 10150 | 0.0 | - |
| 26.0870 | 10200 | 0.0 | - |
| 26.2148 | 10250 | 0.0 | - |
| 26.3427 | 10300 | 0.0 | - |
| 26.4706 | 10350 | 0.0 | - |
| 26.5985 | 10400 | 0.0 | - |
| 26.7263 | 10450 | 0.0 | - |
| 26.8542 | 10500 | 0.0 | - |
| 26.9821 | 10550 | 0.0 | - |
| 27.1100 | 10600 | 0.0 | - |
| 27.2379 | 10650 | 0.0 | - |
| 27.3657 | 10700 | 0.0 | - |
| 27.4936 | 10750 | 0.0 | - |
| 27.6215 | 10800 | 0.0 | - |
| 27.7494 | 10850 | 0.0 | - |
| 27.8772 | 10900 | 0.0 | - |
| 28.0051 | 10950 | 0.0 | - |
| 28.1330 | 11000 | 0.0 | - |
| 28.2609 | 11050 | 0.0 | - |
| 28.3887 | 11100 | 0.0 | - |
| 28.5166 | 11150 | 0.0 | - |
| 28.6445 | 11200 | 0.0 | - |
| 28.7724 | 11250 | 0.0 | - |
| 28.9003 | 11300 | 0.0 | - |
| 29.0281 | 11350 | 0.0 | - |
| 29.1560 | 11400 | 0.0 | - |
| 29.2839 | 11450 | 0.0 | - |
| 29.4118 | 11500 | 0.0 | - |
| 29.5396 | 11550 | 0.0 | - |
| 29.6675 | 11600 | 0.0 | - |
| 29.7954 | 11650 | 0.0 | - |
| 29.9233 | 11700 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with klue/roberta-base
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [klue/roberta-base](https://huggingface.co/klue/roberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [klue/roberta-base](https://huggingface.co/klue/roberta-base)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 5 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'에스쁘아 프로테일러 비글로우 스틱 파운데이션 13g 23호베이지 (#M)홈>화장품/미용>베이스메이크업>파운데이션>스틱형 Naverstore > 화장품/미용 > 베이스메이크업 > 파운데이션 > 스틱형'</li><li>'그라펜 에어커버 스틱 파운데이션 23호 베이지 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 파운데이션 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 파운데이션'</li><li>'바비 브라운 스킨 파운데이션 스틱-2.5 원 샌드 9g (#M)화장품/미용>베이스메이크업>파운데이션>크림형 Naverstore > 화장품/미용 > 베이스메이크업 > 파운데이션 > 크림형'</li></ul> |
| 1 | <ul><li>'정샘물 스킨 세팅 톤 코렉팅 베이스 40ml 글로잉 베이스 (#M)11st>메이크업>페이스메이크업>메이크업베이스 11st > 뷰티 > 메이크업 > 페이스메이크업 > 메이크업베이스'</li><li>'아이오페 퍼펙트 커버 메이크업베이스 35ml 2호 라이트퍼플 × 3개 (#M)쿠팡 홈>뷰티>메이크업>베이스 메이크업>베이스/프라이머 Coupang > 뷰티 > 메이크업 > 베이스 메이크업 > 베이스/프라이머'</li><li>'아이오페 퍼펙트 커버 베이스 35ml 2호-퍼플 (#M)홈>화장품/미용>베이스메이크업>메이크업베이스 Naverstore > 화장품/미용 > 베이스메이크업 > 메이크업베이스'</li></ul> |
| 0 | <ul><li>'헤라 글로우 래스팅 파운데이션 17C1 페탈 아이보리 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 베이스/프라이머 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 베이스/프라이머'</li><li>'[에스티 로더] 더블웨어 파운데이션 30ml SPF 10/PA++ (+프라이머 정품 ) 1W0 웜 포슬린 홈>기획 세트;홈>더블웨어;홈>더블 웨어;화장품/미용>베이스메이크업>파운데이션>리퀴드형;(#M)홈>전체상품 Naverstore > 베이스메이크업 > 파운데이션'</li><li>'에스쁘아 프로테일러 파운데이션 비 글로우 10ml 4호 베이지 × 1개 (#M)쿠팡 홈>뷰티>메이크업>베이스 메이크업>파운데이션 Coupang > 뷰티 > 로드샵 > 메이크업 > 베이스 메이크업 > 파운데이션'</li></ul> |
| 4 | <ul><li>'시세이도 스포츠 커버 파운데이션 20g S101 (#M)홈>화장품/미용>베이스메이크업>파운데이션>크림형 Naverstore > 화장품/미용 > 베이스메이크업 > 파운데이션 > 크림형'</li><li>'시세이도 스포츠 커버 파운데이션 20g S100 × 1개 Coupang > 뷰티 > 메이크업 > 베이스 메이크업 > 파운데이션;(#M)쿠팡 홈>뷰티>메이크업>베이스 메이크업>파운데이션 Coupang > 뷰티 > 메이크업 > 베이스 메이크업 > 파운데이션'</li><li>'에이지투웨니스 오리지날 샤이닝드롭 케이스+리필3개 (+커피쿠폰+폼20ml) 샤이닝드롭(화이트)23호케이스+리필3개_폼20ml (#M)화장품/미용>베이스메이크업>파운데이션>쿠션형 AD > Naverstore > 화장품/미용 > 베이스메이크업 > 파운데이션 > 크림형'</li></ul> |
| 3 | <ul><li>'매트 벨벳 스킨 컴팩트 스폰지 단품없음 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품'</li><li>'[BF적립] 엉크르 드 뽀 쿠션&리필 세트(+스탠딩 미러+5천LPOINT) 20호_15호 LOREAL > DepartmentLotteOn > 입생로랑 > Branded > 입생로랑 LOREAL > DepartmentLotteOn > 입생로랑 > Branded > 입생로랑'</li><li>'코튼 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9475 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_top_bt5_4")
# Run inference
preds = model("[시세이도] NEW 싱크로 스킨 래디언트 리프팅 파운데이션 SPF30/PA++++ 30ml 130 오팔 (#M)홈>메이크업>베이스메이크업 HMALL > 뷰티 > 메이크업 > 베이스메이크업")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 12 | 22.928 | 52 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 50 |
| 1 | 50 |
| 2 | 50 |
| 3 | 50 |
| 4 | 50 |
### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 100
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-----:|:-------------:|:---------------:|
| 0.0026 | 1 | 0.521 | - |
| 0.1279 | 50 | 0.4636 | - |
| 0.2558 | 100 | 0.42 | - |
| 0.3836 | 150 | 0.292 | - |
| 0.5115 | 200 | 0.1539 | - |
| 0.6394 | 250 | 0.0626 | - |
| 0.7673 | 300 | 0.0343 | - |
| 0.8951 | 350 | 0.0071 | - |
| 1.0230 | 400 | 0.0023 | - |
| 1.1509 | 450 | 0.0005 | - |
| 1.2788 | 500 | 0.0006 | - |
| 1.4066 | 550 | 0.0003 | - |
| 1.5345 | 600 | 0.0002 | - |
| 1.6624 | 650 | 0.0001 | - |
| 1.7903 | 700 | 0.0002 | - |
| 1.9182 | 750 | 0.0006 | - |
| 2.0460 | 800 | 0.0002 | - |
| 2.1739 | 850 | 0.0001 | - |
| 2.3018 | 900 | 0.0 | - |
| 2.4297 | 950 | 0.0 | - |
| 2.5575 | 1000 | 0.0 | - |
| 2.6854 | 1050 | 0.0 | - |
| 2.8133 | 1100 | 0.0 | - |
| 2.9412 | 1150 | 0.0 | - |
| 3.0691 | 1200 | 0.0 | - |
| 3.1969 | 1250 | 0.0 | - |
| 3.3248 | 1300 | 0.0 | - |
| 3.4527 | 1350 | 0.0007 | - |
| 3.5806 | 1400 | 0.0005 | - |
| 3.7084 | 1450 | 0.0009 | - |
| 3.8363 | 1500 | 0.0008 | - |
| 3.9642 | 1550 | 0.0003 | - |
| 4.0921 | 1600 | 0.0002 | - |
| 4.2199 | 1650 | 0.0 | - |
| 4.3478 | 1700 | 0.0 | - |
| 4.4757 | 1750 | 0.0 | - |
| 4.6036 | 1800 | 0.0 | - |
| 4.7315 | 1850 | 0.0 | - |
| 4.8593 | 1900 | 0.0 | - |
| 4.9872 | 1950 | 0.0 | - |
| 5.1151 | 2000 | 0.0 | - |
| 5.2430 | 2050 | 0.0 | - |
| 5.3708 | 2100 | 0.0 | - |
| 5.4987 | 2150 | 0.0 | - |
| 5.6266 | 2200 | 0.0 | - |
| 5.7545 | 2250 | 0.0 | - |
| 5.8824 | 2300 | 0.0 | - |
| 6.0102 | 2350 | 0.0001 | - |
| 6.1381 | 2400 | 0.0006 | - |
| 6.2660 | 2450 | 0.0 | - |
| 6.3939 | 2500 | 0.0 | - |
| 6.5217 | 2550 | 0.0 | - |
| 6.6496 | 2600 | 0.0 | - |
| 6.7775 | 2650 | 0.0 | - |
| 6.9054 | 2700 | 0.0 | - |
| 7.0332 | 2750 | 0.0 | - |
| 7.1611 | 2800 | 0.0 | - |
| 7.2890 | 2850 | 0.0 | - |
| 7.4169 | 2900 | 0.0 | - |
| 7.5448 | 2950 | 0.0 | - |
| 7.6726 | 3000 | 0.0 | - |
| 7.8005 | 3050 | 0.0 | - |
| 7.9284 | 3100 | 0.0 | - |
| 8.0563 | 3150 | 0.0 | - |
| 8.1841 | 3200 | 0.0 | - |
| 8.3120 | 3250 | 0.0 | - |
| 8.4399 | 3300 | 0.0 | - |
| 8.5678 | 3350 | 0.0 | - |
| 8.6957 | 3400 | 0.0 | - |
| 8.8235 | 3450 | 0.0 | - |
| 8.9514 | 3500 | 0.0 | - |
| 9.0793 | 3550 | 0.0 | - |
| 9.2072 | 3600 | 0.0 | - |
| 9.3350 | 3650 | 0.0 | - |
| 9.4629 | 3700 | 0.0 | - |
| 9.5908 | 3750 | 0.0 | - |
| 9.7187 | 3800 | 0.0 | - |
| 9.8465 | 3850 | 0.0 | - |
| 9.9744 | 3900 | 0.0 | - |
| 10.1023 | 3950 | 0.0 | - |
| 10.2302 | 4000 | 0.0 | - |
| 10.3581 | 4050 | 0.0 | - |
| 10.4859 | 4100 | 0.0 | - |
| 10.6138 | 4150 | 0.0 | - |
| 10.7417 | 4200 | 0.0 | - |
| 10.8696 | 4250 | 0.0 | - |
| 10.9974 | 4300 | 0.0 | - |
| 11.1253 | 4350 | 0.0 | - |
| 11.2532 | 4400 | 0.0 | - |
| 11.3811 | 4450 | 0.0 | - |
| 11.5090 | 4500 | 0.0 | - |
| 11.6368 | 4550 | 0.0 | - |
| 11.7647 | 4600 | 0.0 | - |
| 11.8926 | 4650 | 0.0 | - |
| 12.0205 | 4700 | 0.0 | - |
| 12.1483 | 4750 | 0.0 | - |
| 12.2762 | 4800 | 0.0 | - |
| 12.4041 | 4850 | 0.0 | - |
| 12.5320 | 4900 | 0.0 | - |
| 12.6598 | 4950 | 0.0 | - |
| 12.7877 | 5000 | 0.0 | - |
| 12.9156 | 5050 | 0.0 | - |
| 13.0435 | 5100 | 0.0 | - |
| 13.1714 | 5150 | 0.0 | - |
| 13.2992 | 5200 | 0.0 | - |
| 13.4271 | 5250 | 0.0 | - |
| 13.5550 | 5300 | 0.0 | - |
| 13.6829 | 5350 | 0.0 | - |
| 13.8107 | 5400 | 0.0 | - |
| 13.9386 | 5450 | 0.0 | - |
| 14.0665 | 5500 | 0.0 | - |
| 14.1944 | 5550 | 0.0 | - |
| 14.3223 | 5600 | 0.0 | - |
| 14.4501 | 5650 | 0.0 | - |
| 14.5780 | 5700 | 0.0 | - |
| 14.7059 | 5750 | 0.0 | - |
| 14.8338 | 5800 | 0.0 | - |
| 14.9616 | 5850 | 0.0 | - |
| 15.0895 | 5900 | 0.0 | - |
| 15.2174 | 5950 | 0.0 | - |
| 15.3453 | 6000 | 0.0 | - |
| 15.4731 | 6050 | 0.0 | - |
| 15.6010 | 6100 | 0.0 | - |
| 15.7289 | 6150 | 0.0 | - |
| 15.8568 | 6200 | 0.0 | - |
| 15.9847 | 6250 | 0.0 | - |
| 16.1125 | 6300 | 0.0 | - |
| 16.2404 | 6350 | 0.0 | - |
| 16.3683 | 6400 | 0.0 | - |
| 16.4962 | 6450 | 0.0 | - |
| 16.6240 | 6500 | 0.0 | - |
| 16.7519 | 6550 | 0.0 | - |
| 16.8798 | 6600 | 0.0 | - |
| 17.0077 | 6650 | 0.0 | - |
| 17.1355 | 6700 | 0.0 | - |
| 17.2634 | 6750 | 0.0 | - |
| 17.3913 | 6800 | 0.0 | - |
| 17.5192 | 6850 | 0.0 | - |
| 17.6471 | 6900 | 0.0 | - |
| 17.7749 | 6950 | 0.0 | - |
| 17.9028 | 7000 | 0.0 | - |
| 18.0307 | 7050 | 0.0 | - |
| 18.1586 | 7100 | 0.0 | - |
| 18.2864 | 7150 | 0.0 | - |
| 18.4143 | 7200 | 0.0 | - |
| 18.5422 | 7250 | 0.0 | - |
| 18.6701 | 7300 | 0.0 | - |
| 18.7980 | 7350 | 0.0 | - |
| 18.9258 | 7400 | 0.0 | - |
| 19.0537 | 7450 | 0.0 | - |
| 19.1816 | 7500 | 0.0 | - |
| 19.3095 | 7550 | 0.0004 | - |
| 19.4373 | 7600 | 0.0028 | - |
| 19.5652 | 7650 | 0.0003 | - |
| 19.6931 | 7700 | 0.0002 | - |
| 19.8210 | 7750 | 0.0 | - |
| 19.9488 | 7800 | 0.0 | - |
| 20.0767 | 7850 | 0.0 | - |
| 20.2046 | 7900 | 0.0 | - |
| 20.3325 | 7950 | 0.0 | - |
| 20.4604 | 8000 | 0.0 | - |
| 20.5882 | 8050 | 0.0 | - |
| 20.7161 | 8100 | 0.0 | - |
| 20.8440 | 8150 | 0.0 | - |
| 20.9719 | 8200 | 0.0 | - |
| 21.0997 | 8250 | 0.0 | - |
| 21.2276 | 8300 | 0.0 | - |
| 21.3555 | 8350 | 0.0 | - |
| 21.4834 | 8400 | 0.0 | - |
| 21.6113 | 8450 | 0.0 | - |
| 21.7391 | 8500 | 0.0 | - |
| 21.8670 | 8550 | 0.0 | - |
| 21.9949 | 8600 | 0.0 | - |
| 22.1228 | 8650 | 0.0 | - |
| 22.2506 | 8700 | 0.0 | - |
| 22.3785 | 8750 | 0.0 | - |
| 22.5064 | 8800 | 0.0 | - |
| 22.6343 | 8850 | 0.0 | - |
| 22.7621 | 8900 | 0.0 | - |
| 22.8900 | 8950 | 0.0 | - |
| 23.0179 | 9000 | 0.0 | - |
| 23.1458 | 9050 | 0.0 | - |
| 23.2737 | 9100 | 0.0 | - |
| 23.4015 | 9150 | 0.0 | - |
| 23.5294 | 9200 | 0.0 | - |
| 23.6573 | 9250 | 0.0 | - |
| 23.7852 | 9300 | 0.0 | - |
| 23.9130 | 9350 | 0.0 | - |
| 24.0409 | 9400 | 0.0 | - |
| 24.1688 | 9450 | 0.0 | - |
| 24.2967 | 9500 | 0.0 | - |
| 24.4246 | 9550 | 0.0 | - |
| 24.5524 | 9600 | 0.0 | - |
| 24.6803 | 9650 | 0.0 | - |
| 24.8082 | 9700 | 0.0 | - |
| 24.9361 | 9750 | 0.0 | - |
| 25.0639 | 9800 | 0.0 | - |
| 25.1918 | 9850 | 0.0 | - |
| 25.3197 | 9900 | 0.0 | - |
| 25.4476 | 9950 | 0.0 | - |
| 25.5754 | 10000 | 0.0 | - |
| 25.7033 | 10050 | 0.0 | - |
| 25.8312 | 10100 | 0.0 | - |
| 25.9591 | 10150 | 0.0 | - |
| 26.0870 | 10200 | 0.0 | - |
| 26.2148 | 10250 | 0.0 | - |
| 26.3427 | 10300 | 0.0 | - |
| 26.4706 | 10350 | 0.0 | - |
| 26.5985 | 10400 | 0.0 | - |
| 26.7263 | 10450 | 0.0 | - |
| 26.8542 | 10500 | 0.0 | - |
| 26.9821 | 10550 | 0.0 | - |
| 27.1100 | 10600 | 0.0 | - |
| 27.2379 | 10650 | 0.0 | - |
| 27.3657 | 10700 | 0.0 | - |
| 27.4936 | 10750 | 0.0 | - |
| 27.6215 | 10800 | 0.0 | - |
| 27.7494 | 10850 | 0.0 | - |
| 27.8772 | 10900 | 0.0 | - |
| 28.0051 | 10950 | 0.0 | - |
| 28.1330 | 11000 | 0.0 | - |
| 28.2609 | 11050 | 0.0 | - |
| 28.3887 | 11100 | 0.0 | - |
| 28.5166 | 11150 | 0.0 | - |
| 28.6445 | 11200 | 0.0 | - |
| 28.7724 | 11250 | 0.0 | - |
| 28.9003 | 11300 | 0.0 | - |
| 29.0281 | 11350 | 0.0 | - |
| 29.1560 | 11400 | 0.0 | - |
| 29.2839 | 11450 | 0.0 | - |
| 29.4118 | 11500 | 0.0 | - |
| 29.5396 | 11550 | 0.0 | - |
| 29.6675 | 11600 | 0.0 | - |
| 29.7954 | 11650 | 0.0 | - |
| 29.9233 | 11700 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "klue/roberta-base", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "[시세이도] NEW 싱크로 스킨 래디언트 리프팅 파운데이션 SPF30/PA++++ 30ml 130 오팔 (#M)홈>메이크업>베이스메이크업 HMALL > 뷰티 > 메이크업 > 베이스메이크업"}, {"text": "어뮤즈 메타 픽싱 비건 쿠션 리필 (3종 택 1) 02 누드 (#M)홈>화장품/미용>베이스메이크업>파운데이션>쿠션형 Naverstore > 화장품/미용 > 베이스메이크업 > 파운데이션 > 쿠션형"}, {"text": "에스쁘아 프로 테일러 파운데이션 비 글로우 30ml MinSellAmount (#M)화장품/향수>베이스메이크업>파운데이션 Gmarket > 뷰티 > 화장품/향수 > 베이스메이크업 > 파운데이션"}, {"text": "(현대백화점) 톰 포드 뷰티 셰이드 앤 일루미네이트 소프트 래디언스 파운데이션 SPF50/PA++++ 0.4 로즈 (#M)화장품/향수>베이스메이크업>파운데이션 Gmarket > 뷰티 > 화장품/향수 > 베이스메이크업 > 파운데이션"}, {"text": "[정샘물] 마스터클래스 래디언트 쿠션(리필포함)(+코렉팅 베이스5mlx3개)(강남점) N1아이보리 (#M)11st>메이크업>페이스메이크업>파운데이션 11st > 뷰티 > 메이크업 > 페이스메이크업 > 파운데이션"}], "inference": true, "model-index": [{"name": "SetFit with klue/roberta-base", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9475307038057129, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,868 |
csocsci/mt5-base-multi-label-cs-iiib-02c
|
csocsci
|
text2text-generation
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"cs",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-22T13:29:45Z |
2023-09-23T13:40:51+00:00
| 10 | 0 |
---
language:
- cs
license: mit
---
# Model Card for mt5-base-multi-label-cs-iiib-02c
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned for multi-label text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents in Czech.
## Model Description
The model was fine-tuned on a dataset of Czech Instant Messenger dialogs of Adolescents. The classification is multi-label and the model outputs any combination of the tags:'NO TAG', 'Informační podpora', 'Emocionální podpora', 'Začlenění do skupiny', 'Uznání', 'Nabídka pomoci': as a string joined with ', ' (ordered alphabetically). Each label indicates the presence of that category of Supportive Interactions: 'no tag', 'informational support', 'emocional support', 'social companionship', 'appraisal', 'instrumental support'. The inputs of the model are: a target utterance and its bi-directional context; the label of the example is determined by the label of the target utterance.
- **Developed by:** Anonymous
- **Language(s):** multilingual
- **Finetuned from:** mt5-base
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/chi2024submission
- **Paper:** Stay tuned!
## Usage
Here is how to use this model to classify a context-window of a dialogue:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
test_texts = ['Utterance2']
test_text_pairs = ['Utterance1;Utterance2;Utterance3']
checkpoint_path = "chi2024/mt5-base-multi-label-cs-iiib-02c"
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\
.to("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)
def verbalize_input(text: str, text_pair: str) -> str:
return "Utterance: %s\nContext: %s" % (text, text_pair)
def predict_one(text, pair):
input_pair = verbalize_input(text, pair)
inputs = tokenizer(input_pair, return_tensors="pt", padding=True,
truncation=True, max_length=256).to(model.device)
outputs = model.generate(**inputs)
decoded = [text.split(",")[0].strip() for text in
tokenizer.batch_decode(outputs, skip_special_tokens=True)]
return decoded
dec = predict_one(test_texts[0], test_text_pairs[0])
print(dec)
```
| null |
Non_BioNLP
|
# Model Card for mt5-base-multi-label-cs-iiib-02c
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned for multi-label text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents in Czech.
## Model Description
The model was fine-tuned on a dataset of Czech Instant Messenger dialogs of Adolescents. The classification is multi-label and the model outputs any combination of the tags:'NO TAG', 'Informační podpora', 'Emocionální podpora', 'Začlenění do skupiny', 'Uznání', 'Nabídka pomoci': as a string joined with ', ' (ordered alphabetically). Each label indicates the presence of that category of Supportive Interactions: 'no tag', 'informational support', 'emocional support', 'social companionship', 'appraisal', 'instrumental support'. The inputs of the model are: a target utterance and its bi-directional context; the label of the example is determined by the label of the target utterance.
- **Developed by:** Anonymous
- **Language(s):** multilingual
- **Finetuned from:** mt5-base
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/chi2024submission
- **Paper:** Stay tuned!
## Usage
Here is how to use this model to classify a context-window of a dialogue:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
test_texts = ['Utterance2']
test_text_pairs = ['Utterance1;Utterance2;Utterance3']
checkpoint_path = "chi2024/mt5-base-multi-label-cs-iiib-02c"
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\
.to("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)
def verbalize_input(text: str, text_pair: str) -> str:
return "Utterance: %s\nContext: %s" % (text, text_pair)
def predict_one(text, pair):
input_pair = verbalize_input(text, pair)
inputs = tokenizer(input_pair, return_tensors="pt", padding=True,
truncation=True, max_length=256).to(model.device)
outputs = model.generate(**inputs)
decoded = [text.split(",")[0].strip() for text in
tokenizer.batch_decode(outputs, skip_special_tokens=True)]
return decoded
dec = predict_one(test_texts[0], test_text_pairs[0])
print(dec)
```
|
{"language": ["cs"], "license": "mit"}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 46,869 |
heegyu/TinyLlama-augesc-context-strategy
|
heegyu
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:thu-coai/augesc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-03-01T16:19:26Z |
2024-03-07T13:19:42+00:00
| 8 | 0 |
---
datasets:
- thu-coai/augesc
library_name: transformers
---
Test set performance
- Top 1 Accuracy: 0.4346
- Top 3 Accuracy: 0.7677
- Top 1 Macro F1: 0.2668
- Top 3 Macro F1: 0.5669
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
device="cuda:0"
model = "heegyu/TinyLlama-augesc-context-strategy"
tokenizer = AutoTokenizer.from_pretrained(model)
model = AutoModelForSequenceClassification.from_pretrained(model).eval().to(device)
example = """usr: Hi
sys[Question]: Hello, how are you today?
usr: I was scolded by my parents yesterday"""
inputs = tokenizer(example, return_tensors="pt").to(device)
logits = model(**inputs).logits.softmax(-1)
print(logits)
label = logits.argmax(-1).item()
ESCONV_STRATEGY = [
"Question",
"Restatement or Paraphrasing",
"Reflection of feelings",
"Self-disclosure",
"Affirmation and Reassurance",
"Providing Suggestions",
"Information",
"Others"
]
id2label = {i:k for i, k in enumerate(ESCONV_STRATEGY)}
print(id2label[label])
```
| null |
Non_BioNLP
|
Test set performance
- Top 1 Accuracy: 0.4346
- Top 3 Accuracy: 0.7677
- Top 1 Macro F1: 0.2668
- Top 3 Macro F1: 0.5669
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
device="cuda:0"
model = "heegyu/TinyLlama-augesc-context-strategy"
tokenizer = AutoTokenizer.from_pretrained(model)
model = AutoModelForSequenceClassification.from_pretrained(model).eval().to(device)
example = """usr: Hi
sys[Question]: Hello, how are you today?
usr: I was scolded by my parents yesterday"""
inputs = tokenizer(example, return_tensors="pt").to(device)
logits = model(**inputs).logits.softmax(-1)
print(logits)
label = logits.argmax(-1).item()
ESCONV_STRATEGY = [
"Question",
"Restatement or Paraphrasing",
"Reflection of feelings",
"Self-disclosure",
"Affirmation and Reassurance",
"Providing Suggestions",
"Information",
"Others"
]
id2label = {i:k for i, k in enumerate(ESCONV_STRATEGY)}
print(id2label[label])
```
|
{"datasets": ["thu-coai/augesc"], "library_name": "transformers"}
|
task
|
[
"PARAPHRASING"
] | 46,870 |
Bahasalab/BahasaGpt-chat
|
Bahasalab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"license:cc-by-nc-3.0",
"endpoints_compatible",
"region:us"
] | 2023-04-09T13:44:42Z |
2023-04-11T07:23:12+00:00
| 18 | 2 |
---
license: cc-by-nc-3.0
---
# BahasaGPT-Chat
## Introduction
This document provides an overview of the BahasaGPT-Chat model, which is a fine-tuned model for a specific task in the Indonesian language. The model is based on the Bloomz-7B-mt architecture and is fine-tuned using a dataset of over 120000 Chat instructions based.
## Model Details
**Model Name:** BahasaGPT-Chat
**Model Source:** Bloomz-7B-mt
**Dataset for Fine-Tuning:** Over 120k Indonesia Instruct Dataset generated using the Alpaca method from the following sources:
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Baize-Chatbot] (https://github.com/project-baize/baize-chatbot)
- Translated instructions from OA ([Anh/data at main · LAION-AI/Anh](https://github.com/LAION-AI/Anh))
## Fine-Tuning Process
The BahasaGPT-1 model was fine-tuned using a dataset of over 120k Indonesian instructions, which were generated using [Baize-Chatbot] (https://github.com/project-baize/baize-chatbot) method with addition alpaca and OA Translation dataset. This combination of datasets allowed the model to be better adapted to the specific needs of Indonesian language tasks.
The fine-tuning process involved adjusting the model's weights and biases based on the input dataset. This was done iteratively to optimize the model's performance for the specific task in the Indonesian language.
## Known Limitations
Despite the successful fine-tuning, the BahasaGPT-1 model still has some limitations:
**Hallucination:** The model sometimes generates outputs that may seem plausible but are not based on the input data. This may lead to incorrect or nonsensical responses in some cases.
**Bias:** The BahasaGPT-1 model, like other AI language models, can exhibit various forms of bias due to the data it was trained on. This includes, but is not limited to, gender, racial, and cultural biases. As a result, the model may generate outputs that perpetuate stereotypes, exhibit unfair treatment, or show preference for specific groups or perspectives. Efforts have been made to mitigate these biases, but they may still be present in the model's responses.
## Conclusion
The BahasaGPT-1 model is a fine-tuned language model for Indonesian language tasks, based on the Bloomz-7B-mt architecture. The model was trained on a dataset of over 120k Indonesian instructions generated using using [Baize-Chatbot] (https://github.com/project-baize/baize-chatbot) method with addition alpaca and OA Translation dataset. Despite some limitations, such as occasional hallucination, the model provides a valuable tool for working with Indonesian language tasks.
## How to Run
For Gradio Demo : [Gradio Code](https://github.com/acul3/Bahasa_Chat)
For Colab Using (Int8) : [Colab](https://colab.research.google.com/drive/1yvhJENcd0NKuMZNipAJVP4eP-k7-ilXj?usp=sharing)
| null |
Non_BioNLP
|
# BahasaGPT-Chat
## Introduction
This document provides an overview of the BahasaGPT-Chat model, which is a fine-tuned model for a specific task in the Indonesian language. The model is based on the Bloomz-7B-mt architecture and is fine-tuned using a dataset of over 120000 Chat instructions based.
## Model Details
**Model Name:** BahasaGPT-Chat
**Model Source:** Bloomz-7B-mt
**Dataset for Fine-Tuning:** Over 120k Indonesia Instruct Dataset generated using the Alpaca method from the following sources:
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Baize-Chatbot] (https://github.com/project-baize/baize-chatbot)
- Translated instructions from OA ([Anh/data at main · LAION-AI/Anh](https://github.com/LAION-AI/Anh))
## Fine-Tuning Process
The BahasaGPT-1 model was fine-tuned using a dataset of over 120k Indonesian instructions, which were generated using [Baize-Chatbot] (https://github.com/project-baize/baize-chatbot) method with addition alpaca and OA Translation dataset. This combination of datasets allowed the model to be better adapted to the specific needs of Indonesian language tasks.
The fine-tuning process involved adjusting the model's weights and biases based on the input dataset. This was done iteratively to optimize the model's performance for the specific task in the Indonesian language.
## Known Limitations
Despite the successful fine-tuning, the BahasaGPT-1 model still has some limitations:
**Hallucination:** The model sometimes generates outputs that may seem plausible but are not based on the input data. This may lead to incorrect or nonsensical responses in some cases.
**Bias:** The BahasaGPT-1 model, like other AI language models, can exhibit various forms of bias due to the data it was trained on. This includes, but is not limited to, gender, racial, and cultural biases. As a result, the model may generate outputs that perpetuate stereotypes, exhibit unfair treatment, or show preference for specific groups or perspectives. Efforts have been made to mitigate these biases, but they may still be present in the model's responses.
## Conclusion
The BahasaGPT-1 model is a fine-tuned language model for Indonesian language tasks, based on the Bloomz-7B-mt architecture. The model was trained on a dataset of over 120k Indonesian instructions generated using using [Baize-Chatbot] (https://github.com/project-baize/baize-chatbot) method with addition alpaca and OA Translation dataset. Despite some limitations, such as occasional hallucination, the model provides a valuable tool for working with Indonesian language tasks.
## How to Run
For Gradio Demo : [Gradio Code](https://github.com/acul3/Bahasa_Chat)
For Colab Using (Int8) : [Colab](https://colab.research.google.com/drive/1yvhJENcd0NKuMZNipAJVP4eP-k7-ilXj?usp=sharing)
|
{"license": "cc-by-nc-3.0"}
|
task
|
[
"TRANSLATION"
] | 46,871 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.