id
stringlengths
6
113
author
stringlengths
2
36
task_category
stringclasses
42 values
tags
listlengths
1
4.05k
created_time
timestamp[ns, tz=UTC]date
2022-03-02 23:29:04
2025-04-10 08:38:38
last_modified
stringdate
2020-05-14 13:13:12
2025-04-19 04:15:39
downloads
int64
0
118M
likes
int64
0
4.86k
README
stringlengths
30
1.01M
matched_bigbio_names
listlengths
1
8
is_bionlp
stringclasses
3 values
model_cards
stringlengths
0
1M
metadata
stringlengths
2
698k
source
stringclasses
2 values
matched_task
listlengths
1
10
__index_level_0__
int64
0
46.9k
TheBloke/minotaur-mpt-7B-GGML
TheBloke
null
[ "transformers", "mpt", "arxiv:2108.12409", "arxiv:2302.13971", "arxiv:2205.14135", "arxiv:2010.04245", "arxiv:1909.08053", "arxiv:2302.06675", "license:other", "region:us" ]
2023-06-08T19:22:22Z
2023-06-08T21:18:35+00:00
9
3
--- license: other inference: false --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # OpenAccess AI Collective's Minotaur MPT 7B GGML These files are GGML format model files for [OpenAccess AI Collective's Minotaur MPT 7B](https://huggingface.co/openaccess-ai-collective/minotaur-mpt-7b). Please note that these MPT GGMLs are **not compatbile with llama.cpp**. Please see below for a list of tools known to work with these model files. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/none) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/minotaur-mpt-7B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/minotaur-mpt-7b) ## Prompt template Please see the "Original Model Card" below for more details about how to prompt this model. ``` USER: prompt goes here ASSISTANT: ``` <!-- compatibility_ggml start --> ## Compatibilty These files are **not** compatible with llama.cpp. Currently they can be used with: * KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp) * The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers) * The GPT4All-UI which uses ctransformers: [GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui) * [rustformers' llm](https://github.com/rustformers/llm) * The example `mpt` binary provided with [ggml](https://github.com/ggerganov/ggml) As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!) ## Tutorial for using GPT4All-UI * [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888) * [Video tutorial, by GPT4All-UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI) <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | minotaur-mpt-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.74 GB | 6.24 GB | Original llama.cpp quant method, 4-bit. | | minotaur-mpt-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.16 GB | 6.66 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | minotaur-mpt-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.57 GB | 7.07 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | minotaur-mpt-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 4.99 GB | 7.49 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | minotaur-mpt-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.07 GB | 9.57 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: OpenAccess AI Collective's Minotaur MPT 7B [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) **[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!** # Minotaur MPT 7B Minotaur 7B is an instruct fine-tuned model on top of MPT-7B. Minotaur 7B is fine-tuned **on only completely open datasets** making this model reproducible by anyone. Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected]) # Prompts Chat only style prompts using `USER:`,`ASSISTANT:`. # Training Datasets Minotaur 7B model is fine-tuned on the following datasets: - [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered) - [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT) - [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct) - [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses - [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization - [camel-ai/math](https://huggingface.co/datasets/camel-ai/math) - [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics) - [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry) - [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology) - [winglian/evals](https://huggingface.co/datasets/winglian/evals) - custom sysnthetic datasets around misconceptions, in-context qa, jokes, N-tasks problems, and context-insensitivity - ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split - [hellaswag](https://huggingface.co/datasets/hellaswag) - 30K+ rows of instruct augmented for detailed explanations w 30K+ rows, derived from the `train` split - [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented - [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented # Shoutouts Special thanks to Nanobit for helping with Axolotl and TheBloke for quantizing these models are more accessible to all. # Demo HF Demo in Spaces coming soon. ## Release Notes - https://wandb.ai/wing-lian/mpt-7b-4k-minotaur/runs/i4zib0j4 ## Build Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 7xA100 80GB - 3 epochs taking approximately 6 hours ## Bias, Risks, and Limitations Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Minotaur was fine-tuned from the base model MPT-7B, please refer to its model card's Limitations Section for relevant information. (included below) ## Examples - results may vary based on temperature and other settings ``` USER: your have 3 apples. you eat 2 pears. how many apples do you have left? ASSISTANT:To find the quantity of apples, we can subtract the number of pears from the number of apples. In this case, there are 3 apples and 2 pears. This means there are 3 apples left. ``` wut? ``` USER: your have 3 apples. you eat 2 pears. how many apples do you have left? ASSISTANT:You don't have any apples left. You had 3 and after eating 2 pears, it is 2. You ate the other apple. ``` 🤣 ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: your have 3 apples. you eat 2 pears. how many apples do you have left? Use multi-step reasoning. ASSISTANT: I understand. You have three apples, two pears, and you ate two of the pears. You have three apples left. ``` 👌 looks like you need the system prompt ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Solve for x: 5x = 3x + 10 ASSISTANT: To find the value of x, you can solve this equation by first adding both sides of the equation, as follows: 5x = 3x + 10 Now, simplify the equation to get: x = 2.5 ``` yup, it still sucks at math ``` USER: tell me a joke. then explain why it's funny. ASSISTANT: What did the blind man say to the man eating steak? He said, "Is that a cow or a sandwich?" ``` ``` USER: write a sonnet about doves ASSISTANT:Doves are a beautiful bird, and you can write a sonnet about them to express your love for them. This sonnet can include imagery of doves, including the sound of their wings as they fly, the softness of their feathers, and the peacefulness they bring to their surroundings. The structure of this sonnet can be similar to the Shakespearean sonnets, with 14 lines divided into three groups of rhyming couplets. In the first stanza, you can express your love for doves as they are a symbol of peace and tranquility. The second stanza can describe the beauty of the dove's plumage and the softness of their feathers. The third stanza can be about the peacefulness that they bring to their surroundings, such as in a garden or park. You can end the sonnet with a final line that encapsulates your feelings about doves. Here is the sonnet for your reference: > Doves are a symbol of peace and tranquility, > A gentle song that soothes the soul. > Their soft feathers and sweet voices bring a sense of warmth and calm to their surroundings, > And they are a welcome sight in any landscape. > > Their grace and beauty are a reminder of the simple pleasures in life, > And their peaceful presence is a welcome respite from the stresses of everyday life. > > Doves are a symbol of love and friendship, > And their companionship is a gift we can all cherish. > > Let us embrace the beauty and serenity of doves, > And let us cherish the gift of their presence in our lives. > > May we always find beauty and serenity in the simple pleasures of life, > And may we cherish the gift of friendship and love. > > Doves are a symbol of peace and tranquility, > A gentle song that soothes the soul. > Their soft feathers and sweet voices bring a sense of warmth and calm to their surroundings, > And they are a welcome sight in any landscape. > > Their grace and beauty are a reminder of the simple pleasures in life, > And their peaceful presence is a welcome respite from the stresses of everyday life. > > Doves are a symbol of love and friendship, > And their companionship is a gift we can all cherish. > > Let us embrace the beauty and serenity of doves, > And let us cherish the gift of their presence in our lives. > > May we always find beauty and serenity in the simple pleasures of life, > And may we cherish the gift of friendship and love. ``` # MPT-7B MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by [MosaicML](https://www.mosaicml.com). MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)). Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer). This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference. ### How is this model different? MPT-7B is * **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)). * **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)). * **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models). * **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)) * **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry) ### Models finetuned off MPT-7B: The following models are finetuned on MPT-7B: * [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths. Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3). At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens. We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](www.mosaicml.com/blog/mpt-7b). * License: Apache 2.0 * [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following. Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. * License: _CC-By-SA-3.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct) * [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation. Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets. * License: _CC-By-NC-SA-4.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat) ## Model Date May 5, 2023 ## Model License Apache-2.0 ## Documentation * [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! ## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision: ```python import torch import transformers name = 'mosaicml/mpt-7b' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' config.init_device = 'cuda:0' # For fast initialization directly on GPU! model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True ) ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python import transformers name = 'mosaicml/mpt-7b' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096 model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b') ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 6.7B | |n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 50432 | | sequence length | 2048 | ## Training Data ### Streaming Datasets Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training. StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset. ### Data Mix The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix: | Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs | |-------------|----------------------------|------------|----------------------------|--------| | mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 | | C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 | | RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 | | The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 | | RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 | | The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 | | S2ORC | 48.85 B | 0.033 | 33 B | 0.68 | | RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 | | RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 | | RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 | Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length. The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code: (1) It was trained on a diverse mix of data that includes code (The Pile) (2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces (3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters. The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), model flop utilization (MFU) increased by up to four percentage points. ### Training Configuration This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-7B (Base) is **not** intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent. MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please cosult an attorney before using this model for commercial purposes. ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-7B: A New Standard for Open-Source, ly Usable LLMs}, year = {2023}, url = {www.mosaicml.com/blog/mpt-7b}, note = {Accessed: 2023-03-28}, % change this date urldate = {2023-03-28} % change this date } ```
null
Non_BioNLP
<!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # OpenAccess AI Collective's Minotaur MPT 7B GGML These files are GGML format model files for [OpenAccess AI Collective's Minotaur MPT 7B](https://huggingface.co/openaccess-ai-collective/minotaur-mpt-7b). Please note that these MPT GGMLs are **not compatbile with llama.cpp**. Please see below for a list of tools known to work with these model files. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/none) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/minotaur-mpt-7B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/minotaur-mpt-7b) ## Prompt template Please see the "Original Model Card" below for more details about how to prompt this model. ``` USER: prompt goes here ASSISTANT: ``` <!-- compatibility_ggml start --> ## Compatibilty These files are **not** compatible with llama.cpp. Currently they can be used with: * KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp) * The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers) * The GPT4All-UI which uses ctransformers: [GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui) * [rustformers' llm](https://github.com/rustformers/llm) * The example `mpt` binary provided with [ggml](https://github.com/ggerganov/ggml) As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!) ## Tutorial for using GPT4All-UI * [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888) * [Video tutorial, by GPT4All-UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI) <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | minotaur-mpt-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.74 GB | 6.24 GB | Original llama.cpp quant method, 4-bit. | | minotaur-mpt-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.16 GB | 6.66 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | minotaur-mpt-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.57 GB | 7.07 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | minotaur-mpt-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 4.99 GB | 7.49 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | minotaur-mpt-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.07 GB | 9.57 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: OpenAccess AI Collective's Minotaur MPT 7B [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) **[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!** # Minotaur MPT 7B Minotaur 7B is an instruct fine-tuned model on top of MPT-7B. Minotaur 7B is fine-tuned **on only completely open datasets** making this model reproducible by anyone. Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected]) # Prompts Chat only style prompts using `USER:`,`ASSISTANT:`. # Training Datasets Minotaur 7B model is fine-tuned on the following datasets: - [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered) - [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT) - [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct) - [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses - [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization - [camel-ai/math](https://huggingface.co/datasets/camel-ai/math) - [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics) - [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry) - [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology) - [winglian/evals](https://huggingface.co/datasets/winglian/evals) - custom sysnthetic datasets around misconceptions, in-context qa, jokes, N-tasks problems, and context-insensitivity - ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split - [hellaswag](https://huggingface.co/datasets/hellaswag) - 30K+ rows of instruct augmented for detailed explanations w 30K+ rows, derived from the `train` split - [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented - [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented # Shoutouts Special thanks to Nanobit for helping with Axolotl and TheBloke for quantizing these models are more accessible to all. # Demo HF Demo in Spaces coming soon. ## Release Notes - https://wandb.ai/wing-lian/mpt-7b-4k-minotaur/runs/i4zib0j4 ## Build Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 7xA100 80GB - 3 epochs taking approximately 6 hours ## Bias, Risks, and Limitations Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Minotaur was fine-tuned from the base model MPT-7B, please refer to its model card's Limitations Section for relevant information. (included below) ## Examples - results may vary based on temperature and other settings ``` USER: your have 3 apples. you eat 2 pears. how many apples do you have left? ASSISTANT:To find the quantity of apples, we can subtract the number of pears from the number of apples. In this case, there are 3 apples and 2 pears. This means there are 3 apples left. ``` wut? ``` USER: your have 3 apples. you eat 2 pears. how many apples do you have left? ASSISTANT:You don't have any apples left. You had 3 and after eating 2 pears, it is 2. You ate the other apple. ``` 🤣 ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: your have 3 apples. you eat 2 pears. how many apples do you have left? Use multi-step reasoning. ASSISTANT: I understand. You have three apples, two pears, and you ate two of the pears. You have three apples left. ``` 👌 looks like you need the system prompt ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Solve for x: 5x = 3x + 10 ASSISTANT: To find the value of x, you can solve this equation by first adding both sides of the equation, as follows: 5x = 3x + 10 Now, simplify the equation to get: x = 2.5 ``` yup, it still sucks at math ``` USER: tell me a joke. then explain why it's funny. ASSISTANT: What did the blind man say to the man eating steak? He said, "Is that a cow or a sandwich?" ``` ``` USER: write a sonnet about doves ASSISTANT:Doves are a beautiful bird, and you can write a sonnet about them to express your love for them. This sonnet can include imagery of doves, including the sound of their wings as they fly, the softness of their feathers, and the peacefulness they bring to their surroundings. The structure of this sonnet can be similar to the Shakespearean sonnets, with 14 lines divided into three groups of rhyming couplets. In the first stanza, you can express your love for doves as they are a symbol of peace and tranquility. The second stanza can describe the beauty of the dove's plumage and the softness of their feathers. The third stanza can be about the peacefulness that they bring to their surroundings, such as in a garden or park. You can end the sonnet with a final line that encapsulates your feelings about doves. Here is the sonnet for your reference: > Doves are a symbol of peace and tranquility, > A gentle song that soothes the soul. > Their soft feathers and sweet voices bring a sense of warmth and calm to their surroundings, > And they are a welcome sight in any landscape. > > Their grace and beauty are a reminder of the simple pleasures in life, > And their peaceful presence is a welcome respite from the stresses of everyday life. > > Doves are a symbol of love and friendship, > And their companionship is a gift we can all cherish. > > Let us embrace the beauty and serenity of doves, > And let us cherish the gift of their presence in our lives. > > May we always find beauty and serenity in the simple pleasures of life, > And may we cherish the gift of friendship and love. > > Doves are a symbol of peace and tranquility, > A gentle song that soothes the soul. > Their soft feathers and sweet voices bring a sense of warmth and calm to their surroundings, > And they are a welcome sight in any landscape. > > Their grace and beauty are a reminder of the simple pleasures in life, > And their peaceful presence is a welcome respite from the stresses of everyday life. > > Doves are a symbol of love and friendship, > And their companionship is a gift we can all cherish. > > Let us embrace the beauty and serenity of doves, > And let us cherish the gift of their presence in our lives. > > May we always find beauty and serenity in the simple pleasures of life, > And may we cherish the gift of friendship and love. ``` # MPT-7B MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by [MosaicML](https://www.mosaicml.com). MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)). Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer). This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference. ### How is this model different? MPT-7B is * **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)). * **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)). * **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models). * **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)) * **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry) ### Models finetuned off MPT-7B: The following models are finetuned on MPT-7B: * [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths. Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3). At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens. We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](www.mosaicml.com/blog/mpt-7b). * License: Apache 2.0 * [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following. Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. * License: _CC-By-SA-3.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct) * [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation. Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets. * License: _CC-By-NC-SA-4.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat) ## Model Date May 5, 2023 ## Model License Apache-2.0 ## Documentation * [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! ## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision: ```python import torch import transformers name = 'mosaicml/mpt-7b' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' config.init_device = 'cuda:0' # For fast initialization directly on GPU! model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True ) ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python import transformers name = 'mosaicml/mpt-7b' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096 model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b') ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 6.7B | |n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 50432 | | sequence length | 2048 | ## Training Data ### Streaming Datasets Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training. StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset. ### Data Mix The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix: | Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs | |-------------|----------------------------|------------|----------------------------|--------| | mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 | | C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 | | RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 | | The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 | | RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 | | The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 | | S2ORC | 48.85 B | 0.033 | 33 B | 0.68 | | RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 | | RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 | | RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 | Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length. The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code: (1) It was trained on a diverse mix of data that includes code (The Pile) (2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces (3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters. The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), model flop utilization (MFU) increased by up to four percentage points. ### Training Configuration This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-7B (Base) is **not** intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent. MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please cosult an attorney before using this model for commercial purposes. ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-7B: A New Standard for Open-Source, ly Usable LLMs}, year = {2023}, url = {www.mosaicml.com/blog/mpt-7b}, note = {Accessed: 2023-03-28}, % change this date urldate = {2023-03-28} % change this date } ```
{"license": "other", "inference": false}
task
[ "SUMMARIZATION" ]
42,958
dat-ai/bge-base-for_text2sql
dat-ai
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:56355", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-11-13T08:53:38Z
2024-11-13T08:53:54+00:00
6
0
--- base_model: BAAI/bge-base-en-v1.5 language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:56355 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: "\n Given the Column informations, generate an SQL query for\ \ the following question:\n Column: Finishing position | Points awarded (Platinum)\ \ | Points awarded (Gold) | Points awarded (Silver) | Points awarded (Satellite)\n\ \ Question: How many platinum points were awarded when 6 gold points were awarded?\n\ \ SQL Query: SELECT MAX Points awarded (Platinum) FROM table WHERE Points awarded\ \ (Gold) = 6\n " sentences: - How many platinum points were awarded when 6 gold points were awarded? - Did any team score games that totaled up to 860.5? - Who had the pole position at the German Grand Prix? - source_sentence: "\n Given the Column informations, generate an SQL query for\ \ the following question:\n Column: Player | No. | Nationality | Position | Years\ \ in Toronto | School/Club Team\n Question: What's Dell Curry nationality?\n\ \ SQL Query: SELECT Nationality FROM table WHERE Player = Dell Curry\n " sentences: - What is the title when original air date is may15,2008? - What's Dell Curry nationality? - What's the minimum total attendance of the Premier League association football? - source_sentence: "\n Given the Column informations, generate an SQL query for\ \ the following question:\n Column: Sepal length | Sepal width | Petal length\ \ | Petal width | Species\n Question: Name the species when petal width is 2.0\ \ and petal length is 4.9\n SQL Query: SELECT Species FROM table WHERE Petal\ \ width = 2.0 AND Petal length = 4.9\n " sentences: - What year was the championship in Wimbledon (2)? - Who wrote Series 38? - Name the species when petal width is 2.0 and petal length is 4.9 - source_sentence: "\n Given the Column informations, generate an SQL query for\ \ the following question:\n Column: No. in season | No. in series | Title | Directed\ \ by | Written by | Original air date | U.S. viewers (million)\n Question: How\ \ many millions of U.S. viewers watched the episode that first aired on March\ \ 31, 2013?\n SQL Query: SELECT U.S. viewers (million) FROM table WHERE Original\ \ air date = March 31, 2013\n " sentences: - How many millions of U.S. viewers watched the episode that first aired on March 31, 2013? - How many viewers were there for the premier with 34 - What is Bruce Cerone overall? - source_sentence: "\n Given the Column informations, generate an SQL query for\ \ the following question:\n Column: Nomination | Actors Name | Film Name | Director\ \ | Country\n Question: What was the film Falling up nominated for?\n SQL Query:\ \ SELECT Nomination FROM table WHERE Film Name = Falling Up\n " sentences: - What was the film Falling up nominated for? - Who wrote an episode watched by 19.01 million US viewers? - What player is on the Montreal Alouettes CFl team? model-index: - name: BGE base SQL Matryoshka results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.4676281647562665 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.4697065121551833 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.4697065121551833 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.4697065121551833 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.4676281647562665 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.15656883738506108 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.09394130243103667 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.046970651215518334 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.4676281647562665 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.4697065121551833 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.4697065121551833 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.4697065121551833 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.46889822604232273 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.4686148549355503 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.4686406337350657 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.46775412520468573 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.4697065121551833 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.4697065121551833 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.4697065121551833 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.46775412520468573 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.15656883738506108 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.09394130243103667 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.046970651215518334 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.46775412520468573 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.4697065121551833 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.4697065121551833 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.4697065121551833 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4689612062665323 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.46869882856782963 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.4687237988187482 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.46750220430784734 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.4697065121551833 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.4697065121551833 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.46976949237939286 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.46750220430784734 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.15656883738506108 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.09394130243103667 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.04697694923793929 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.46750220430784734 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.4697065121551833 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.4697065121551833 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.46976949237939286 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4688906637675648 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.4685833648234455 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.468602927990512 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.46769114498047615 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.4696435319309737 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.46976949237939286 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.46976949237939286 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.46769114498047615 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.1565478439769912 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.09395389847587858 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.04697694923793929 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.46769114498047615 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.4696435319309737 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.46976949237939286 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.46976949237939286 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4689469541953942 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.468661040433304 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.4686773555936371 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.46775412520468573 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.4696435319309737 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.4696435319309737 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.4697065121551833 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.46775412520468573 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.1565478439769912 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.09392870638619474 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.046970651215518334 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.46775412520468573 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.4696435319309737 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.4696435319309737 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.4697065121551833 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4689578301883334 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.468696204391821 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.46870770760703784 name: Cosine Map@100 --- # BGE base SQL Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("dat-ai/bge-base-for_text2sql") # Run inference sentences = [ '\n Given the Column informations, generate an SQL query for the following question:\n Column: Nomination | Actors Name | Film Name | Director | Country\n Question: What was the film Falling up nominated for?\n SQL Query: SELECT Nomination FROM table WHERE Film Name = Falling Up\n ', 'What was the film Falling up nominated for?', 'Who wrote an episode watched by 19.01 million US viewers?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 | |:--------------------|:-----------|:----------|:-----------|:-----------|:----------| | cosine_accuracy@1 | 0.4676 | 0.4678 | 0.4675 | 0.4677 | 0.4678 | | cosine_accuracy@3 | 0.4697 | 0.4697 | 0.4697 | 0.4696 | 0.4696 | | cosine_accuracy@5 | 0.4697 | 0.4697 | 0.4697 | 0.4698 | 0.4696 | | cosine_accuracy@10 | 0.4697 | 0.4697 | 0.4698 | 0.4698 | 0.4697 | | cosine_precision@1 | 0.4676 | 0.4678 | 0.4675 | 0.4677 | 0.4678 | | cosine_precision@3 | 0.1566 | 0.1566 | 0.1566 | 0.1565 | 0.1565 | | cosine_precision@5 | 0.0939 | 0.0939 | 0.0939 | 0.094 | 0.0939 | | cosine_precision@10 | 0.047 | 0.047 | 0.047 | 0.047 | 0.047 | | cosine_recall@1 | 0.4676 | 0.4678 | 0.4675 | 0.4677 | 0.4678 | | cosine_recall@3 | 0.4697 | 0.4697 | 0.4697 | 0.4696 | 0.4696 | | cosine_recall@5 | 0.4697 | 0.4697 | 0.4697 | 0.4698 | 0.4696 | | cosine_recall@10 | 0.4697 | 0.4697 | 0.4698 | 0.4698 | 0.4697 | | **cosine_ndcg@10** | **0.4689** | **0.469** | **0.4689** | **0.4689** | **0.469** | | cosine_mrr@10 | 0.4686 | 0.4687 | 0.4686 | 0.4687 | 0.4687 | | cosine_map@100 | 0.4686 | 0.4687 | 0.4686 | 0.4687 | 0.4687 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 56,355 training samples * Columns: <code>context</code> and <code>question</code> * Approximate statistics based on the first 1000 samples: | | context | question | |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 45 tokens</li><li>mean: 72.61 tokens</li><li>max: 196 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.41 tokens</li><li>max: 36 tokens</li></ul> | * Samples: | context | question | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | <code><br> Given the Column informations, generate an SQL query for the following question:<br> Column: State/territory | Text/background colour | Format | Current slogan | Current series | Notes<br> Question: Tell me what the notes are for South Australia <br> SQL Query: SELECT Notes FROM table WHERE Current slogan = SOUTH AUSTRALIA<br> </code> | <code>Tell me what the notes are for South Australia </code> | | <code><br> Given the Column informations, generate an SQL query for the following question:<br> Column: State/territory | Text/background colour | Format | Current slogan | Current series | Notes<br> Question: What is the current series where the new series began in June 2011?<br> SQL Query: SELECT Current series FROM table WHERE Notes = New series began in June 2011<br> </code> | <code>What is the current series where the new series began in June 2011?</code> | | <code><br> Given the Column informations, generate an SQL query for the following question:<br> Column: State/territory | Text/background colour | Format | Current slogan | Current series | Notes<br> Question: What is the format for South Australia?<br> SQL Query: SELECT Format FROM table WHERE State/territory = South Australia<br> </code> | <code>What is the format for South Australia?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512 ], "matryoshka_weights": [ 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 16 - `gradient_accumulation_steps`: 8 - `learning_rate`: 2e-05 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `fp16`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 8 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 | |:----------:|:--------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:| | 0.0227 | 10 | 1.773 | - | - | - | - | - | | 0.0454 | 20 | 1.3231 | - | - | - | - | - | | 0.0681 | 30 | 0.713 | - | - | - | - | - | | 0.0908 | 40 | 0.286 | - | - | - | - | - | | 0.1135 | 50 | 0.1013 | - | - | - | - | - | | 0.1362 | 60 | 0.0635 | - | - | - | - | - | | 0.1590 | 70 | 0.0453 | - | - | - | - | - | | 0.1817 | 80 | 0.041 | - | - | - | - | - | | 0.2044 | 90 | 0.039 | - | - | - | - | - | | 0.2271 | 100 | 0.027 | - | - | - | - | - | | 0.2498 | 110 | 0.0193 | - | - | - | - | - | | 0.2725 | 120 | 0.0167 | - | - | - | - | - | | 0.2952 | 130 | 0.016 | - | - | - | - | - | | 0.3179 | 140 | 0.0197 | - | - | - | - | - | | 0.3406 | 150 | 0.0217 | - | - | - | - | - | | 0.3633 | 160 | 0.0162 | - | - | - | - | - | | 0.3860 | 170 | 0.012 | - | - | - | - | - | | 0.4087 | 180 | 0.013 | - | - | - | - | - | | 0.4315 | 190 | 0.0255 | - | - | - | - | - | | 0.4542 | 200 | 0.0229 | - | - | - | - | - | | 0.4769 | 210 | 0.0181 | - | - | - | - | - | | 0.4996 | 220 | 0.0195 | - | - | - | - | - | | 0.5223 | 230 | 0.0199 | - | - | - | - | - | | 0.5450 | 240 | 0.0144 | - | - | - | - | - | | 0.5677 | 250 | 0.0102 | - | - | - | - | - | | 0.5904 | 260 | 0.0101 | - | - | - | - | - | | 0.6131 | 270 | 0.0095 | - | - | - | - | - | | 0.6358 | 280 | 0.0173 | - | - | - | - | - | | 0.6585 | 290 | 0.01 | - | - | - | - | - | | 0.6812 | 300 | 0.0129 | - | - | - | - | - | | 0.7039 | 310 | 0.0177 | - | - | - | - | - | | 0.7267 | 320 | 0.0106 | - | - | - | - | - | | 0.7494 | 330 | 0.0146 | - | - | - | - | - | | 0.7721 | 340 | 0.0185 | - | - | - | - | - | | 0.7948 | 350 | 0.0203 | - | - | - | - | - | | 0.8175 | 360 | 0.0146 | - | - | - | - | - | | 0.8402 | 370 | 0.0072 | - | - | - | - | - | | 0.8629 | 380 | 0.0102 | - | - | - | - | - | | 0.8856 | 390 | 0.0075 | - | - | - | - | - | | 0.9083 | 400 | 0.0064 | - | - | - | - | - | | 0.9310 | 410 | 0.0163 | - | - | - | - | - | | 0.9537 | 420 | 0.0069 | - | - | - | - | - | | 0.9764 | 430 | 0.0072 | - | - | - | - | - | | 0.9991 | 440 | 0.0147 | 0.4688 | 0.4689 | 0.4688 | 0.4689 | 0.4689 | | 1.0219 | 450 | 0.0151 | - | - | - | - | - | | 1.0446 | 460 | 0.0135 | - | - | - | - | - | | 1.0673 | 470 | 0.0189 | - | - | - | - | - | | 1.0900 | 480 | 0.0121 | - | - | - | - | - | | 1.1127 | 490 | 0.0064 | - | - | - | - | - | | 1.1354 | 500 | 0.0111 | - | - | - | - | - | | 1.1581 | 510 | 0.0103 | - | - | - | - | - | | 1.1808 | 520 | 0.0144 | - | - | - | - | - | | 1.2035 | 530 | 0.0151 | - | - | - | - | - | | 1.2262 | 540 | 0.0062 | - | - | - | - | - | | 1.2489 | 550 | 0.0104 | - | - | - | - | - | | 1.2716 | 560 | 0.0046 | - | - | - | - | - | | 1.2944 | 570 | 0.0056 | - | - | - | - | - | | 1.3171 | 580 | 0.0073 | - | - | - | - | - | | 1.3398 | 590 | 0.007 | - | - | - | - | - | | 1.3625 | 600 | 0.0074 | - | - | - | - | - | | 1.3852 | 610 | 0.0057 | - | - | - | - | - | | 1.4079 | 620 | 0.0052 | - | - | - | - | - | | 1.4306 | 630 | 0.0114 | - | - | - | - | - | | 1.4533 | 640 | 0.0075 | - | - | - | - | - | | 1.4760 | 650 | 0.0116 | - | - | - | - | - | | 1.4987 | 660 | 0.0092 | - | - | - | - | - | | 1.5214 | 670 | 0.0137 | - | - | - | - | - | | 1.5441 | 680 | 0.0066 | - | - | - | - | - | | 1.5668 | 690 | 0.0042 | - | - | - | - | - | | 1.5896 | 700 | 0.0036 | - | - | - | - | - | | 1.6123 | 710 | 0.0039 | - | - | - | - | - | | 1.6350 | 720 | 0.0065 | - | - | - | - | - | | 1.6577 | 730 | 0.0051 | - | - | - | - | - | | 1.6804 | 740 | 0.0054 | - | - | - | - | - | | 1.7031 | 750 | 0.0086 | - | - | - | - | - | | 1.7258 | 760 | 0.0062 | - | - | - | - | - | | 1.7485 | 770 | 0.0071 | - | - | - | - | - | | 1.7712 | 780 | 0.0108 | - | - | - | - | - | | 1.7939 | 790 | 0.009 | - | - | - | - | - | | 1.8166 | 800 | 0.0075 | - | - | - | - | - | | 1.8393 | 810 | 0.0039 | - | - | - | - | - | | 1.8620 | 820 | 0.0047 | - | - | - | - | - | | 1.8848 | 830 | 0.0037 | - | - | - | - | - | | 1.9075 | 840 | 0.0037 | - | - | - | - | - | | 1.9302 | 850 | 0.0064 | - | - | - | - | - | | 1.9529 | 860 | 0.0047 | - | - | - | - | - | | 1.9756 | 870 | 0.0034 | - | - | - | - | - | | 1.9983 | 880 | 0.0061 | 0.4689 | 0.4689 | 0.4689 | 0.4690 | 0.4690 | | 2.0210 | 890 | 0.0096 | - | - | - | - | - | | 2.0437 | 900 | 0.0071 | - | - | - | - | - | | 2.0664 | 910 | 0.0101 | - | - | - | - | - | | 2.0891 | 920 | 0.0054 | - | - | - | - | - | | 2.1118 | 930 | 0.0039 | - | - | - | - | - | | 2.1345 | 940 | 0.0074 | - | - | - | - | - | | 2.1573 | 950 | 0.0044 | - | - | - | - | - | | 2.1800 | 960 | 0.0088 | - | - | - | - | - | | 2.2027 | 970 | 0.0096 | - | - | - | - | - | | 2.2254 | 980 | 0.0057 | - | - | - | - | - | | 2.2481 | 990 | 0.0063 | - | - | - | - | - | | 2.2708 | 1000 | 0.0026 | - | - | - | - | - | | 2.2935 | 1010 | 0.0032 | - | - | - | - | - | | 2.3162 | 1020 | 0.0027 | - | - | - | - | - | | 2.3389 | 1030 | 0.0041 | - | - | - | - | - | | 2.3616 | 1040 | 0.0052 | - | - | - | - | - | | 2.3843 | 1050 | 0.0035 | - | - | - | - | - | | 2.4070 | 1060 | 0.0025 | - | - | - | - | - | | 2.4297 | 1070 | 0.0059 | - | - | - | - | - | | 2.4525 | 1080 | 0.0048 | - | - | - | - | - | | 2.4752 | 1090 | 0.0064 | - | - | - | - | - | | 2.4979 | 1100 | 0.0066 | - | - | - | - | - | | 2.5206 | 1110 | 0.0078 | - | - | - | - | - | | 2.5433 | 1120 | 0.0057 | - | - | - | - | - | | 2.5660 | 1130 | 0.0026 | - | - | - | - | - | | 2.5887 | 1140 | 0.0021 | - | - | - | - | - | | 2.6114 | 1150 | 0.0021 | - | - | - | - | - | | 2.6341 | 1160 | 0.0047 | - | - | - | - | - | | 2.6568 | 1170 | 0.0034 | - | - | - | - | - | | 2.6795 | 1180 | 0.0044 | - | - | - | - | - | | 2.7022 | 1190 | 0.0058 | - | - | - | - | - | | 2.7250 | 1200 | 0.0043 | - | - | - | - | - | | 2.7477 | 1210 | 0.0056 | - | - | - | - | - | | 2.7704 | 1220 | 0.0076 | - | - | - | - | - | | 2.7931 | 1230 | 0.0063 | - | - | - | - | - | | 2.8158 | 1240 | 0.0033 | - | - | - | - | - | | 2.8385 | 1250 | 0.0025 | - | - | - | - | - | | 2.8612 | 1260 | 0.0019 | - | - | - | - | - | | 2.8839 | 1270 | 0.0052 | - | - | - | - | - | | 2.9066 | 1280 | 0.0021 | - | - | - | - | - | | 2.9293 | 1290 | 0.0041 | - | - | - | - | - | | 2.9520 | 1300 | 0.0035 | - | - | - | - | - | | 2.9747 | 1310 | 0.0044 | - | - | - | - | - | | 2.9974 | 1320 | 0.0035 | - | - | - | - | - | | **2.9997** | **1321** | **-** | **0.469** | **0.469** | **0.469** | **0.469** | **0.469** | | 3.0202 | 1330 | 0.0062 | - | - | - | - | - | | 3.0429 | 1340 | 0.0047 | - | - | - | - | - | | 3.0656 | 1350 | 0.008 | - | - | - | - | - | | 3.0883 | 1360 | 0.0033 | - | - | - | - | - | | 3.1110 | 1370 | 0.0025 | - | - | - | - | - | | 3.1337 | 1380 | 0.0069 | - | - | - | - | - | | 3.1564 | 1390 | 0.0035 | - | - | - | - | - | | 3.1791 | 1400 | 0.0085 | - | - | - | - | - | | 3.2018 | 1410 | 0.007 | - | - | - | - | - | | 3.2245 | 1420 | 0.007 | - | - | - | - | - | | 3.2472 | 1430 | 0.0052 | - | - | - | - | - | | 3.2699 | 1440 | 0.0019 | - | - | - | - | - | | 3.2926 | 1450 | 0.0022 | - | - | - | - | - | | 3.3154 | 1460 | 0.0019 | - | - | - | - | - | | 3.3381 | 1470 | 0.0028 | - | - | - | - | - | | 3.3608 | 1480 | 0.0042 | - | - | - | - | - | | 3.3835 | 1490 | 0.0023 | - | - | - | - | - | | 3.4062 | 1500 | 0.0024 | - | - | - | - | - | | 3.4289 | 1510 | 0.0036 | - | - | - | - | - | | 3.4516 | 1520 | 0.0038 | - | - | - | - | - | | 3.4743 | 1530 | 0.0063 | - | - | - | - | - | | 3.4970 | 1540 | 0.0044 | - | - | - | - | - | | 3.5197 | 1550 | 0.0064 | - | - | - | - | - | | 3.5424 | 1560 | 0.0053 | - | - | - | - | - | | 3.5651 | 1570 | 0.0019 | - | - | - | - | - | | 3.5879 | 1580 | 0.0019 | - | - | - | - | - | | 3.6106 | 1590 | 0.0017 | - | - | - | - | - | | 3.6333 | 1600 | 0.004 | - | - | - | - | - | | 3.6560 | 1610 | 0.0026 | - | - | - | - | - | | 3.6787 | 1620 | 0.0031 | - | - | - | - | - | | 3.7014 | 1630 | 0.0043 | - | - | - | - | - | | 3.7241 | 1640 | 0.0032 | - | - | - | - | - | | 3.7468 | 1650 | 0.0041 | - | - | - | - | - | | 3.7695 | 1660 | 0.0069 | - | - | - | - | - | | 3.7922 | 1670 | 0.0063 | - | - | - | - | - | | 3.8149 | 1680 | 0.0038 | - | - | - | - | - | | 3.8376 | 1690 | 0.0024 | - | - | - | - | - | | 3.8603 | 1700 | 0.0018 | - | - | - | - | - | | 3.8831 | 1710 | 0.0034 | - | - | - | - | - | | 3.9058 | 1720 | 0.0016 | - | - | - | - | - | | 3.9285 | 1730 | 0.0026 | - | - | - | - | - | | 3.9512 | 1740 | 0.0037 | - | - | - | - | - | | 3.9739 | 1750 | 0.0024 | - | - | - | - | - | | 3.9966 | 1760 | 0.0027 | 0.4689 | 0.4690 | 0.4689 | 0.4689 | 0.4690 | * The bold row denotes the saved checkpoint. </details> ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.3.0 - Transformers: 4.41.2 - PyTorch: 2.1.2+cu121 - Accelerate: 0.34.2 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# BGE base SQL Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("dat-ai/bge-base-for_text2sql") # Run inference sentences = [ '\n Given the Column informations, generate an SQL query for the following question:\n Column: Nomination | Actors Name | Film Name | Director | Country\n Question: What was the film Falling up nominated for?\n SQL Query: SELECT Nomination FROM table WHERE Film Name = Falling Up\n ', 'What was the film Falling up nominated for?', 'Who wrote an episode watched by 19.01 million US viewers?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 | |:--------------------|:-----------|:----------|:-----------|:-----------|:----------| | cosine_accuracy@1 | 0.4676 | 0.4678 | 0.4675 | 0.4677 | 0.4678 | | cosine_accuracy@3 | 0.4697 | 0.4697 | 0.4697 | 0.4696 | 0.4696 | | cosine_accuracy@5 | 0.4697 | 0.4697 | 0.4697 | 0.4698 | 0.4696 | | cosine_accuracy@10 | 0.4697 | 0.4697 | 0.4698 | 0.4698 | 0.4697 | | cosine_precision@1 | 0.4676 | 0.4678 | 0.4675 | 0.4677 | 0.4678 | | cosine_precision@3 | 0.1566 | 0.1566 | 0.1566 | 0.1565 | 0.1565 | | cosine_precision@5 | 0.0939 | 0.0939 | 0.0939 | 0.094 | 0.0939 | | cosine_precision@10 | 0.047 | 0.047 | 0.047 | 0.047 | 0.047 | | cosine_recall@1 | 0.4676 | 0.4678 | 0.4675 | 0.4677 | 0.4678 | | cosine_recall@3 | 0.4697 | 0.4697 | 0.4697 | 0.4696 | 0.4696 | | cosine_recall@5 | 0.4697 | 0.4697 | 0.4697 | 0.4698 | 0.4696 | | cosine_recall@10 | 0.4697 | 0.4697 | 0.4698 | 0.4698 | 0.4697 | | **cosine_ndcg@10** | **0.4689** | **0.469** | **0.4689** | **0.4689** | **0.469** | | cosine_mrr@10 | 0.4686 | 0.4687 | 0.4686 | 0.4687 | 0.4687 | | cosine_map@100 | 0.4686 | 0.4687 | 0.4686 | 0.4687 | 0.4687 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 56,355 training samples * Columns: <code>context</code> and <code>question</code> * Approximate statistics based on the first 1000 samples: | | context | question | |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 45 tokens</li><li>mean: 72.61 tokens</li><li>max: 196 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.41 tokens</li><li>max: 36 tokens</li></ul> | * Samples: | context | question | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | <code><br> Given the Column informations, generate an SQL query for the following question:<br> Column: State/territory | Text/background colour | Format | Current slogan | Current series | Notes<br> Question: Tell me what the notes are for South Australia <br> SQL Query: SELECT Notes FROM table WHERE Current slogan = SOUTH AUSTRALIA<br> </code> | <code>Tell me what the notes are for South Australia </code> | | <code><br> Given the Column informations, generate an SQL query for the following question:<br> Column: State/territory | Text/background colour | Format | Current slogan | Current series | Notes<br> Question: What is the current series where the new series began in June 2011?<br> SQL Query: SELECT Current series FROM table WHERE Notes = New series began in June 2011<br> </code> | <code>What is the current series where the new series began in June 2011?</code> | | <code><br> Given the Column informations, generate an SQL query for the following question:<br> Column: State/territory | Text/background colour | Format | Current slogan | Current series | Notes<br> Question: What is the format for South Australia?<br> SQL Query: SELECT Format FROM table WHERE State/territory = South Australia<br> </code> | <code>What is the format for South Australia?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512 ], "matryoshka_weights": [ 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 16 - `gradient_accumulation_steps`: 8 - `learning_rate`: 2e-05 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `fp16`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 8 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 | |:----------:|:--------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:| | 0.0227 | 10 | 1.773 | - | - | - | - | - | | 0.0454 | 20 | 1.3231 | - | - | - | - | - | | 0.0681 | 30 | 0.713 | - | - | - | - | - | | 0.0908 | 40 | 0.286 | - | - | - | - | - | | 0.1135 | 50 | 0.1013 | - | - | - | - | - | | 0.1362 | 60 | 0.0635 | - | - | - | - | - | | 0.1590 | 70 | 0.0453 | - | - | - | - | - | | 0.1817 | 80 | 0.041 | - | - | - | - | - | | 0.2044 | 90 | 0.039 | - | - | - | - | - | | 0.2271 | 100 | 0.027 | - | - | - | - | - | | 0.2498 | 110 | 0.0193 | - | - | - | - | - | | 0.2725 | 120 | 0.0167 | - | - | - | - | - | | 0.2952 | 130 | 0.016 | - | - | - | - | - | | 0.3179 | 140 | 0.0197 | - | - | - | - | - | | 0.3406 | 150 | 0.0217 | - | - | - | - | - | | 0.3633 | 160 | 0.0162 | - | - | - | - | - | | 0.3860 | 170 | 0.012 | - | - | - | - | - | | 0.4087 | 180 | 0.013 | - | - | - | - | - | | 0.4315 | 190 | 0.0255 | - | - | - | - | - | | 0.4542 | 200 | 0.0229 | - | - | - | - | - | | 0.4769 | 210 | 0.0181 | - | - | - | - | - | | 0.4996 | 220 | 0.0195 | - | - | - | - | - | | 0.5223 | 230 | 0.0199 | - | - | - | - | - | | 0.5450 | 240 | 0.0144 | - | - | - | - | - | | 0.5677 | 250 | 0.0102 | - | - | - | - | - | | 0.5904 | 260 | 0.0101 | - | - | - | - | - | | 0.6131 | 270 | 0.0095 | - | - | - | - | - | | 0.6358 | 280 | 0.0173 | - | - | - | - | - | | 0.6585 | 290 | 0.01 | - | - | - | - | - | | 0.6812 | 300 | 0.0129 | - | - | - | - | - | | 0.7039 | 310 | 0.0177 | - | - | - | - | - | | 0.7267 | 320 | 0.0106 | - | - | - | - | - | | 0.7494 | 330 | 0.0146 | - | - | - | - | - | | 0.7721 | 340 | 0.0185 | - | - | - | - | - | | 0.7948 | 350 | 0.0203 | - | - | - | - | - | | 0.8175 | 360 | 0.0146 | - | - | - | - | - | | 0.8402 | 370 | 0.0072 | - | - | - | - | - | | 0.8629 | 380 | 0.0102 | - | - | - | - | - | | 0.8856 | 390 | 0.0075 | - | - | - | - | - | | 0.9083 | 400 | 0.0064 | - | - | - | - | - | | 0.9310 | 410 | 0.0163 | - | - | - | - | - | | 0.9537 | 420 | 0.0069 | - | - | - | - | - | | 0.9764 | 430 | 0.0072 | - | - | - | - | - | | 0.9991 | 440 | 0.0147 | 0.4688 | 0.4689 | 0.4688 | 0.4689 | 0.4689 | | 1.0219 | 450 | 0.0151 | - | - | - | - | - | | 1.0446 | 460 | 0.0135 | - | - | - | - | - | | 1.0673 | 470 | 0.0189 | - | - | - | - | - | | 1.0900 | 480 | 0.0121 | - | - | - | - | - | | 1.1127 | 490 | 0.0064 | - | - | - | - | - | | 1.1354 | 500 | 0.0111 | - | - | - | - | - | | 1.1581 | 510 | 0.0103 | - | - | - | - | - | | 1.1808 | 520 | 0.0144 | - | - | - | - | - | | 1.2035 | 530 | 0.0151 | - | - | - | - | - | | 1.2262 | 540 | 0.0062 | - | - | - | - | - | | 1.2489 | 550 | 0.0104 | - | - | - | - | - | | 1.2716 | 560 | 0.0046 | - | - | - | - | - | | 1.2944 | 570 | 0.0056 | - | - | - | - | - | | 1.3171 | 580 | 0.0073 | - | - | - | - | - | | 1.3398 | 590 | 0.007 | - | - | - | - | - | | 1.3625 | 600 | 0.0074 | - | - | - | - | - | | 1.3852 | 610 | 0.0057 | - | - | - | - | - | | 1.4079 | 620 | 0.0052 | - | - | - | - | - | | 1.4306 | 630 | 0.0114 | - | - | - | - | - | | 1.4533 | 640 | 0.0075 | - | - | - | - | - | | 1.4760 | 650 | 0.0116 | - | - | - | - | - | | 1.4987 | 660 | 0.0092 | - | - | - | - | - | | 1.5214 | 670 | 0.0137 | - | - | - | - | - | | 1.5441 | 680 | 0.0066 | - | - | - | - | - | | 1.5668 | 690 | 0.0042 | - | - | - | - | - | | 1.5896 | 700 | 0.0036 | - | - | - | - | - | | 1.6123 | 710 | 0.0039 | - | - | - | - | - | | 1.6350 | 720 | 0.0065 | - | - | - | - | - | | 1.6577 | 730 | 0.0051 | - | - | - | - | - | | 1.6804 | 740 | 0.0054 | - | - | - | - | - | | 1.7031 | 750 | 0.0086 | - | - | - | - | - | | 1.7258 | 760 | 0.0062 | - | - | - | - | - | | 1.7485 | 770 | 0.0071 | - | - | - | - | - | | 1.7712 | 780 | 0.0108 | - | - | - | - | - | | 1.7939 | 790 | 0.009 | - | - | - | - | - | | 1.8166 | 800 | 0.0075 | - | - | - | - | - | | 1.8393 | 810 | 0.0039 | - | - | - | - | - | | 1.8620 | 820 | 0.0047 | - | - | - | - | - | | 1.8848 | 830 | 0.0037 | - | - | - | - | - | | 1.9075 | 840 | 0.0037 | - | - | - | - | - | | 1.9302 | 850 | 0.0064 | - | - | - | - | - | | 1.9529 | 860 | 0.0047 | - | - | - | - | - | | 1.9756 | 870 | 0.0034 | - | - | - | - | - | | 1.9983 | 880 | 0.0061 | 0.4689 | 0.4689 | 0.4689 | 0.4690 | 0.4690 | | 2.0210 | 890 | 0.0096 | - | - | - | - | - | | 2.0437 | 900 | 0.0071 | - | - | - | - | - | | 2.0664 | 910 | 0.0101 | - | - | - | - | - | | 2.0891 | 920 | 0.0054 | - | - | - | - | - | | 2.1118 | 930 | 0.0039 | - | - | - | - | - | | 2.1345 | 940 | 0.0074 | - | - | - | - | - | | 2.1573 | 950 | 0.0044 | - | - | - | - | - | | 2.1800 | 960 | 0.0088 | - | - | - | - | - | | 2.2027 | 970 | 0.0096 | - | - | - | - | - | | 2.2254 | 980 | 0.0057 | - | - | - | - | - | | 2.2481 | 990 | 0.0063 | - | - | - | - | - | | 2.2708 | 1000 | 0.0026 | - | - | - | - | - | | 2.2935 | 1010 | 0.0032 | - | - | - | - | - | | 2.3162 | 1020 | 0.0027 | - | - | - | - | - | | 2.3389 | 1030 | 0.0041 | - | - | - | - | - | | 2.3616 | 1040 | 0.0052 | - | - | - | - | - | | 2.3843 | 1050 | 0.0035 | - | - | - | - | - | | 2.4070 | 1060 | 0.0025 | - | - | - | - | - | | 2.4297 | 1070 | 0.0059 | - | - | - | - | - | | 2.4525 | 1080 | 0.0048 | - | - | - | - | - | | 2.4752 | 1090 | 0.0064 | - | - | - | - | - | | 2.4979 | 1100 | 0.0066 | - | - | - | - | - | | 2.5206 | 1110 | 0.0078 | - | - | - | - | - | | 2.5433 | 1120 | 0.0057 | - | - | - | - | - | | 2.5660 | 1130 | 0.0026 | - | - | - | - | - | | 2.5887 | 1140 | 0.0021 | - | - | - | - | - | | 2.6114 | 1150 | 0.0021 | - | - | - | - | - | | 2.6341 | 1160 | 0.0047 | - | - | - | - | - | | 2.6568 | 1170 | 0.0034 | - | - | - | - | - | | 2.6795 | 1180 | 0.0044 | - | - | - | - | - | | 2.7022 | 1190 | 0.0058 | - | - | - | - | - | | 2.7250 | 1200 | 0.0043 | - | - | - | - | - | | 2.7477 | 1210 | 0.0056 | - | - | - | - | - | | 2.7704 | 1220 | 0.0076 | - | - | - | - | - | | 2.7931 | 1230 | 0.0063 | - | - | - | - | - | | 2.8158 | 1240 | 0.0033 | - | - | - | - | - | | 2.8385 | 1250 | 0.0025 | - | - | - | - | - | | 2.8612 | 1260 | 0.0019 | - | - | - | - | - | | 2.8839 | 1270 | 0.0052 | - | - | - | - | - | | 2.9066 | 1280 | 0.0021 | - | - | - | - | - | | 2.9293 | 1290 | 0.0041 | - | - | - | - | - | | 2.9520 | 1300 | 0.0035 | - | - | - | - | - | | 2.9747 | 1310 | 0.0044 | - | - | - | - | - | | 2.9974 | 1320 | 0.0035 | - | - | - | - | - | | **2.9997** | **1321** | **-** | **0.469** | **0.469** | **0.469** | **0.469** | **0.469** | | 3.0202 | 1330 | 0.0062 | - | - | - | - | - | | 3.0429 | 1340 | 0.0047 | - | - | - | - | - | | 3.0656 | 1350 | 0.008 | - | - | - | - | - | | 3.0883 | 1360 | 0.0033 | - | - | - | - | - | | 3.1110 | 1370 | 0.0025 | - | - | - | - | - | | 3.1337 | 1380 | 0.0069 | - | - | - | - | - | | 3.1564 | 1390 | 0.0035 | - | - | - | - | - | | 3.1791 | 1400 | 0.0085 | - | - | - | - | - | | 3.2018 | 1410 | 0.007 | - | - | - | - | - | | 3.2245 | 1420 | 0.007 | - | - | - | - | - | | 3.2472 | 1430 | 0.0052 | - | - | - | - | - | | 3.2699 | 1440 | 0.0019 | - | - | - | - | - | | 3.2926 | 1450 | 0.0022 | - | - | - | - | - | | 3.3154 | 1460 | 0.0019 | - | - | - | - | - | | 3.3381 | 1470 | 0.0028 | - | - | - | - | - | | 3.3608 | 1480 | 0.0042 | - | - | - | - | - | | 3.3835 | 1490 | 0.0023 | - | - | - | - | - | | 3.4062 | 1500 | 0.0024 | - | - | - | - | - | | 3.4289 | 1510 | 0.0036 | - | - | - | - | - | | 3.4516 | 1520 | 0.0038 | - | - | - | - | - | | 3.4743 | 1530 | 0.0063 | - | - | - | - | - | | 3.4970 | 1540 | 0.0044 | - | - | - | - | - | | 3.5197 | 1550 | 0.0064 | - | - | - | - | - | | 3.5424 | 1560 | 0.0053 | - | - | - | - | - | | 3.5651 | 1570 | 0.0019 | - | - | - | - | - | | 3.5879 | 1580 | 0.0019 | - | - | - | - | - | | 3.6106 | 1590 | 0.0017 | - | - | - | - | - | | 3.6333 | 1600 | 0.004 | - | - | - | - | - | | 3.6560 | 1610 | 0.0026 | - | - | - | - | - | | 3.6787 | 1620 | 0.0031 | - | - | - | - | - | | 3.7014 | 1630 | 0.0043 | - | - | - | - | - | | 3.7241 | 1640 | 0.0032 | - | - | - | - | - | | 3.7468 | 1650 | 0.0041 | - | - | - | - | - | | 3.7695 | 1660 | 0.0069 | - | - | - | - | - | | 3.7922 | 1670 | 0.0063 | - | - | - | - | - | | 3.8149 | 1680 | 0.0038 | - | - | - | - | - | | 3.8376 | 1690 | 0.0024 | - | - | - | - | - | | 3.8603 | 1700 | 0.0018 | - | - | - | - | - | | 3.8831 | 1710 | 0.0034 | - | - | - | - | - | | 3.9058 | 1720 | 0.0016 | - | - | - | - | - | | 3.9285 | 1730 | 0.0026 | - | - | - | - | - | | 3.9512 | 1740 | 0.0037 | - | - | - | - | - | | 3.9739 | 1750 | 0.0024 | - | - | - | - | - | | 3.9966 | 1760 | 0.0027 | 0.4689 | 0.4690 | 0.4689 | 0.4689 | 0.4690 | * The bold row denotes the saved checkpoint. </details> ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.3.0 - Transformers: 4.41.2 - PyTorch: 2.1.2+cu121 - Accelerate: 0.34.2 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "BAAI/bge-base-en-v1.5", "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:56355", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "\n Given the Column informations, generate an SQL query for the following question:\n Column: Finishing position | Points awarded (Platinum) | Points awarded (Gold) | Points awarded (Silver) | Points awarded (Satellite)\n Question: How many platinum points were awarded when 6 gold points were awarded?\n SQL Query: SELECT MAX Points awarded (Platinum) FROM table WHERE Points awarded (Gold) = 6\n ", "sentences": ["How many platinum points were awarded when 6 gold points were awarded?", "Did any team score games that totaled up to 860.5?", "Who had the pole position at the German Grand Prix?"]}, {"source_sentence": "\n Given the Column informations, generate an SQL query for the following question:\n Column: Player | No. | Nationality | Position | Years in Toronto | School/Club Team\n Question: What's Dell Curry nationality?\n SQL Query: SELECT Nationality FROM table WHERE Player = Dell Curry\n ", "sentences": ["What is the title when original air date is may15,2008?", "What's Dell Curry nationality?", "What's the minimum total attendance of the Premier League association football?"]}, {"source_sentence": "\n Given the Column informations, generate an SQL query for the following question:\n Column: Sepal length | Sepal width | Petal length | Petal width | Species\n Question: Name the species when petal width is 2.0 and petal length is 4.9\n SQL Query: SELECT Species FROM table WHERE Petal width = 2.0 AND Petal length = 4.9\n ", "sentences": ["What year was the championship in Wimbledon (2)?", "Who wrote Series 38?", "Name the species when petal width is 2.0 and petal length is 4.9"]}, {"source_sentence": "\n Given the Column informations, generate an SQL query for the following question:\n Column: No. in season | No. in series | Title | Directed by | Written by | Original air date | U.S. viewers (million)\n Question: How many millions of U.S. viewers watched the episode that first aired on March 31, 2013?\n SQL Query: SELECT U.S. viewers (million) FROM table WHERE Original air date = March 31, 2013\n ", "sentences": ["How many millions of U.S. viewers watched the episode that first aired on March 31, 2013?", "How many viewers were there for the premier with 34", "What is Bruce Cerone overall?"]}, {"source_sentence": "\n Given the Column informations, generate an SQL query for the following question:\n Column: Nomination | Actors Name | Film Name | Director | Country\n Question: What was the film Falling up nominated for?\n SQL Query: SELECT Nomination FROM table WHERE Film Name = Falling Up\n ", "sentences": ["What was the film Falling up nominated for?", "Who wrote an episode watched by 19.01 million US viewers?", "What player is on the Montreal Alouettes CFl team?"]}], "model-index": [{"name": "BGE base SQL Matryoshka", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.4676281647562665, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.4697065121551833, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.4697065121551833, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.4697065121551833, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.4676281647562665, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.15656883738506108, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.09394130243103667, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.046970651215518334, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.4676281647562665, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.4697065121551833, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.4697065121551833, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.4697065121551833, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.46889822604232273, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.4686148549355503, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.4686406337350657, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.46775412520468573, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.4697065121551833, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.4697065121551833, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.4697065121551833, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.46775412520468573, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.15656883738506108, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.09394130243103667, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.046970651215518334, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.46775412520468573, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.4697065121551833, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.4697065121551833, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.4697065121551833, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4689612062665323, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.46869882856782963, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.4687237988187482, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.46750220430784734, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.4697065121551833, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.4697065121551833, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.46976949237939286, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.46750220430784734, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.15656883738506108, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.09394130243103667, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.04697694923793929, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.46750220430784734, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.4697065121551833, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.4697065121551833, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.46976949237939286, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4688906637675648, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.4685833648234455, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.468602927990512, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.46769114498047615, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.4696435319309737, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.46976949237939286, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.46976949237939286, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.46769114498047615, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.1565478439769912, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.09395389847587858, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.04697694923793929, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.46769114498047615, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.4696435319309737, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.46976949237939286, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.46976949237939286, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4689469541953942, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.468661040433304, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.4686773555936371, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.46775412520468573, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.4696435319309737, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.4696435319309737, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.4697065121551833, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.46775412520468573, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.1565478439769912, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.09392870638619474, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.046970651215518334, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.46775412520468573, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.4696435319309737, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.4696435319309737, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.4697065121551833, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4689578301883334, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.468696204391821, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.46870770760703784, "name": "Cosine Map@100"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,959
MultiBertGunjanPatrick/multiberts-seed-4-2000k
MultiBertGunjanPatrick
null
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-10-04T05:12:58+00:00
99
0
--- datasets: - bookcorpus - wikipedia language: en license: apache-2.0 tags: - exbert - multiberts - multiberts-seed-4 --- # MultiBERTs Seed 4 Checkpoint 2000k (uncased) Seed 4 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-2000k') model = BertModel.from_pretrained("multiberts-seed-4-2000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
null
Non_BioNLP
# MultiBERTs Seed 4 Checkpoint 2000k (uncased) Seed 4 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-2000k') model = BertModel.from_pretrained("multiberts-seed-4-2000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-4"]}
task
[ "QUESTION_ANSWERING" ]
42,960
Nutanix/bge-base-mbpp
Nutanix
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:317521", "loss:TripletLoss", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-08-19T22:08:25Z
2024-08-19T22:08:42+00:00
7
0
--- base_model: BAAI/bge-base-en-v1.5 datasets: [] language: [] library_name: sentence-transformers metrics: - cosine_accuracy - dot_accuracy - manhattan_accuracy - euclidean_accuracy - max_accuracy pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:317521 - loss:TripletLoss widget: - source_sentence: Write a function to extract every specified element from a given two dimensional list. sentences: - "def nCr_mod_p(n, r, p): \r\n\tif (r > n- r): \r\n\t\tr = n - r \r\n\tC = [0 for\ \ i in range(r + 1)] \r\n\tC[0] = 1 \r\n\tfor i in range(1, n + 1): \r\n\t\tfor\ \ j in range(min(i, r), 0, -1): \r\n\t\t\tC[j] = (C[j] + C[j-1]) % p \r\n\treturn\ \ C[r] " - "import cmath\r\ndef len_complex(a,b):\r\n cn=complex(a,b)\r\n length=abs(cn)\r\ \n return length" - "def specified_element(nums, N):\r\n result = [i[N] for i in nums]\r\n return\ \ result" - source_sentence: Write a python function to find the kth element in an array containing odd elements first and then even elements. sentences: - "def get_Number(n, k): \r\n arr = [0] * n; \r\n i = 0; \r\n odd = 1;\ \ \r\n while (odd <= n): \r\n arr[i] = odd; \r\n i += 1; \r\ \n odd += 2;\r\n even = 2; \r\n while (even <= n): \r\n arr[i]\ \ = even; \r\n i += 1;\r\n even += 2; \r\n return arr[k - 1]; " - "def sort_matrix(M):\r\n result = sorted(M, key=sum)\r\n return result" - "INT_BITS = 32\r\ndef left_Rotate(n,d): \r\n return (n << d)|(n >> (INT_BITS\ \ - d)) " - source_sentence: Write a function to remove all the words with k length in the given string. sentences: - "def remove_tuples(test_list, K):\r\n res = [ele for ele in test_list if len(ele)\ \ != K]\r\n return (res) " - "def is_Sub_Array(A,B,n,m): \r\n i = 0; j = 0; \r\n while (i < n and j <\ \ m): \r\n if (A[i] == B[j]): \r\n i += 1; \r\n \ \ j += 1; \r\n if (j == m): \r\n return True; \r\n\ \ else: \r\n i = i - j + 1; \r\n j = 0; \r\n\ \ return False; " - "def remove_length(test_str, K):\r\n temp = test_str.split()\r\n res = [ele\ \ for ele in temp if len(ele) != K]\r\n res = ' '.join(res)\r\n return (res) " - source_sentence: Write a function to find the occurence of characters 'std' in the given string 1. list item 1. list item 1. list item 2. list item 2. list item 2. list item sentences: - "def magic_square_test(my_matrix):\r\n iSize = len(my_matrix[0])\r\n sum_list\ \ = []\r\n sum_list.extend([sum (lines) for lines in my_matrix]) \r\n \ \ for col in range(iSize):\r\n sum_list.append(sum(row[col] for row in\ \ my_matrix))\r\n result1 = 0\r\n for i in range(0,iSize):\r\n result1\ \ +=my_matrix[i][i]\r\n sum_list.append(result1) \r\n result2 = 0\r\ \n for i in range(iSize-1,-1,-1):\r\n result2 +=my_matrix[i][i]\r\n\ \ sum_list.append(result2)\r\n if len(set(sum_list))>1:\r\n return\ \ False\r\n return True" - "def count_occurance(s):\r\n count=0\r\n for i in range(len(s)):\r\n if (s[i]==\ \ 's' and s[i+1]=='t' and s[i+2]== 'd'):\r\n count = count + 1\r\n return\ \ count" - "def power(a,b):\r\n\tif b==0:\r\n\t\treturn 1\r\n\telif a==0:\r\n\t\treturn 0\r\ \n\telif b==1:\r\n\t\treturn a\r\n\telse:\r\n\t\treturn a*power(a,b-1)" - source_sentence: Write a function to find sum and average of first n natural numbers. sentences: - "def long_words(n, str):\r\n word_len = []\r\n txt = str.split(\" \")\r\n\ \ for x in txt:\r\n if len(x) > n:\r\n word_len.append(x)\r\ \n return word_len\t" - "def long_words(n, str):\r\n word_len = []\r\n txt = str.split(\" \")\r\n\ \ for x in txt:\r\n if len(x) > n:\r\n word_len.append(x)\r\ \n return word_len\t" - "def sum_average(number):\r\n total = 0\r\n for value in range(1, number + 1):\r\ \n total = total + value\r\n average = total / number\r\n return (total,average)" model-index: - name: SentenceTransformer based on BAAI/bge-base-en-v1.5 results: - task: type: triplet name: Triplet dataset: name: sts dev type: sts-dev metrics: - type: cosine_accuracy value: 0.997141408425864 name: Cosine Accuracy - type: dot_accuracy value: 0.0028145001873883936 name: Dot Accuracy - type: manhattan_accuracy value: 0.99605382088609 name: Manhattan Accuracy - type: euclidean_accuracy value: 0.997141408425864 name: Euclidean Accuracy - type: max_accuracy value: 0.997141408425864 name: Max Accuracy --- # SentenceTransformer based on BAAI/bge-base-en-v1.5 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Nutanix/bge-base-mbpp") # Run inference sentences = [ 'Write a function to find sum and average of first n natural numbers.', 'def sum_average(number):\r\n total = 0\r\n for value in range(1, number + 1):\r\n total = total + value\r\n average = total / number\r\n return (total,average)', 'def long_words(n, str):\r\n word_len = []\r\n txt = str.split(" ")\r\n for x in txt:\r\n if len(x) > n:\r\n word_len.append(x)\r\n return word_len\t', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Dataset: `sts-dev` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:-------------------|:-----------| | cosine_accuracy | 0.9971 | | dot_accuracy | 0.0028 | | manhattan_accuracy | 0.9961 | | euclidean_accuracy | 0.9971 | | **max_accuracy** | **0.9971** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 1 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | sts-dev_max_accuracy | |:------:|:-----:|:-------------:|:--------------------:| | 0.0050 | 100 | 4.3364 | - | | 0.0101 | 200 | 4.122 | - | | 0.0151 | 300 | 4.0825 | - | | 0.0202 | 400 | 4.0381 | - | | 0.0252 | 500 | 4.015 | - | | 0.0302 | 600 | 3.9996 | - | | 0.0353 | 700 | 3.9567 | - | | 0.0403 | 800 | 3.9593 | - | | 0.0453 | 900 | 3.9456 | - | | 0.0504 | 1000 | 3.938 | - | | 0.0554 | 1100 | 3.933 | - | | 0.0605 | 1200 | 3.905 | - | | 0.0655 | 1300 | 3.906 | - | | 0.0705 | 1400 | 3.9073 | - | | 0.0756 | 1500 | 3.9193 | - | | 0.0806 | 1600 | 3.9016 | - | | 0.0857 | 1700 | 3.8899 | - | | 0.0907 | 1800 | 3.9 | - | | 0.0957 | 1900 | 3.8983 | - | | 0.1008 | 2000 | 3.876 | - | | 0.1058 | 2100 | 3.9001 | - | | 0.1109 | 2200 | 3.8818 | - | | 0.1159 | 2300 | 3.8788 | - | | 0.1209 | 2400 | 3.8815 | - | | 0.1260 | 2500 | 3.8664 | - | | 0.1310 | 2600 | 3.854 | - | | 0.1360 | 2700 | 3.8674 | - | | 0.1411 | 2800 | 3.8525 | - | | 0.1461 | 2900 | 3.8733 | - | | 0.1512 | 3000 | 3.8538 | - | | 0.1562 | 3100 | 3.8348 | - | | 0.1612 | 3200 | 3.8378 | - | | 0.1663 | 3300 | 3.8504 | - | | 0.1713 | 3400 | 3.8409 | - | | 0.1764 | 3500 | 3.8436 | - | | 0.1814 | 3600 | 3.8422 | - | | 0.1864 | 3700 | 3.8629 | - | | 0.1915 | 3800 | 3.8589 | - | | 0.1965 | 3900 | 3.8572 | - | | 0.2016 | 4000 | 3.8309 | - | | 0.2066 | 4100 | 3.8465 | - | | 0.2116 | 4200 | 3.8311 | - | | 0.2167 | 4300 | 3.8124 | - | | 0.2217 | 4400 | 3.8412 | - | | 0.2267 | 4500 | 3.8228 | - | | 0.2318 | 4600 | 3.8012 | - | | 0.2368 | 4700 | 3.8185 | - | | 0.2419 | 4800 | 3.8242 | - | | 0.2469 | 4900 | 3.7917 | - | | 0.2519 | 5000 | 3.8022 | - | | 0.2570 | 5100 | 3.7991 | - | | 0.2620 | 5200 | 3.7943 | - | | 0.2671 | 5300 | 3.7874 | - | | 0.2721 | 5400 | 3.7987 | - | | 0.2771 | 5500 | 3.7982 | - | | 0.2822 | 5600 | 3.7789 | - | | 0.2872 | 5700 | 3.7837 | - | | 0.2923 | 5800 | 3.7762 | - | | 0.2973 | 5900 | 3.7854 | - | | 0.3023 | 6000 | 3.7719 | - | | 0.3074 | 6100 | 3.7925 | - | | 0.3124 | 6200 | 3.7795 | - | | 0.3174 | 6300 | 3.7725 | - | | 0.3225 | 6400 | 3.7897 | - | | 0.3275 | 6500 | 3.773 | - | | 0.3326 | 6600 | 3.7803 | - | | 0.3376 | 6700 | 3.7476 | - | | 0.3426 | 6800 | 3.7585 | - | | 0.3477 | 6900 | 3.7426 | - | | 0.3527 | 7000 | 3.7529 | - | | 0.3578 | 7100 | 3.7745 | - | | 0.3628 | 7200 | 3.7771 | - | | 0.3678 | 7300 | 3.7598 | - | | 0.3729 | 7400 | 3.7428 | - | | 0.3779 | 7500 | 3.7409 | - | | 0.3829 | 7600 | 3.7569 | - | | 0.3880 | 7700 | 3.7517 | - | | 0.3930 | 7800 | 3.7484 | - | | 0.3981 | 7900 | 3.7415 | - | | 0.4031 | 8000 | 3.7228 | - | | 0.4081 | 8100 | 3.7569 | - | | 0.4132 | 8200 | 3.7421 | - | | 0.4182 | 8300 | 3.7233 | - | | 0.4233 | 8400 | 3.72 | - | | 0.4283 | 8500 | 3.7431 | - | | 0.4333 | 8600 | 3.7258 | - | | 0.4384 | 8700 | 3.73 | - | | 0.4434 | 8800 | 3.7286 | - | | 0.4485 | 8900 | 3.7487 | - | | 0.4535 | 9000 | 3.7359 | - | | 0.4585 | 9100 | 3.7387 | - | | 0.4636 | 9200 | 3.7135 | - | | 0.4686 | 9300 | 3.7219 | - | | 0.4736 | 9400 | 3.7189 | - | | 0.4787 | 9500 | 3.7234 | - | | 0.4837 | 9600 | 3.7333 | - | | 0.4888 | 9700 | 3.7027 | - | | 0.4938 | 9800 | 3.7358 | - | | 0.4988 | 9900 | 3.6959 | - | | 0.5039 | 10000 | 3.7051 | - | | 0.5089 | 10100 | 3.7205 | - | | 0.5140 | 10200 | 3.711 | - | | 0.5190 | 10300 | 3.6898 | - | | 0.5240 | 10400 | 3.7103 | - | | 0.5291 | 10500 | 3.695 | - | | 0.5341 | 10600 | 3.7108 | - | | 0.5392 | 10700 | 3.7226 | - | | 0.5442 | 10800 | 3.7004 | - | | 0.5492 | 10900 | 3.736 | - | | 0.5543 | 11000 | 3.7135 | - | | 0.5593 | 11100 | 3.7148 | - | | 0.5643 | 11200 | 3.7285 | - | | 0.5694 | 11300 | 3.694 | - | | 0.5744 | 11400 | 3.6913 | - | | 0.5795 | 11500 | 3.69 | - | | 0.5845 | 11600 | 3.7249 | - | | 0.5895 | 11700 | 3.6907 | - | | 0.5946 | 11800 | 3.7135 | - | | 0.5996 | 11900 | 3.7172 | - | | 0.6047 | 12000 | 3.7087 | - | | 0.6097 | 12100 | 3.7045 | - | | 0.6147 | 12200 | 3.7043 | - | | 0.6198 | 12300 | 3.693 | - | | 0.6248 | 12400 | 3.6982 | - | | 0.6298 | 12500 | 3.6922 | - | | 0.6349 | 12600 | 3.6857 | - | | 0.6399 | 12700 | 3.6834 | - | | 0.6450 | 12800 | 3.7052 | - | | 0.6500 | 12900 | 3.6935 | - | | 0.6550 | 13000 | 3.6736 | - | | 0.6601 | 13100 | 3.7026 | - | | 0.6651 | 13200 | 3.6846 | - | | 0.6702 | 13300 | 3.704 | - | | 0.6752 | 13400 | 3.6818 | - | | 0.6802 | 13500 | 3.7075 | - | | 0.6853 | 13600 | 3.6688 | - | | 0.6903 | 13700 | 3.6933 | - | | 0.6954 | 13800 | 3.6971 | - | | 0.7004 | 13900 | 3.6785 | - | | 0.7054 | 14000 | 3.7088 | - | | 0.7105 | 14100 | 3.7127 | - | | 0.7155 | 14200 | 3.6996 | - | | 0.7205 | 14300 | 3.6901 | - | | 0.7256 | 14400 | 3.6914 | - | | 0.7306 | 14500 | 3.6659 | - | | 0.7357 | 14600 | 3.6859 | - | | 0.7407 | 14700 | 3.68 | - | | 0.7457 | 14800 | 3.6874 | - | | 0.7508 | 14900 | 3.6854 | - | | 0.7558 | 15000 | 3.671 | - | | 0.7609 | 15100 | 3.6909 | - | | 0.7659 | 15200 | 3.7014 | - | | 0.7709 | 15300 | 3.6828 | - | | 0.7760 | 15400 | 3.6773 | - | | 0.7810 | 15500 | 3.6863 | - | | 0.7861 | 15600 | 3.6892 | - | | 0.7911 | 15700 | 3.6864 | - | | 0.7961 | 15800 | 3.6586 | - | | 0.8012 | 15900 | 3.6639 | - | | 0.8062 | 16000 | 3.6843 | - | | 0.8112 | 16100 | 3.6865 | - | | 0.8163 | 16200 | 3.678 | - | | 0.8213 | 16300 | 3.6825 | - | | 0.8264 | 16400 | 3.7068 | - | | 0.8314 | 16500 | 3.6886 | - | | 0.8364 | 16600 | 3.6905 | - | | 0.8415 | 16700 | 3.6905 | - | | 0.8465 | 16800 | 3.6677 | - | | 0.8516 | 16900 | 3.684 | - | | 0.8566 | 17000 | 3.6872 | - | | 0.8616 | 17100 | 3.6849 | - | | 0.8667 | 17200 | 3.662 | - | | 0.8717 | 17300 | 3.6887 | - | | 0.8768 | 17400 | 3.6999 | - | | 0.8818 | 17500 | 3.6916 | - | | 0.8868 | 17600 | 3.6853 | - | | 0.8919 | 17700 | 3.6971 | - | | 0.8969 | 17800 | 3.6846 | - | | 0.9019 | 17900 | 3.6701 | - | | 0.9070 | 18000 | 3.6911 | - | | 0.9120 | 18100 | 3.7021 | - | | 0.9171 | 18200 | 3.6851 | - | | 0.9221 | 18300 | 3.6924 | - | | 0.9271 | 18400 | 3.6644 | - | | 0.9322 | 18500 | 3.6674 | - | | 0.9372 | 18600 | 3.6962 | - | | 0.9423 | 18700 | 3.6759 | - | | 0.9473 | 18800 | 3.6839 | - | | 0.9523 | 18900 | 3.6822 | - | | 0.9574 | 19000 | 3.6947 | - | | 0.9624 | 19100 | 3.6589 | - | | 0.9674 | 19200 | 3.6817 | - | | 0.9725 | 19300 | 3.6754 | - | | 0.9775 | 19400 | 3.6947 | - | | 0.9826 | 19500 | 3.6785 | - | | 0.9876 | 19600 | 3.6776 | - | | 0.9926 | 19700 | 3.6791 | - | | 0.9977 | 19800 | 3.6795 | - | | 1.0 | 19846 | - | 0.9971 | </details> ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.0.1 - Transformers: 4.40.0 - PyTorch: 2.3.0+cu121 - Accelerate: 0.33.0 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### TripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on BAAI/bge-base-en-v1.5 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Nutanix/bge-base-mbpp") # Run inference sentences = [ 'Write a function to find sum and average of first n natural numbers.', 'def sum_average(number):\r\n total = 0\r\n for value in range(1, number + 1):\r\n total = total + value\r\n average = total / number\r\n return (total,average)', 'def long_words(n, str):\r\n word_len = []\r\n txt = str.split(" ")\r\n for x in txt:\r\n if len(x) > n:\r\n word_len.append(x)\r\n return word_len\t', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Dataset: `sts-dev` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:-------------------|:-----------| | cosine_accuracy | 0.9971 | | dot_accuracy | 0.0028 | | manhattan_accuracy | 0.9961 | | euclidean_accuracy | 0.9971 | | **max_accuracy** | **0.9971** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 1 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | sts-dev_max_accuracy | |:------:|:-----:|:-------------:|:--------------------:| | 0.0050 | 100 | 4.3364 | - | | 0.0101 | 200 | 4.122 | - | | 0.0151 | 300 | 4.0825 | - | | 0.0202 | 400 | 4.0381 | - | | 0.0252 | 500 | 4.015 | - | | 0.0302 | 600 | 3.9996 | - | | 0.0353 | 700 | 3.9567 | - | | 0.0403 | 800 | 3.9593 | - | | 0.0453 | 900 | 3.9456 | - | | 0.0504 | 1000 | 3.938 | - | | 0.0554 | 1100 | 3.933 | - | | 0.0605 | 1200 | 3.905 | - | | 0.0655 | 1300 | 3.906 | - | | 0.0705 | 1400 | 3.9073 | - | | 0.0756 | 1500 | 3.9193 | - | | 0.0806 | 1600 | 3.9016 | - | | 0.0857 | 1700 | 3.8899 | - | | 0.0907 | 1800 | 3.9 | - | | 0.0957 | 1900 | 3.8983 | - | | 0.1008 | 2000 | 3.876 | - | | 0.1058 | 2100 | 3.9001 | - | | 0.1109 | 2200 | 3.8818 | - | | 0.1159 | 2300 | 3.8788 | - | | 0.1209 | 2400 | 3.8815 | - | | 0.1260 | 2500 | 3.8664 | - | | 0.1310 | 2600 | 3.854 | - | | 0.1360 | 2700 | 3.8674 | - | | 0.1411 | 2800 | 3.8525 | - | | 0.1461 | 2900 | 3.8733 | - | | 0.1512 | 3000 | 3.8538 | - | | 0.1562 | 3100 | 3.8348 | - | | 0.1612 | 3200 | 3.8378 | - | | 0.1663 | 3300 | 3.8504 | - | | 0.1713 | 3400 | 3.8409 | - | | 0.1764 | 3500 | 3.8436 | - | | 0.1814 | 3600 | 3.8422 | - | | 0.1864 | 3700 | 3.8629 | - | | 0.1915 | 3800 | 3.8589 | - | | 0.1965 | 3900 | 3.8572 | - | | 0.2016 | 4000 | 3.8309 | - | | 0.2066 | 4100 | 3.8465 | - | | 0.2116 | 4200 | 3.8311 | - | | 0.2167 | 4300 | 3.8124 | - | | 0.2217 | 4400 | 3.8412 | - | | 0.2267 | 4500 | 3.8228 | - | | 0.2318 | 4600 | 3.8012 | - | | 0.2368 | 4700 | 3.8185 | - | | 0.2419 | 4800 | 3.8242 | - | | 0.2469 | 4900 | 3.7917 | - | | 0.2519 | 5000 | 3.8022 | - | | 0.2570 | 5100 | 3.7991 | - | | 0.2620 | 5200 | 3.7943 | - | | 0.2671 | 5300 | 3.7874 | - | | 0.2721 | 5400 | 3.7987 | - | | 0.2771 | 5500 | 3.7982 | - | | 0.2822 | 5600 | 3.7789 | - | | 0.2872 | 5700 | 3.7837 | - | | 0.2923 | 5800 | 3.7762 | - | | 0.2973 | 5900 | 3.7854 | - | | 0.3023 | 6000 | 3.7719 | - | | 0.3074 | 6100 | 3.7925 | - | | 0.3124 | 6200 | 3.7795 | - | | 0.3174 | 6300 | 3.7725 | - | | 0.3225 | 6400 | 3.7897 | - | | 0.3275 | 6500 | 3.773 | - | | 0.3326 | 6600 | 3.7803 | - | | 0.3376 | 6700 | 3.7476 | - | | 0.3426 | 6800 | 3.7585 | - | | 0.3477 | 6900 | 3.7426 | - | | 0.3527 | 7000 | 3.7529 | - | | 0.3578 | 7100 | 3.7745 | - | | 0.3628 | 7200 | 3.7771 | - | | 0.3678 | 7300 | 3.7598 | - | | 0.3729 | 7400 | 3.7428 | - | | 0.3779 | 7500 | 3.7409 | - | | 0.3829 | 7600 | 3.7569 | - | | 0.3880 | 7700 | 3.7517 | - | | 0.3930 | 7800 | 3.7484 | - | | 0.3981 | 7900 | 3.7415 | - | | 0.4031 | 8000 | 3.7228 | - | | 0.4081 | 8100 | 3.7569 | - | | 0.4132 | 8200 | 3.7421 | - | | 0.4182 | 8300 | 3.7233 | - | | 0.4233 | 8400 | 3.72 | - | | 0.4283 | 8500 | 3.7431 | - | | 0.4333 | 8600 | 3.7258 | - | | 0.4384 | 8700 | 3.73 | - | | 0.4434 | 8800 | 3.7286 | - | | 0.4485 | 8900 | 3.7487 | - | | 0.4535 | 9000 | 3.7359 | - | | 0.4585 | 9100 | 3.7387 | - | | 0.4636 | 9200 | 3.7135 | - | | 0.4686 | 9300 | 3.7219 | - | | 0.4736 | 9400 | 3.7189 | - | | 0.4787 | 9500 | 3.7234 | - | | 0.4837 | 9600 | 3.7333 | - | | 0.4888 | 9700 | 3.7027 | - | | 0.4938 | 9800 | 3.7358 | - | | 0.4988 | 9900 | 3.6959 | - | | 0.5039 | 10000 | 3.7051 | - | | 0.5089 | 10100 | 3.7205 | - | | 0.5140 | 10200 | 3.711 | - | | 0.5190 | 10300 | 3.6898 | - | | 0.5240 | 10400 | 3.7103 | - | | 0.5291 | 10500 | 3.695 | - | | 0.5341 | 10600 | 3.7108 | - | | 0.5392 | 10700 | 3.7226 | - | | 0.5442 | 10800 | 3.7004 | - | | 0.5492 | 10900 | 3.736 | - | | 0.5543 | 11000 | 3.7135 | - | | 0.5593 | 11100 | 3.7148 | - | | 0.5643 | 11200 | 3.7285 | - | | 0.5694 | 11300 | 3.694 | - | | 0.5744 | 11400 | 3.6913 | - | | 0.5795 | 11500 | 3.69 | - | | 0.5845 | 11600 | 3.7249 | - | | 0.5895 | 11700 | 3.6907 | - | | 0.5946 | 11800 | 3.7135 | - | | 0.5996 | 11900 | 3.7172 | - | | 0.6047 | 12000 | 3.7087 | - | | 0.6097 | 12100 | 3.7045 | - | | 0.6147 | 12200 | 3.7043 | - | | 0.6198 | 12300 | 3.693 | - | | 0.6248 | 12400 | 3.6982 | - | | 0.6298 | 12500 | 3.6922 | - | | 0.6349 | 12600 | 3.6857 | - | | 0.6399 | 12700 | 3.6834 | - | | 0.6450 | 12800 | 3.7052 | - | | 0.6500 | 12900 | 3.6935 | - | | 0.6550 | 13000 | 3.6736 | - | | 0.6601 | 13100 | 3.7026 | - | | 0.6651 | 13200 | 3.6846 | - | | 0.6702 | 13300 | 3.704 | - | | 0.6752 | 13400 | 3.6818 | - | | 0.6802 | 13500 | 3.7075 | - | | 0.6853 | 13600 | 3.6688 | - | | 0.6903 | 13700 | 3.6933 | - | | 0.6954 | 13800 | 3.6971 | - | | 0.7004 | 13900 | 3.6785 | - | | 0.7054 | 14000 | 3.7088 | - | | 0.7105 | 14100 | 3.7127 | - | | 0.7155 | 14200 | 3.6996 | - | | 0.7205 | 14300 | 3.6901 | - | | 0.7256 | 14400 | 3.6914 | - | | 0.7306 | 14500 | 3.6659 | - | | 0.7357 | 14600 | 3.6859 | - | | 0.7407 | 14700 | 3.68 | - | | 0.7457 | 14800 | 3.6874 | - | | 0.7508 | 14900 | 3.6854 | - | | 0.7558 | 15000 | 3.671 | - | | 0.7609 | 15100 | 3.6909 | - | | 0.7659 | 15200 | 3.7014 | - | | 0.7709 | 15300 | 3.6828 | - | | 0.7760 | 15400 | 3.6773 | - | | 0.7810 | 15500 | 3.6863 | - | | 0.7861 | 15600 | 3.6892 | - | | 0.7911 | 15700 | 3.6864 | - | | 0.7961 | 15800 | 3.6586 | - | | 0.8012 | 15900 | 3.6639 | - | | 0.8062 | 16000 | 3.6843 | - | | 0.8112 | 16100 | 3.6865 | - | | 0.8163 | 16200 | 3.678 | - | | 0.8213 | 16300 | 3.6825 | - | | 0.8264 | 16400 | 3.7068 | - | | 0.8314 | 16500 | 3.6886 | - | | 0.8364 | 16600 | 3.6905 | - | | 0.8415 | 16700 | 3.6905 | - | | 0.8465 | 16800 | 3.6677 | - | | 0.8516 | 16900 | 3.684 | - | | 0.8566 | 17000 | 3.6872 | - | | 0.8616 | 17100 | 3.6849 | - | | 0.8667 | 17200 | 3.662 | - | | 0.8717 | 17300 | 3.6887 | - | | 0.8768 | 17400 | 3.6999 | - | | 0.8818 | 17500 | 3.6916 | - | | 0.8868 | 17600 | 3.6853 | - | | 0.8919 | 17700 | 3.6971 | - | | 0.8969 | 17800 | 3.6846 | - | | 0.9019 | 17900 | 3.6701 | - | | 0.9070 | 18000 | 3.6911 | - | | 0.9120 | 18100 | 3.7021 | - | | 0.9171 | 18200 | 3.6851 | - | | 0.9221 | 18300 | 3.6924 | - | | 0.9271 | 18400 | 3.6644 | - | | 0.9322 | 18500 | 3.6674 | - | | 0.9372 | 18600 | 3.6962 | - | | 0.9423 | 18700 | 3.6759 | - | | 0.9473 | 18800 | 3.6839 | - | | 0.9523 | 18900 | 3.6822 | - | | 0.9574 | 19000 | 3.6947 | - | | 0.9624 | 19100 | 3.6589 | - | | 0.9674 | 19200 | 3.6817 | - | | 0.9725 | 19300 | 3.6754 | - | | 0.9775 | 19400 | 3.6947 | - | | 0.9826 | 19500 | 3.6785 | - | | 0.9876 | 19600 | 3.6776 | - | | 0.9926 | 19700 | 3.6791 | - | | 0.9977 | 19800 | 3.6795 | - | | 1.0 | 19846 | - | 0.9971 | </details> ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.0.1 - Transformers: 4.40.0 - PyTorch: 2.3.0+cu121 - Accelerate: 0.33.0 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### TripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "BAAI/bge-base-en-v1.5", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy", "dot_accuracy", "manhattan_accuracy", "euclidean_accuracy", "max_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:317521", "loss:TripletLoss"], "widget": [{"source_sentence": "Write a function to extract every specified element from a given two dimensional list.", "sentences": ["def nCr_mod_p(n, r, p): \r\n\tif (r > n- r): \r\n\t\tr = n - r \r\n\tC = [0 for i in range(r + 1)] \r\n\tC[0] = 1 \r\n\tfor i in range(1, n + 1): \r\n\t\tfor j in range(min(i, r), 0, -1): \r\n\t\t\tC[j] = (C[j] + C[j-1]) % p \r\n\treturn C[r] ", "import cmath\r\ndef len_complex(a,b):\r\n cn=complex(a,b)\r\n length=abs(cn)\r\n return length", "def specified_element(nums, N):\r\n result = [i[N] for i in nums]\r\n return result"]}, {"source_sentence": "Write a python function to find the kth element in an array containing odd elements first and then even elements.", "sentences": ["def get_Number(n, k): \r\n arr = [0] * n; \r\n i = 0; \r\n odd = 1; \r\n while (odd <= n): \r\n arr[i] = odd; \r\n i += 1; \r\n odd += 2;\r\n even = 2; \r\n while (even <= n): \r\n arr[i] = even; \r\n i += 1;\r\n even += 2; \r\n return arr[k - 1]; ", "def sort_matrix(M):\r\n result = sorted(M, key=sum)\r\n return result", "INT_BITS = 32\r\ndef left_Rotate(n,d): \r\n return (n << d)|(n >> (INT_BITS - d)) "]}, {"source_sentence": "Write a function to remove all the words with k length in the given string.", "sentences": ["def remove_tuples(test_list, K):\r\n res = [ele for ele in test_list if len(ele) != K]\r\n return (res) ", "def is_Sub_Array(A,B,n,m): \r\n i = 0; j = 0; \r\n while (i < n and j < m): \r\n if (A[i] == B[j]): \r\n i += 1; \r\n j += 1; \r\n if (j == m): \r\n return True; \r\n else: \r\n i = i - j + 1; \r\n j = 0; \r\n return False; ", "def remove_length(test_str, K):\r\n temp = test_str.split()\r\n res = [ele for ele in temp if len(ele) != K]\r\n res = ' '.join(res)\r\n return (res) "]}, {"source_sentence": "Write a function to find the occurence of characters 'std' in the given string 1. list item 1. list item 1. list item 2. list item 2. list item 2. list item", "sentences": ["def magic_square_test(my_matrix):\r\n iSize = len(my_matrix[0])\r\n sum_list = []\r\n sum_list.extend([sum (lines) for lines in my_matrix]) \r\n for col in range(iSize):\r\n sum_list.append(sum(row[col] for row in my_matrix))\r\n result1 = 0\r\n for i in range(0,iSize):\r\n result1 +=my_matrix[i][i]\r\n sum_list.append(result1) \r\n result2 = 0\r\n for i in range(iSize-1,-1,-1):\r\n result2 +=my_matrix[i][i]\r\n sum_list.append(result2)\r\n if len(set(sum_list))>1:\r\n return False\r\n return True", "def count_occurance(s):\r\n count=0\r\n for i in range(len(s)):\r\n if (s[i]== 's' and s[i+1]=='t' and s[i+2]== 'd'):\r\n count = count + 1\r\n return count", "def power(a,b):\r\n\tif b==0:\r\n\t\treturn 1\r\n\telif a==0:\r\n\t\treturn 0\r\n\telif b==1:\r\n\t\treturn a\r\n\telse:\r\n\t\treturn a*power(a,b-1)"]}, {"source_sentence": "Write a function to find sum and average of first n natural numbers.", "sentences": ["def long_words(n, str):\r\n word_len = []\r\n txt = str.split(\" \")\r\n for x in txt:\r\n if len(x) > n:\r\n word_len.append(x)\r\n return word_len\t", "def long_words(n, str):\r\n word_len = []\r\n txt = str.split(\" \")\r\n for x in txt:\r\n if len(x) > n:\r\n word_len.append(x)\r\n return word_len\t", "def sum_average(number):\r\n total = 0\r\n for value in range(1, number + 1):\r\n total = total + value\r\n average = total / number\r\n return (total,average)"]}], "model-index": [{"name": "SentenceTransformer based on BAAI/bge-base-en-v1.5", "results": [{"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "cosine_accuracy", "value": 0.997141408425864, "name": "Cosine Accuracy"}, {"type": "dot_accuracy", "value": 0.0028145001873883936, "name": "Dot Accuracy"}, {"type": "manhattan_accuracy", "value": 0.99605382088609, "name": "Manhattan Accuracy"}, {"type": "euclidean_accuracy", "value": 0.997141408425864, "name": "Euclidean Accuracy"}, {"type": "max_accuracy", "value": 0.997141408425864, "name": "Max Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,961
HPLT/sft-fpft-ru-pythia-6.9b-deduped
HPLT
text-generation
[ "transformers", "pytorch", "gpt_neox", "text-generation", "generation", "question answering", "instruction tuning", "ru", "arxiv:2309.08958", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-05T08:05:28Z
2025-04-06T08:37:59+00:00
8
0
--- language: - ru license: cc-by-nc-4.0 tags: - generation - question answering - instruction tuning --- ### Model Description This HF repository contains base LLMs instruction tuned (SFT) with full-parameter fine-tuning and then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [pythia-6.9b-deduped](https://huggingface.co/pythia-6.9b-deduped) * Instruction tuning language: Russian * Training method: full-parameter fine-tuning. * Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded using `transformers` library. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
null
Non_BioNLP
### Model Description This HF repository contains base LLMs instruction tuned (SFT) with full-parameter fine-tuning and then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [pythia-6.9b-deduped](https://huggingface.co/pythia-6.9b-deduped) * Instruction tuning language: Russian * Training method: full-parameter fine-tuning. * Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded using `transformers` library. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
{"language": ["ru"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]}
task
[ "QUESTION_ANSWERING" ]
42,962
mini1013/master_cate_top_bt13_9
mini1013
text-classification
[ "setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "model-index", "region:us" ]
2024-12-29T15:54:42Z
2024-12-29T15:55:05+00:00
7
0
--- base_model: klue/roberta-base library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 밀크바오밥 오리지널 샴푸 베이비파우더 1L 09_트리트먼트 화이트머스크 1000ml (#M)화장품/미용>헤어케어>샴푸 AD > Naverstore > 화장품/미용 > 헤어케어 > 샴푸 > 약산성샴푸 - text: 무코타염색제 7박스+3박스+정품 트리트먼트 50g 1.카키브라운 (#M)바디/헤어>바디케어>바디케어세트 Gmarket > 뷰티 > 바디/헤어 > 바디케어 > 바디케어세트 - text: 1+1세트~(컨센+릴렉스마스크100ml) 에스테티카 데미지 케어 컨센트레이트 120ml (열활성 열보호 에센스) 정품 + 릴렉스마스크100ml 1개 (#M)쿠팡 홈>싱글라이프>샤워/세안>헤어에센스 Coupang > 뷰티 > 헤어 > 헤어에센스/오일 > 헤어에센스 - text: 헤드스파7 트리트먼트 더 프리미엄 210ml + 210ml MinSellAmount (#M)바디/헤어>헤어케어>헤어트리트먼트 Gmarket > 뷰티 > 바디/헤어 > 헤어케어 > 헤어트리트먼트 - text: 헤어플러스 실크 코팅 트리트먼트 50ml 4개 실크 코팅 트리트먼트 50ml 4개 위메프 > 생활·주방·반려동물 > 바디/헤어 > 샴푸/린스/헤어케어 > 트리트먼트;위메프 > 생활·주방·반려동물 > 바디/헤어 > 샴푸/린스/헤어케어;위메프 > 뷰티 > 바디/헤어 > 샴푸/린스/헤어케어 > 샴푸/린스;(#M)위메프 > 생활·주방용품 > 바디/헤어 > 샴푸/린스/헤어케어 > 트리트먼트 위메프 > 뷰티 > 바디/헤어 > 샴푸/린스/헤어케어 > 트리트먼트 inference: true model-index: - name: SetFit with klue/roberta-base results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.8206115779645191 name: Accuracy --- # SetFit with klue/roberta-base This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [klue/roberta-base](https://huggingface.co/klue/roberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [klue/roberta-base](https://huggingface.co/klue/roberta-base) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | <ul><li>'로레알파리 토탈리페어5 트리트먼트 헤어팩 170ml × 1개 LotteOn > 뷰티 > 헤어/바디 > 헤어케어 > 트리트먼트/헤어팩 LotteOn > 뷰티 > 헤어/바디 > 헤어케어 > 트리트먼트/헤어팩'</li><li>'아모스 녹차실감 인텐시브 팩 250ml 녹차실감 인텐시브팩250g 홈>전체상품;(#M)홈>녹차실감 Naverstore > 화장품/미용 > 헤어케어 > 헤어팩'</li><li>'프리미엄 헤어클리닉 헤어팩 258ml 베이비파우더 LotteOn > 뷰티 > 헤어케어 > 헤어팩 LotteOn > 뷰티 > 헤어/바디 > 헤어케어 > 트리트먼트/헤어팩'</li></ul> | | 0 | <ul><li>'퓨어시카 트리트먼트 베이비파우더향 1000ml 1개 MinSellAmount 스마일배송 홈>뷰티>바디케어>바디워시;스마일배송 홈>;(#M)스마일배송 홈>뷰티>헤어케어/스타일링>트리트먼트/팩 Gmarket > 뷰티 > 바디/헤어 > 바디케어 > 바디클렌저'</li><li>'1+1 살림백서 탈모 샴푸 엑티브B7 맥주효모 앤 비오틴 1000ml 남자 여자 바이오틴 4)오푼티아 트리트먼트 유칼립투스 1L (#M)화장품/미용>헤어케어>탈모케어 AD > Naverstore > 화장품/미용 > 가을뷰티 > 각질관리템 > 탈모샴푸'</li><li>'1+1 살림백서 오푼티아 퍼퓸 샴푸 500ml 약산성 비듬 지성 두피 볼륨 유칼립투스향 13.유칼립투스 트리트먼트 1+1 500ml (#M)화장품/미용>헤어케어>샴푸 AD > Naverstore > 화장품/미용 > 머스크 > 샴푸'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8206 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_cate_top_bt13_9") # Run inference preds = model("무코타염색제 7박스+3박스+정품 트리트먼트 50g 1.카키브라운 (#M)바디/헤어>바디케어>바디케어세트 Gmarket > 뷰티 > 바디/헤어 > 바디케어 > 바디케어세트") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 14 | 23.76 | 98 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 50 | | 1 | 50 | ### Training Hyperparameters - batch_size: (64, 64) - num_epochs: (30, 30) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 100 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:----:|:-------------:|:---------------:| | 0.0064 | 1 | 0.4326 | - | | 0.3185 | 50 | 0.3579 | - | | 0.6369 | 100 | 0.2616 | - | | 0.9554 | 150 | 0.0326 | - | | 1.2739 | 200 | 0.0 | - | | 1.5924 | 250 | 0.0 | - | | 1.9108 | 300 | 0.0 | - | | 2.2293 | 350 | 0.0 | - | | 2.5478 | 400 | 0.0 | - | | 2.8662 | 450 | 0.0 | - | | 3.1847 | 500 | 0.0 | - | | 3.5032 | 550 | 0.0 | - | | 3.8217 | 600 | 0.0 | - | | 4.1401 | 650 | 0.0 | - | | 4.4586 | 700 | 0.0 | - | | 4.7771 | 750 | 0.0 | - | | 5.0955 | 800 | 0.0 | - | | 5.4140 | 850 | 0.0 | - | | 5.7325 | 900 | 0.0 | - | | 6.0510 | 950 | 0.0 | - | | 6.3694 | 1000 | 0.0 | - | | 6.6879 | 1050 | 0.0 | - | | 7.0064 | 1100 | 0.0 | - | | 7.3248 | 1150 | 0.0 | - | | 7.6433 | 1200 | 0.0 | - | | 7.9618 | 1250 | 0.0 | - | | 8.2803 | 1300 | 0.0 | - | | 8.5987 | 1350 | 0.0 | - | | 8.9172 | 1400 | 0.0 | - | | 9.2357 | 1450 | 0.0 | - | | 9.5541 | 1500 | 0.0 | - | | 9.8726 | 1550 | 0.0 | - | | 10.1911 | 1600 | 0.0 | - | | 10.5096 | 1650 | 0.0 | - | | 10.8280 | 1700 | 0.0 | - | | 11.1465 | 1750 | 0.0 | - | | 11.4650 | 1800 | 0.0 | - | | 11.7834 | 1850 | 0.0 | - | | 12.1019 | 1900 | 0.0 | - | | 12.4204 | 1950 | 0.0 | - | | 12.7389 | 2000 | 0.0 | - | | 13.0573 | 2050 | 0.0 | - | | 13.3758 | 2100 | 0.0 | - | | 13.6943 | 2150 | 0.0 | - | | 14.0127 | 2200 | 0.0 | - | | 14.3312 | 2250 | 0.0 | - | | 14.6497 | 2300 | 0.0 | - | | 14.9682 | 2350 | 0.0 | - | | 15.2866 | 2400 | 0.0 | - | | 15.6051 | 2450 | 0.0 | - | | 15.9236 | 2500 | 0.0 | - | | 16.2420 | 2550 | 0.0 | - | | 16.5605 | 2600 | 0.0 | - | | 16.8790 | 2650 | 0.0 | - | | 17.1975 | 2700 | 0.0 | - | | 17.5159 | 2750 | 0.0 | - | | 17.8344 | 2800 | 0.0 | - | | 18.1529 | 2850 | 0.0 | - | | 18.4713 | 2900 | 0.0 | - | | 18.7898 | 2950 | 0.0 | - | | 19.1083 | 3000 | 0.0 | - | | 19.4268 | 3050 | 0.0 | - | | 19.7452 | 3100 | 0.0 | - | | 20.0637 | 3150 | 0.0 | - | | 20.3822 | 3200 | 0.0 | - | | 20.7006 | 3250 | 0.0 | - | | 21.0191 | 3300 | 0.0 | - | | 21.3376 | 3350 | 0.0 | - | | 21.6561 | 3400 | 0.0 | - | | 21.9745 | 3450 | 0.0 | - | | 22.2930 | 3500 | 0.0 | - | | 22.6115 | 3550 | 0.0 | - | | 22.9299 | 3600 | 0.0 | - | | 23.2484 | 3650 | 0.0 | - | | 23.5669 | 3700 | 0.0 | - | | 23.8854 | 3750 | 0.0 | - | | 24.2038 | 3800 | 0.0 | - | | 24.5223 | 3850 | 0.0 | - | | 24.8408 | 3900 | 0.0 | - | | 25.1592 | 3950 | 0.0 | - | | 25.4777 | 4000 | 0.0 | - | | 25.7962 | 4050 | 0.0 | - | | 26.1146 | 4100 | 0.0 | - | | 26.4331 | 4150 | 0.0 | - | | 26.7516 | 4200 | 0.0 | - | | 27.0701 | 4250 | 0.0 | - | | 27.3885 | 4300 | 0.0 | - | | 27.7070 | 4350 | 0.0 | - | | 28.0255 | 4400 | 0.0 | - | | 28.3439 | 4450 | 0.0 | - | | 28.6624 | 4500 | 0.0 | - | | 28.9809 | 4550 | 0.0 | - | | 29.2994 | 4600 | 0.0 | - | | 29.6178 | 4650 | 0.0 | - | | 29.9363 | 4700 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - Transformers: 4.44.2 - PyTorch: 2.2.0a0+81ea7a4 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with klue/roberta-base This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [klue/roberta-base](https://huggingface.co/klue/roberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [klue/roberta-base](https://huggingface.co/klue/roberta-base) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | <ul><li>'로레알파리 토탈리페어5 트리트먼트 헤어팩 170ml × 1개 LotteOn > 뷰티 > 헤어/바디 > 헤어케어 > 트리트먼트/헤어팩 LotteOn > 뷰티 > 헤어/바디 > 헤어케어 > 트리트먼트/헤어팩'</li><li>'아모스 녹차실감 인텐시브 팩 250ml 녹차실감 인텐시브팩250g 홈>전체상품;(#M)홈>녹차실감 Naverstore > 화장품/미용 > 헤어케어 > 헤어팩'</li><li>'프리미엄 헤어클리닉 헤어팩 258ml 베이비파우더 LotteOn > 뷰티 > 헤어케어 > 헤어팩 LotteOn > 뷰티 > 헤어/바디 > 헤어케어 > 트리트먼트/헤어팩'</li></ul> | | 0 | <ul><li>'퓨어시카 트리트먼트 베이비파우더향 1000ml 1개 MinSellAmount 스마일배송 홈>뷰티>바디케어>바디워시;스마일배송 홈>;(#M)스마일배송 홈>뷰티>헤어케어/스타일링>트리트먼트/팩 Gmarket > 뷰티 > 바디/헤어 > 바디케어 > 바디클렌저'</li><li>'1+1 살림백서 탈모 샴푸 엑티브B7 맥주효모 앤 비오틴 1000ml 남자 여자 바이오틴 4)오푼티아 트리트먼트 유칼립투스 1L (#M)화장품/미용>헤어케어>탈모케어 AD > Naverstore > 화장품/미용 > 가을뷰티 > 각질관리템 > 탈모샴푸'</li><li>'1+1 살림백서 오푼티아 퍼퓸 샴푸 500ml 약산성 비듬 지성 두피 볼륨 유칼립투스향 13.유칼립투스 트리트먼트 1+1 500ml (#M)화장품/미용>헤어케어>샴푸 AD > Naverstore > 화장품/미용 > 머스크 > 샴푸'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8206 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_cate_top_bt13_9") # Run inference preds = model("무코타염색제 7박스+3박스+정품 트리트먼트 50g 1.카키브라운 (#M)바디/헤어>바디케어>바디케어세트 Gmarket > 뷰티 > 바디/헤어 > 바디케어 > 바디케어세트") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 14 | 23.76 | 98 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 50 | | 1 | 50 | ### Training Hyperparameters - batch_size: (64, 64) - num_epochs: (30, 30) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 100 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:----:|:-------------:|:---------------:| | 0.0064 | 1 | 0.4326 | - | | 0.3185 | 50 | 0.3579 | - | | 0.6369 | 100 | 0.2616 | - | | 0.9554 | 150 | 0.0326 | - | | 1.2739 | 200 | 0.0 | - | | 1.5924 | 250 | 0.0 | - | | 1.9108 | 300 | 0.0 | - | | 2.2293 | 350 | 0.0 | - | | 2.5478 | 400 | 0.0 | - | | 2.8662 | 450 | 0.0 | - | | 3.1847 | 500 | 0.0 | - | | 3.5032 | 550 | 0.0 | - | | 3.8217 | 600 | 0.0 | - | | 4.1401 | 650 | 0.0 | - | | 4.4586 | 700 | 0.0 | - | | 4.7771 | 750 | 0.0 | - | | 5.0955 | 800 | 0.0 | - | | 5.4140 | 850 | 0.0 | - | | 5.7325 | 900 | 0.0 | - | | 6.0510 | 950 | 0.0 | - | | 6.3694 | 1000 | 0.0 | - | | 6.6879 | 1050 | 0.0 | - | | 7.0064 | 1100 | 0.0 | - | | 7.3248 | 1150 | 0.0 | - | | 7.6433 | 1200 | 0.0 | - | | 7.9618 | 1250 | 0.0 | - | | 8.2803 | 1300 | 0.0 | - | | 8.5987 | 1350 | 0.0 | - | | 8.9172 | 1400 | 0.0 | - | | 9.2357 | 1450 | 0.0 | - | | 9.5541 | 1500 | 0.0 | - | | 9.8726 | 1550 | 0.0 | - | | 10.1911 | 1600 | 0.0 | - | | 10.5096 | 1650 | 0.0 | - | | 10.8280 | 1700 | 0.0 | - | | 11.1465 | 1750 | 0.0 | - | | 11.4650 | 1800 | 0.0 | - | | 11.7834 | 1850 | 0.0 | - | | 12.1019 | 1900 | 0.0 | - | | 12.4204 | 1950 | 0.0 | - | | 12.7389 | 2000 | 0.0 | - | | 13.0573 | 2050 | 0.0 | - | | 13.3758 | 2100 | 0.0 | - | | 13.6943 | 2150 | 0.0 | - | | 14.0127 | 2200 | 0.0 | - | | 14.3312 | 2250 | 0.0 | - | | 14.6497 | 2300 | 0.0 | - | | 14.9682 | 2350 | 0.0 | - | | 15.2866 | 2400 | 0.0 | - | | 15.6051 | 2450 | 0.0 | - | | 15.9236 | 2500 | 0.0 | - | | 16.2420 | 2550 | 0.0 | - | | 16.5605 | 2600 | 0.0 | - | | 16.8790 | 2650 | 0.0 | - | | 17.1975 | 2700 | 0.0 | - | | 17.5159 | 2750 | 0.0 | - | | 17.8344 | 2800 | 0.0 | - | | 18.1529 | 2850 | 0.0 | - | | 18.4713 | 2900 | 0.0 | - | | 18.7898 | 2950 | 0.0 | - | | 19.1083 | 3000 | 0.0 | - | | 19.4268 | 3050 | 0.0 | - | | 19.7452 | 3100 | 0.0 | - | | 20.0637 | 3150 | 0.0 | - | | 20.3822 | 3200 | 0.0 | - | | 20.7006 | 3250 | 0.0 | - | | 21.0191 | 3300 | 0.0 | - | | 21.3376 | 3350 | 0.0 | - | | 21.6561 | 3400 | 0.0 | - | | 21.9745 | 3450 | 0.0 | - | | 22.2930 | 3500 | 0.0 | - | | 22.6115 | 3550 | 0.0 | - | | 22.9299 | 3600 | 0.0 | - | | 23.2484 | 3650 | 0.0 | - | | 23.5669 | 3700 | 0.0 | - | | 23.8854 | 3750 | 0.0 | - | | 24.2038 | 3800 | 0.0 | - | | 24.5223 | 3850 | 0.0 | - | | 24.8408 | 3900 | 0.0 | - | | 25.1592 | 3950 | 0.0 | - | | 25.4777 | 4000 | 0.0 | - | | 25.7962 | 4050 | 0.0 | - | | 26.1146 | 4100 | 0.0 | - | | 26.4331 | 4150 | 0.0 | - | | 26.7516 | 4200 | 0.0 | - | | 27.0701 | 4250 | 0.0 | - | | 27.3885 | 4300 | 0.0 | - | | 27.7070 | 4350 | 0.0 | - | | 28.0255 | 4400 | 0.0 | - | | 28.3439 | 4450 | 0.0 | - | | 28.6624 | 4500 | 0.0 | - | | 28.9809 | 4550 | 0.0 | - | | 29.2994 | 4600 | 0.0 | - | | 29.6178 | 4650 | 0.0 | - | | 29.9363 | 4700 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - Transformers: 4.44.2 - PyTorch: 2.2.0a0+81ea7a4 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "klue/roberta-base", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "밀크바오밥 오리지널 샴푸 베이비파우더 1L 09_트리트먼트 화이트머스크 1000ml (#M)화장품/미용>헤어케어>샴푸 AD > Naverstore > 화장품/미용 > 헤어케어 > 샴푸 > 약산성샴푸"}, {"text": "무코타염색제 7박스+3박스+정품 트리트먼트 50g 1.카키브라운 (#M)바디/헤어>바디케어>바디케어세트 Gmarket > 뷰티 > 바디/헤어 > 바디케어 > 바디케어세트"}, {"text": "1+1세트~(컨센+릴렉스마스크100ml) 에스테티카 데미지 케어 컨센트레이트 120ml (열활성 열보호 에센스) 정품 + 릴렉스마스크100ml 1개 (#M)쿠팡 홈>싱글라이프>샤워/세안>헤어에센스 Coupang > 뷰티 > 헤어 > 헤어에센스/오일 > 헤어에센스"}, {"text": "헤드스파7 트리트먼트 더 프리미엄 210ml + 210ml MinSellAmount (#M)바디/헤어>헤어케어>헤어트리트먼트 Gmarket > 뷰티 > 바디/헤어 > 헤어케어 > 헤어트리트먼트"}, {"text": "헤어플러스 실크 코팅 트리트먼트 50ml 4개 실크 코팅 트리트먼트 50ml 4개 위메프 > 생활·주방·반려동물 > 바디/헤어 > 샴푸/린스/헤어케어 > 트리트먼트;위메프 > 생활·주방·반려동물 > 바디/헤어 > 샴푸/린스/헤어케어;위메프 > 뷰티 > 바디/헤어 > 샴푸/린스/헤어케어 > 샴푸/린스;(#M)위메프 > 생활·주방용품 > 바디/헤어 > 샴푸/린스/헤어케어 > 트리트먼트 위메프 > 뷰티 > 바디/헤어 > 샴푸/린스/헤어케어 > 트리트먼트"}], "inference": true, "model-index": [{"name": "SetFit with klue/roberta-base", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8206115779645191, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,963
petchbks01/mt5-small-finetuned-mt5-small-v2
petchbks01
summarization
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-04-23T05:16:56Z
2024-04-24T00:01:53+00:00
10
0
--- base_model: google/mt5-small license: apache-2.0 metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: mt5-small-finetuned-mt5-small-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-mt5-small-v2 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3614 - Rouge1: 0.1151 - Rouge2: 0.0251 - Rougel: 0.1143 - Rougelsum: 0.1144 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:| | 3.5787 | 1.0 | 12500 | 2.7685 | 0.0863 | 0.0192 | 0.0858 | 0.0857 | | 3.036 | 2.0 | 25000 | 2.6270 | 0.0911 | 0.0203 | 0.0905 | 0.0905 | | 2.8761 | 3.0 | 37500 | 2.5564 | 0.101 | 0.0233 | 0.1004 | 0.1004 | | 2.7709 | 4.0 | 50000 | 2.5080 | 0.1034 | 0.0231 | 0.1028 | 0.1028 | | 2.6959 | 5.0 | 62500 | 2.4671 | 0.1068 | 0.0235 | 0.1061 | 0.1062 | | 2.6328 | 6.0 | 75000 | 2.4539 | 0.11 | 0.026 | 0.1093 | 0.1093 | | 2.5839 | 7.0 | 87500 | 2.4302 | 0.1101 | 0.0261 | 0.1092 | 0.1093 | | 2.5418 | 8.0 | 100000 | 2.4083 | 0.1113 | 0.0252 | 0.1106 | 0.1108 | | 2.5067 | 9.0 | 112500 | 2.3999 | 0.1115 | 0.0257 | 0.1107 | 0.1106 | | 2.4762 | 10.0 | 125000 | 2.3857 | 0.1161 | 0.0264 | 0.1153 | 0.1153 | | 2.4505 | 11.0 | 137500 | 2.3741 | 0.1141 | 0.0262 | 0.1133 | 0.1134 | | 2.4281 | 12.0 | 150000 | 2.3737 | 0.1153 | 0.0259 | 0.1146 | 0.1147 | | 2.4103 | 13.0 | 162500 | 2.3648 | 0.1156 | 0.0255 | 0.1148 | 0.1147 | | 2.3961 | 14.0 | 175000 | 2.3652 | 0.1131 | 0.0246 | 0.1123 | 0.1123 | | 2.3837 | 15.0 | 187500 | 2.3636 | 0.1141 | 0.0255 | 0.1133 | 0.1134 | | 2.3772 | 16.0 | 200000 | 2.3614 | 0.1151 | 0.0251 | 0.1143 | 0.1144 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.0+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-mt5-small-v2 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3614 - Rouge1: 0.1151 - Rouge2: 0.0251 - Rougel: 0.1143 - Rougelsum: 0.1144 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:| | 3.5787 | 1.0 | 12500 | 2.7685 | 0.0863 | 0.0192 | 0.0858 | 0.0857 | | 3.036 | 2.0 | 25000 | 2.6270 | 0.0911 | 0.0203 | 0.0905 | 0.0905 | | 2.8761 | 3.0 | 37500 | 2.5564 | 0.101 | 0.0233 | 0.1004 | 0.1004 | | 2.7709 | 4.0 | 50000 | 2.5080 | 0.1034 | 0.0231 | 0.1028 | 0.1028 | | 2.6959 | 5.0 | 62500 | 2.4671 | 0.1068 | 0.0235 | 0.1061 | 0.1062 | | 2.6328 | 6.0 | 75000 | 2.4539 | 0.11 | 0.026 | 0.1093 | 0.1093 | | 2.5839 | 7.0 | 87500 | 2.4302 | 0.1101 | 0.0261 | 0.1092 | 0.1093 | | 2.5418 | 8.0 | 100000 | 2.4083 | 0.1113 | 0.0252 | 0.1106 | 0.1108 | | 2.5067 | 9.0 | 112500 | 2.3999 | 0.1115 | 0.0257 | 0.1107 | 0.1106 | | 2.4762 | 10.0 | 125000 | 2.3857 | 0.1161 | 0.0264 | 0.1153 | 0.1153 | | 2.4505 | 11.0 | 137500 | 2.3741 | 0.1141 | 0.0262 | 0.1133 | 0.1134 | | 2.4281 | 12.0 | 150000 | 2.3737 | 0.1153 | 0.0259 | 0.1146 | 0.1147 | | 2.4103 | 13.0 | 162500 | 2.3648 | 0.1156 | 0.0255 | 0.1148 | 0.1147 | | 2.3961 | 14.0 | 175000 | 2.3652 | 0.1131 | 0.0246 | 0.1123 | 0.1123 | | 2.3837 | 15.0 | 187500 | 2.3636 | 0.1141 | 0.0255 | 0.1133 | 0.1134 | | 2.3772 | 16.0 | 200000 | 2.3614 | 0.1151 | 0.0251 | 0.1143 | 0.1144 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.0+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
{"base_model": "google/mt5-small", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-mt5-small-v2", "results": []}]}
task
[ "SUMMARIZATION" ]
42,964
Behpouyan/Behpouyan-Fill-Mask
Behpouyan
fill-mask
[ "transformers", "safetensors", "bert", "fill-mask", "Fill Mask", "Persian", "BERT", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-12-05T14:50:37Z
2024-12-05T15:03:15+00:00
66
0
--- library_name: transformers tags: - Fill Mask - Persian - BERT --- ## Model Details ### Model Description This model is fine-tuned for the task of masked language modeling in Persian. The model can predict missing words in Persian sentences when a word is replaced by the [MASK] token. It is useful for a range of NLP applications, including text completion, question answering, and contextual understanding of Persian texts. - **Developed by:** Behpouyan - **Model type:** Encoder - **Language(s) (NLP):** Persian ## How to Get Started with the Model ``` python from transformers import AutoTokenizer, AutoModelForMaskedLM import torch # Load the tokenizer and model tokenizer = AutoTokenizer.from_pretrained("Behpouyan/Behpouyan-Fill-Mask") model = AutoModelForMaskedLM.from_pretrained("Behpouyan/Behpouyan-Fill-Mask") # List of 5 Persian sentences with a masked word (replacing a word with [MASK]) sentences = [ "این کتاب بسیار <mask> است.", # The book is very <mask "مشتری همیشه از <mask> شما راضی است.", # The customer is always satisfied with your <mask "من به دنبال <mask> هستم.", # I am looking for <mask "این پروژه نیاز به <mask> دارد.", # This project needs <mask "تیم ما برای انجام کارها <mask> است." # Our team is <mask to do the tasks ] # Function to predict masked words def predict_masked_word(sentence): # Tokenize the input sentence inputs = tokenizer(sentence, return_tensors="pt") # Forward pass to get logits with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits # Get the position of the [MASK] token mask_token_index = torch.where(inputs.input_ids == tokenizer.mask_token_id)[1].item() # Get the predicted token predicted_token_id = torch.argmax(logits[0, mask_token_index]).item() predicted_word = tokenizer.decode([predicted_token_id]) return predicted_word # Test the model on the sentences for sentence in sentences: predicted_word = predict_masked_word(sentence) print(f"Sentence: {sentence}") print(f"Predicted word: {predicted_word}") print("-" * 50) ```
null
Non_BioNLP
## Model Details ### Model Description This model is fine-tuned for the task of masked language modeling in Persian. The model can predict missing words in Persian sentences when a word is replaced by the [MASK] token. It is useful for a range of NLP applications, including text completion, question answering, and contextual understanding of Persian texts. - **Developed by:** Behpouyan - **Model type:** Encoder - **Language(s) (NLP):** Persian ## How to Get Started with the Model ``` python from transformers import AutoTokenizer, AutoModelForMaskedLM import torch # Load the tokenizer and model tokenizer = AutoTokenizer.from_pretrained("Behpouyan/Behpouyan-Fill-Mask") model = AutoModelForMaskedLM.from_pretrained("Behpouyan/Behpouyan-Fill-Mask") # List of 5 Persian sentences with a masked word (replacing a word with [MASK]) sentences = [ "این کتاب بسیار <mask> است.", # The book is very <mask "مشتری همیشه از <mask> شما راضی است.", # The customer is always satisfied with your <mask "من به دنبال <mask> هستم.", # I am looking for <mask "این پروژه نیاز به <mask> دارد.", # This project needs <mask "تیم ما برای انجام کارها <mask> است." # Our team is <mask to do the tasks ] # Function to predict masked words def predict_masked_word(sentence): # Tokenize the input sentence inputs = tokenizer(sentence, return_tensors="pt") # Forward pass to get logits with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits # Get the position of the [MASK] token mask_token_index = torch.where(inputs.input_ids == tokenizer.mask_token_id)[1].item() # Get the predicted token predicted_token_id = torch.argmax(logits[0, mask_token_index]).item() predicted_word = tokenizer.decode([predicted_token_id]) return predicted_word # Test the model on the sentences for sentence in sentences: predicted_word = predict_masked_word(sentence) print(f"Sentence: {sentence}") print(f"Predicted word: {predicted_word}") print("-" * 50) ```
{"library_name": "transformers", "tags": ["Fill Mask", "Persian", "BERT"]}
task
[ "QUESTION_ANSWERING" ]
42,965
mhenrichsen/gemma-7b-it
mhenrichsen
text-generation
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-02-21T15:53:01Z
2024-02-21T16:02:51+00:00
1,227
0
--- library_name: transformers --- # Re-uploaded model. Original readme below. # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 7B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-it-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning the model You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-7b-it`. In that repository, we provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(**input_text, return_tensors="pt") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.float16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Chat Template The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction: ```py from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "gg-hf/gemma-7b-it" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype=dtype, ) chat = [ { "role": "user", "content": "Write a hello world program" }, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` At this point, the prompt contains the following text: ``` <start_of_turn>user Write a hello world program<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceeded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. After the prompt is ready, generation can be performed like this: ```py inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 | | [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | --------- | | **Average** | | **54.0** | **56.4** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives.
null
Non_BioNLP
# Re-uploaded model. Original readme below. # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 7B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-it-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning the model You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-7b-it`. In that repository, we provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(**input_text, return_tensors="pt") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.float16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Chat Template The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction: ```py from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "gg-hf/gemma-7b-it" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype=dtype, ) chat = [ { "role": "user", "content": "Write a hello world program" }, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` At this point, the prompt contains the following text: ``` <start_of_turn>user Write a hello world program<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceeded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. After the prompt is ready, generation can be performed like this: ```py inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 | | [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | --------- | | **Average** | | **54.0** | **56.4** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives.
{"library_name": "transformers"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
42,966
deepseek-ai/deepseek-vl2-tiny
deepseek-ai
image-text-to-text
[ "transformers", "safetensors", "deepseek_vl_v2", "image-text-to-text", "arxiv:2412.10302", "license:other", "endpoints_compatible", "region:us" ]
2024-12-13T08:49:22Z
2024-12-18T08:17:15+00:00
70,101
169
--- library_name: transformers license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-MODEL pipeline_tag: image-text-to-text --- ## 1. Introduction Introducing DeepSeek-VL2, an advanced series of large Mixture-of-Experts (MoE) Vision-Language Models that significantly improves upon its predecessor, DeepSeek-VL. DeepSeek-VL2 demonstrates superior capabilities across various tasks, including but not limited to visual question answering, optical character recognition, document/table/chart understanding, and visual grounding. Our model series is composed of three variants: DeepSeek-VL2-Tiny, DeepSeek-VL2-Small and DeepSeek-VL2, with 1.0B, 2.8B and 4.5B activated parameters respectively. DeepSeek-VL2 achieves competitive or state-of-the-art performance with similar or fewer activated parameters compared to existing open-source dense and MoE-based models. [DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding](https://arxiv.org/abs/2412.10302) [**Github Repository**](https://github.com/deepseek-ai/DeepSeek-VL2) Zhiyu Wu*, Xiaokang Chen*, Zizheng Pan*, Xingchao Liu*, Wen Liu**, Damai Dai, Huazuo Gao, Yiyang Ma, Chengyue Wu, Bingxuan Wang, Zhenda Xie, Yu Wu, Kai Hu, Jiawei Wang, Yaofeng Sun, Yukun Li, Yishi Piao, Kang Guan, Aixin Liu, Xin Xie, Yuxiang You, Kai Dong, Xingkai Yu, Haowei Zhang, Liang Zhao, Yisong Wang, Chong Ruan*** (* Equal Contribution, ** Project Lead, *** Corresponding author) ![](https://github.com/deepseek-ai/DeepSeek-VL2/tree/main/images/vl2_teaser.jpeg) ### 2. Model Summary DeepSeek-VL2-tiny is built on DeepSeekMoE-3B (total activated parameters are 1.0B). ## 3. Quick Start ### Installation On the basis of `Python >= 3.8` environment, install the necessary dependencies by running the following command: ```shell pip install -e . ``` ### Notifications 1. We suggest to use a temperature T <= 0.7 when sampling. We observe a larger temperature decreases the generation quality. 2. To keep the number of tokens managable in the context window, we apply dynamic tiling strategy to <=2 images. When there are >=3 images, we directly pad the images to 384*384 as inputs without tiling. 3. The main difference between DeepSeek-VL2-Tiny, DeepSeek-VL2-Small and DeepSeek-VL2 is the base LLM. ### Simple Inference Example ```python import torch from transformers import AutoModelForCausalLM from deepseek_vl.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM from deepseek_vl.utils.io import load_pil_images # specify the path to the model model_path = "deepseek-ai/deepseek-vl2-small" vl_chat_processor: DeepseekVLV2Processor = DeepseekVLV2Processor.from_pretrained(model_path) tokenizer = vl_chat_processor.tokenizer vl_gpt: DeepseekVLV2ForCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True) vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval() ## single image conversation example conversation = [ { "role": "<|User|>", "content": "<image>\n<|ref|>The giraffe at the back.<|/ref|>.", "images": ["./images/visual_grounding.jpeg"], }, {"role": "<|Assistant|>", "content": ""}, ] ## multiple images (or in-context learning) conversation example # conversation = [ # { # "role": "User", # "content": "<image_placeholder>A dog wearing nothing in the foreground, " # "<image_placeholder>a dog wearing a santa hat, " # "<image_placeholder>a dog wearing a wizard outfit, and " # "<image_placeholder>what's the dog wearing?", # "images": [ # "images/dog_a.png", # "images/dog_b.png", # "images/dog_c.png", # "images/dog_d.png", # ], # }, # {"role": "Assistant", "content": ""} # ] # load images and prepare for inputs pil_images = load_pil_images(conversation) prepare_inputs = vl_chat_processor( conversations=conversation, images=pil_images, force_batchify=True, system_prompt="" ).to(vl_gpt.device) # run image encoder to get the image embeddings inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs) # run the model to get the response outputs = vl_gpt.language_model.generate( inputs_embeds=inputs_embeds, attention_mask=prepare_inputs.attention_mask, pad_token_id=tokenizer.eos_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id, max_new_tokens=512, do_sample=False, use_cache=True ) answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True) print(f"{prepare_inputs['sft_format'][0]}", answer) ``` ### Gradio Demo (TODO) ## 4. License This code repository is licensed under [MIT License](./LICENSE-CODE). The use of DeepSeek-VL2 models is subject to [DeepSeek Model License](./LICENSE-MODEL). DeepSeek-VL2 series supports commercial use. ## 5. Citation ``` @misc{wu2024deepseekvl2mixtureofexpertsvisionlanguagemodels, title={DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding}, author={Zhiyu Wu and Xiaokang Chen and Zizheng Pan and Xingchao Liu and Wen Liu and Damai Dai and Huazuo Gao and Yiyang Ma and Chengyue Wu and Bingxuan Wang and Zhenda Xie and Yu Wu and Kai Hu and Jiawei Wang and Yaofeng Sun and Yukun Li and Yishi Piao and Kang Guan and Aixin Liu and Xin Xie and Yuxiang You and Kai Dong and Xingkai Yu and Haowei Zhang and Liang Zhao and Yisong Wang and Chong Ruan}, year={2024}, eprint={2412.10302}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2412.10302}, } ``` ## 6. Contact If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
null
Non_BioNLP
## 1. Introduction Introducing DeepSeek-VL2, an advanced series of large Mixture-of-Experts (MoE) Vision-Language Models that significantly improves upon its predecessor, DeepSeek-VL. DeepSeek-VL2 demonstrates superior capabilities across various tasks, including but not limited to visual question answering, optical character recognition, document/table/chart understanding, and visual grounding. Our model series is composed of three variants: DeepSeek-VL2-Tiny, DeepSeek-VL2-Small and DeepSeek-VL2, with 1.0B, 2.8B and 4.5B activated parameters respectively. DeepSeek-VL2 achieves competitive or state-of-the-art performance with similar or fewer activated parameters compared to existing open-source dense and MoE-based models. [DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding](https://arxiv.org/abs/2412.10302) [**Github Repository**](https://github.com/deepseek-ai/DeepSeek-VL2) Zhiyu Wu*, Xiaokang Chen*, Zizheng Pan*, Xingchao Liu*, Wen Liu**, Damai Dai, Huazuo Gao, Yiyang Ma, Chengyue Wu, Bingxuan Wang, Zhenda Xie, Yu Wu, Kai Hu, Jiawei Wang, Yaofeng Sun, Yukun Li, Yishi Piao, Kang Guan, Aixin Liu, Xin Xie, Yuxiang You, Kai Dong, Xingkai Yu, Haowei Zhang, Liang Zhao, Yisong Wang, Chong Ruan*** (* Equal Contribution, ** Project Lead, *** Corresponding author) ![](https://github.com/deepseek-ai/DeepSeek-VL2/tree/main/images/vl2_teaser.jpeg) ### 2. Model Summary DeepSeek-VL2-tiny is built on DeepSeekMoE-3B (total activated parameters are 1.0B). ## 3. Quick Start ### Installation On the basis of `Python >= 3.8` environment, install the necessary dependencies by running the following command: ```shell pip install -e . ``` ### Notifications 1. We suggest to use a temperature T <= 0.7 when sampling. We observe a larger temperature decreases the generation quality. 2. To keep the number of tokens managable in the context window, we apply dynamic tiling strategy to <=2 images. When there are >=3 images, we directly pad the images to 384*384 as inputs without tiling. 3. The main difference between DeepSeek-VL2-Tiny, DeepSeek-VL2-Small and DeepSeek-VL2 is the base LLM. ### Simple Inference Example ```python import torch from transformers import AutoModelForCausalLM from deepseek_vl.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM from deepseek_vl.utils.io import load_pil_images # specify the path to the model model_path = "deepseek-ai/deepseek-vl2-small" vl_chat_processor: DeepseekVLV2Processor = DeepseekVLV2Processor.from_pretrained(model_path) tokenizer = vl_chat_processor.tokenizer vl_gpt: DeepseekVLV2ForCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True) vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval() ## single image conversation example conversation = [ { "role": "<|User|>", "content": "<image>\n<|ref|>The giraffe at the back.<|/ref|>.", "images": ["./images/visual_grounding.jpeg"], }, {"role": "<|Assistant|>", "content": ""}, ] ## multiple images (or in-context learning) conversation example # conversation = [ # { # "role": "User", # "content": "<image_placeholder>A dog wearing nothing in the foreground, " # "<image_placeholder>a dog wearing a santa hat, " # "<image_placeholder>a dog wearing a wizard outfit, and " # "<image_placeholder>what's the dog wearing?", # "images": [ # "images/dog_a.png", # "images/dog_b.png", # "images/dog_c.png", # "images/dog_d.png", # ], # }, # {"role": "Assistant", "content": ""} # ] # load images and prepare for inputs pil_images = load_pil_images(conversation) prepare_inputs = vl_chat_processor( conversations=conversation, images=pil_images, force_batchify=True, system_prompt="" ).to(vl_gpt.device) # run image encoder to get the image embeddings inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs) # run the model to get the response outputs = vl_gpt.language_model.generate( inputs_embeds=inputs_embeds, attention_mask=prepare_inputs.attention_mask, pad_token_id=tokenizer.eos_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id, max_new_tokens=512, do_sample=False, use_cache=True ) answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True) print(f"{prepare_inputs['sft_format'][0]}", answer) ``` ### Gradio Demo (TODO) ## 4. License This code repository is licensed under [MIT License](./LICENSE-CODE). The use of DeepSeek-VL2 models is subject to [DeepSeek Model License](./LICENSE-MODEL). DeepSeek-VL2 series supports commercial use. ## 5. Citation ``` @misc{wu2024deepseekvl2mixtureofexpertsvisionlanguagemodels, title={DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding}, author={Zhiyu Wu and Xiaokang Chen and Zizheng Pan and Xingchao Liu and Wen Liu and Damai Dai and Huazuo Gao and Yiyang Ma and Chengyue Wu and Bingxuan Wang and Zhenda Xie and Yu Wu and Kai Hu and Jiawei Wang and Yaofeng Sun and Yukun Li and Yishi Piao and Kang Guan and Aixin Liu and Xin Xie and Yuxiang You and Kai Dong and Xingkai Yu and Haowei Zhang and Liang Zhao and Yisong Wang and Chong Ruan}, year={2024}, eprint={2412.10302}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2412.10302}, } ``` ## 6. Contact If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
{"library_name": "transformers", "license": "other", "license_name": "deepseek", "license_link": "https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-MODEL", "pipeline_tag": "image-text-to-text"}
task
[ "QUESTION_ANSWERING" ]
42,967
nectec/Pathumma-llm-vision-1.0.0
nectec
visual-question-answering
[ "safetensors", "idefics3", "visual-question-answering", "th", "en", "arxiv:2408.12637", "base_model:HuggingFaceM4/Idefics3-8B-Llama3", "base_model:finetune:HuggingFaceM4/Idefics3-8B-Llama3", "region:us" ]
2024-10-24T08:47:47Z
2024-10-25T03:48:44+00:00
27
9
--- base_model: - HuggingFaceM4/Idefics3-8B-Llama3 language: - th - en metrics: - sacrebleu pipeline_tag: visual-question-answering --- # Pathumma-llm-vision-1.0.0 ## Model Overview Pathumma-llm-vision-1.0.0 is a multi-modal language model fine-tuned for Visual Question Answering (VQA) and Image Captioning tasks. It contains 8 billion parameters and leverages both image and text processing to understand and generate multi-modal content. - **Model Name**: Pathumma-llm-vision-1.0.0 - **Base Model**: HuggingFaceM4/Idefics3-8B-Llama3 - **Architecture**: Multi-modal LLM (Visual Language Model) - **Parameters**: 8 Billion - **Organization**: NECTEC - **License**: [Specify License] ## Intended Use - **Primary Use Cases**: - Visual Question Answering (VQA) - Image Captioning - **Intended Users**: Developers, researchers, and AI practitioners working on multi-modal tasks. - **Possible Applications**: Educational tools, accessibility applications, interactive visual content generation. ## Model Description Pathumma-llm-vision-1.0.0 is designed to perform multi-modal tasks by integrating both visual and textual information. The model is fine-tuned with diverse datasets to improve its ability to understand and generate content that aligns with both image and text inputs. ## Training Data The model was fine-tuned on several datasets: - **Thai Image Caption**: Data sourced from image captioning competitions on Kaggle. - **Thai Shorthand Dataset**: Data related to the Thai language. - **ShareGPT-4o (translated into Thai)**: Data translated from GPT-4o-mini outputs into Thai. - **Small-Thai-Wikipedia-location**: Articles in Thai from Wikipedia about geographic locations. - **Synthetic Data**: Additional synthetic data generated to increase dataset diversity. ### Dataset Size - **Training Dataset Size**: 112,768 examples - **Validation Dataset Size**: 9,036 examples ## Training Details - **Hardware Used**: - **HPC Cluster**: Lanta - **Number of Nodes**: 16 Nodes - **GPUs per Node**: 4 GPUs - **Total GPUs Used**: 64 GPUs - **Fine-tuning Duration**: 3 hours, 18 minutes, and 11 seconds (excluding evaluation) ## Evaluation Results | Type | Encoder | Decoder | IPU24-dataset <br>(test) <br>(Sentence SacreBLEU) | |----------------------------------------|------------------------------------|-------------------------------------|-------------------------------| | Idefic3-8B-Llama3 | siglip-so400m-patch14-384 | Meta-Llama-3.1-8B-Instruct | 0.02657 | | Pathumma-llm-vision-beta-0.0.0 | siglip-so400m-patch14-384 | Meta-Llama-3.1-8B-Instruct | 13.45412 | | Pathumma-llm-vision-1.0.0 | siglip-so400m-patch14-384 | Meta-Llama-3.1-8B-Instruct | **17.66370** | | llama-3-typhoon-v1.5-8b-vision-preview | siglip-so400m-patch14-384 | Llama-3-Typhoon-1.5-8B-instruct | 8.288626 | **\*\*Note**: Other models not target fine-tuned on IPU24-datasets may be less representative of IPU24 performance. - **Accuracy on VQA Tasks with testing a private dataset**: 30.34% ## Required Libraries Before you start, ensure you have the following libraries installed: ``` pip install git+https://github.com/andimarafioti/transformers.git@idefics3 ``` ## Usage We provide a [inference tutorial](https://colab.research.google.com/drive/1TakNg4v6hHFXLih-SFcibxzYBTs2-EFn?usp=sharing). To use the model with the Hugging Face `transformers` library: ```python import io import os import time import random import requests import shutil from IPython.display import display, Markdown from IPython.display import clear_output as cls import numpy as np import pandas as pd from PIL import Image import torch import transformers from transformers import ( Idefics3ForConditionalGeneration, AutoProcessor, BitsAndBytesConfig, ) ``` ```python DEVICE = f"cuda" if torch.cuda.is_available() else 'cpu' if torch.cpu.is_available() else 'mps' print(DEVICE) if DEVICE == 'cuda': display(torch.cuda.device_count()) N = 5 revision = "quantized8bit" processor = AutoProcessor.from_pretrained( "nectec/Pathumma-llm-vision-1.0.0", revision=revision, # Optional do_image_splitting=False, # size={"longest_edge": N*364}, # Optional # size={"height": N*364, "width": N*364}, # Optional ) model = Idefics3ForConditionalGeneration.from_pretrained( "nectec/Pathumma-llm-vision-1.0.0", revision=revision, # Optional torch_dtype=torch.float16, device_map=DEVICE ) print(processor.image_processor.size) url_path = None local_path = "./path/picture.jpg" if not url_path else io.BytesIO(requests.get(url_path).content) image = Image.open(local_path) question = "รายละเอียดของรูปภาพนี้" messages = [ { "role": "user", "content": [ {"type": "text", "text": "You are a helpful assistant."}, {"type": "image"}, {"type": "text", "text": question} ] } ] text = processor.apply_chat_template( messages, add_generation_prompt=True, ) encoding = processor( images=image, text=text.strip(), # padding='max_length', # truncation=True, # max_length=, return_tensors="pt" ) encoding = {k: v.to(DEVICE) for k, v in encoding.items()} # Example: Run inference on text input start_time = time.time() model.eval() with torch.inference_mode(): # Generate generated_ids = model.generate( **inputs, max_new_tokens=128, # temperature=.5, # repetition_penalty=1., # # top_k=1., # top_p=1, ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] end_time = time.time() ## Get letency_time... latency_time = end_time - start_time answer_prompt = generated_text.split('Assistant:')[1].strip() # Output processing (depends on task requirements) print(answer_prompt) print(f"latency_time: {latency_time:.3f} sec.") # >>> output: # >>> ลูกฮิปโปแคระกำลังยืนอยู่ข้างแม่ฮิปโปแคระที่กำลังอาบน้ำ # >>> latency_time: 7.642 sec. ``` ## Limitations and Biases - The model may exhibit biases due to the training data, which might not be fully representative of all contexts. - Performance may degrade on unfamiliar images or non-standard question formats. ## Ethical Considerations - The model should not be used to generate misleading information or in ways that violate privacy. - Consider fairness and minimize bias when using the model for language and image processing tasks. ## Citation If you use this model, please cite it as follows: ```bibtex @misc{PathummaVision, author = {Thirawarit Pitiphiphat and NECTEC Team}, title = {nectec/Pathumma-llm-vision-1.0.0}, year = {2024}, url = {https://huggingface.co/nectec/Pathumma-llm-vision-1.0.0} } ``` ```bibtex @misc{laurençon2024building, title={Building and better understanding vision-language models: insights and future directions.}, author={Hugo Laurençon and Andrés Marafioti and Victor Sanh and Léo Tronchon}, year={2024}, eprint={2408.12637}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## **Contributor Contract** **LLM Team** Pakawat Phasook ([email protected])<br> Jessada Pranee ([email protected])<br> Arnon Saeoung ([email protected])<br> Kun Kerdthaisong ([email protected])<br> Kittisak Sukhantharat ([email protected])<br> Chaianun Damrongrat ([email protected])<br> Sarawoot Kongyoung ([email protected]) **Audio Team** Pattara Tipaksorn ([email protected])<br> Wayupuk Sommuang ([email protected])<br> Oatsada Chatthong ([email protected])<br> Kwanchiva Thangthai ([email protected]) **Vision Team** Thirawarit Pitiphiphat ([email protected])<br> Peerapas Ngokpon ([email protected])<br> Theerasit Issaranon ([email protected]) ## Contact For questions or support, please contact **https://discord.gg/3WJwJjZt7r**. ``` This formatting provides a clean, structured, and readable Markdown layout for these sections. Let me know if further adjustments are needed! ```
null
Non_BioNLP
# Pathumma-llm-vision-1.0.0 ## Model Overview Pathumma-llm-vision-1.0.0 is a multi-modal language model fine-tuned for Visual Question Answering (VQA) and Image Captioning tasks. It contains 8 billion parameters and leverages both image and text processing to understand and generate multi-modal content. - **Model Name**: Pathumma-llm-vision-1.0.0 - **Base Model**: HuggingFaceM4/Idefics3-8B-Llama3 - **Architecture**: Multi-modal LLM (Visual Language Model) - **Parameters**: 8 Billion - **Organization**: NECTEC - **License**: [Specify License] ## Intended Use - **Primary Use Cases**: - Visual Question Answering (VQA) - Image Captioning - **Intended Users**: Developers, researchers, and AI practitioners working on multi-modal tasks. - **Possible Applications**: Educational tools, accessibility applications, interactive visual content generation. ## Model Description Pathumma-llm-vision-1.0.0 is designed to perform multi-modal tasks by integrating both visual and textual information. The model is fine-tuned with diverse datasets to improve its ability to understand and generate content that aligns with both image and text inputs. ## Training Data The model was fine-tuned on several datasets: - **Thai Image Caption**: Data sourced from image captioning competitions on Kaggle. - **Thai Shorthand Dataset**: Data related to the Thai language. - **ShareGPT-4o (translated into Thai)**: Data translated from GPT-4o-mini outputs into Thai. - **Small-Thai-Wikipedia-location**: Articles in Thai from Wikipedia about geographic locations. - **Synthetic Data**: Additional synthetic data generated to increase dataset diversity. ### Dataset Size - **Training Dataset Size**: 112,768 examples - **Validation Dataset Size**: 9,036 examples ## Training Details - **Hardware Used**: - **HPC Cluster**: Lanta - **Number of Nodes**: 16 Nodes - **GPUs per Node**: 4 GPUs - **Total GPUs Used**: 64 GPUs - **Fine-tuning Duration**: 3 hours, 18 minutes, and 11 seconds (excluding evaluation) ## Evaluation Results | Type | Encoder | Decoder | IPU24-dataset <br>(test) <br>(Sentence SacreBLEU) | |----------------------------------------|------------------------------------|-------------------------------------|-------------------------------| | Idefic3-8B-Llama3 | siglip-so400m-patch14-384 | Meta-Llama-3.1-8B-Instruct | 0.02657 | | Pathumma-llm-vision-beta-0.0.0 | siglip-so400m-patch14-384 | Meta-Llama-3.1-8B-Instruct | 13.45412 | | Pathumma-llm-vision-1.0.0 | siglip-so400m-patch14-384 | Meta-Llama-3.1-8B-Instruct | **17.66370** | | llama-3-typhoon-v1.5-8b-vision-preview | siglip-so400m-patch14-384 | Llama-3-Typhoon-1.5-8B-instruct | 8.288626 | **\*\*Note**: Other models not target fine-tuned on IPU24-datasets may be less representative of IPU24 performance. - **Accuracy on VQA Tasks with testing a private dataset**: 30.34% ## Required Libraries Before you start, ensure you have the following libraries installed: ``` pip install git+https://github.com/andimarafioti/transformers.git@idefics3 ``` ## Usage We provide a [inference tutorial](https://colab.research.google.com/drive/1TakNg4v6hHFXLih-SFcibxzYBTs2-EFn?usp=sharing). To use the model with the Hugging Face `transformers` library: ```python import io import os import time import random import requests import shutil from IPython.display import display, Markdown from IPython.display import clear_output as cls import numpy as np import pandas as pd from PIL import Image import torch import transformers from transformers import ( Idefics3ForConditionalGeneration, AutoProcessor, BitsAndBytesConfig, ) ``` ```python DEVICE = f"cuda" if torch.cuda.is_available() else 'cpu' if torch.cpu.is_available() else 'mps' print(DEVICE) if DEVICE == 'cuda': display(torch.cuda.device_count()) N = 5 revision = "quantized8bit" processor = AutoProcessor.from_pretrained( "nectec/Pathumma-llm-vision-1.0.0", revision=revision, # Optional do_image_splitting=False, # size={"longest_edge": N*364}, # Optional # size={"height": N*364, "width": N*364}, # Optional ) model = Idefics3ForConditionalGeneration.from_pretrained( "nectec/Pathumma-llm-vision-1.0.0", revision=revision, # Optional torch_dtype=torch.float16, device_map=DEVICE ) print(processor.image_processor.size) url_path = None local_path = "./path/picture.jpg" if not url_path else io.BytesIO(requests.get(url_path).content) image = Image.open(local_path) question = "รายละเอียดของรูปภาพนี้" messages = [ { "role": "user", "content": [ {"type": "text", "text": "You are a helpful assistant."}, {"type": "image"}, {"type": "text", "text": question} ] } ] text = processor.apply_chat_template( messages, add_generation_prompt=True, ) encoding = processor( images=image, text=text.strip(), # padding='max_length', # truncation=True, # max_length=, return_tensors="pt" ) encoding = {k: v.to(DEVICE) for k, v in encoding.items()} # Example: Run inference on text input start_time = time.time() model.eval() with torch.inference_mode(): # Generate generated_ids = model.generate( **inputs, max_new_tokens=128, # temperature=.5, # repetition_penalty=1., # # top_k=1., # top_p=1, ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] end_time = time.time() ## Get letency_time... latency_time = end_time - start_time answer_prompt = generated_text.split('Assistant:')[1].strip() # Output processing (depends on task requirements) print(answer_prompt) print(f"latency_time: {latency_time:.3f} sec.") # >>> output: # >>> ลูกฮิปโปแคระกำลังยืนอยู่ข้างแม่ฮิปโปแคระที่กำลังอาบน้ำ # >>> latency_time: 7.642 sec. ``` ## Limitations and Biases - The model may exhibit biases due to the training data, which might not be fully representative of all contexts. - Performance may degrade on unfamiliar images or non-standard question formats. ## Ethical Considerations - The model should not be used to generate misleading information or in ways that violate privacy. - Consider fairness and minimize bias when using the model for language and image processing tasks. ## Citation If you use this model, please cite it as follows: ```bibtex @misc{PathummaVision, author = {Thirawarit Pitiphiphat and NECTEC Team}, title = {nectec/Pathumma-llm-vision-1.0.0}, year = {2024}, url = {https://huggingface.co/nectec/Pathumma-llm-vision-1.0.0} } ``` ```bibtex @misc{laurençon2024building, title={Building and better understanding vision-language models: insights and future directions.}, author={Hugo Laurençon and Andrés Marafioti and Victor Sanh and Léo Tronchon}, year={2024}, eprint={2408.12637}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## **Contributor Contract** **LLM Team** Pakawat Phasook ([email protected])<br> Jessada Pranee ([email protected])<br> Arnon Saeoung ([email protected])<br> Kun Kerdthaisong ([email protected])<br> Kittisak Sukhantharat ([email protected])<br> Chaianun Damrongrat ([email protected])<br> Sarawoot Kongyoung ([email protected]) **Audio Team** Pattara Tipaksorn ([email protected])<br> Wayupuk Sommuang ([email protected])<br> Oatsada Chatthong ([email protected])<br> Kwanchiva Thangthai ([email protected]) **Vision Team** Thirawarit Pitiphiphat ([email protected])<br> Peerapas Ngokpon ([email protected])<br> Theerasit Issaranon ([email protected]) ## Contact For questions or support, please contact **https://discord.gg/3WJwJjZt7r**. ``` This formatting provides a clean, structured, and readable Markdown layout for these sections. Let me know if further adjustments are needed! ```
{"base_model": ["HuggingFaceM4/Idefics3-8B-Llama3"], "language": ["th", "en"], "metrics": ["sacrebleu"], "pipeline_tag": "visual-question-answering"}
task
[ "QUESTION_ANSWERING" ]
42,968
mbazaNLP/Quantized_Nllb_Finetuned_Edu_En_Kin_8bit
mbazaNLP
translation
[ "transformers", "translation", "en", "rw", "dataset:mbazaNLP/NMT_Tourism_parallel_data_en_kin", "dataset:mbazaNLP/NMT_Education_parallel_data_en_kin", "dataset:mbazaNLP/Kinyarwanda_English_parallel_dataset", "license:cc-by-2.0", "endpoints_compatible", "region:us" ]
2023-10-02T09:33:06Z
2023-10-02T09:51:33+00:00
7
0
--- datasets: - mbazaNLP/NMT_Tourism_parallel_data_en_kin - mbazaNLP/NMT_Education_parallel_data_en_kin - mbazaNLP/Kinyarwanda_English_parallel_dataset language: - en - rw library_name: transformers license: cc-by-2.0 pipeline_tag: translation --- ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is a Machine Translation model, finetuned from [NLLB](https://huggingface.co/facebook/nllb-200-distilled-1.3B)-200's distilled 1.3B model, it is meant to be used in machine translation for education-related data. - **Finetuning code repository:** the code used to finetune this model can be found [here](https://github.com/Digital-Umuganda/twb_nllb_finetuning) ## Quantization details The model is quantized to 8-bit precision using the Ctranslate2 library. ``` pip install ctranslate2 ``` Using the command: ``` ct2-transformers-converter --model <model-dir> --quantization int8 --output_dir <output-model-dir> ``` <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## How to Get Started with the Model Use the code below to get started with the model. ### Training Procedure The model was finetuned on three datasets; a [general](https://huggingface.co/datasets/mbazaNLP/Kinyarwanda_English_parallel_dataset) purpose dataset, a [tourism](https://huggingface.co/datasets/mbazaNLP/NMT_Tourism_parallel_data_en_kin), and an [education](https://huggingface.co/datasets/mbazaNLP/NMT_Education_parallel_data_en_kin) dataset. The model was finetuned in two phases. #### Phase one: - General purpose dataset - Education dataset - Tourism dataset #### Phase two: - Education dataset Other than the dataset changes between phase one, and phase two finetuning; no other hyperparameters were modified. In both cases, the model was trained on an A100 40GB GPU for two epochs. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> <!-- This should link to a Data Card if possible. --> #### Metrics Model performance was measured using BLEU, spBLEU, TER, and chrF++ metrics. ### Results
null
Non_BioNLP
## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is a Machine Translation model, finetuned from [NLLB](https://huggingface.co/facebook/nllb-200-distilled-1.3B)-200's distilled 1.3B model, it is meant to be used in machine translation for education-related data. - **Finetuning code repository:** the code used to finetune this model can be found [here](https://github.com/Digital-Umuganda/twb_nllb_finetuning) ## Quantization details The model is quantized to 8-bit precision using the Ctranslate2 library. ``` pip install ctranslate2 ``` Using the command: ``` ct2-transformers-converter --model <model-dir> --quantization int8 --output_dir <output-model-dir> ``` <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## How to Get Started with the Model Use the code below to get started with the model. ### Training Procedure The model was finetuned on three datasets; a [general](https://huggingface.co/datasets/mbazaNLP/Kinyarwanda_English_parallel_dataset) purpose dataset, a [tourism](https://huggingface.co/datasets/mbazaNLP/NMT_Tourism_parallel_data_en_kin), and an [education](https://huggingface.co/datasets/mbazaNLP/NMT_Education_parallel_data_en_kin) dataset. The model was finetuned in two phases. #### Phase one: - General purpose dataset - Education dataset - Tourism dataset #### Phase two: - Education dataset Other than the dataset changes between phase one, and phase two finetuning; no other hyperparameters were modified. In both cases, the model was trained on an A100 40GB GPU for two epochs. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> <!-- This should link to a Data Card if possible. --> #### Metrics Model performance was measured using BLEU, spBLEU, TER, and chrF++ metrics. ### Results
{"datasets": ["mbazaNLP/NMT_Tourism_parallel_data_en_kin", "mbazaNLP/NMT_Education_parallel_data_en_kin", "mbazaNLP/Kinyarwanda_English_parallel_dataset"], "language": ["en", "rw"], "library_name": "transformers", "license": "cc-by-2.0", "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
42,969
Helsinki-NLP/opus-mt-toi-en
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "toi", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T12:07:11+00:00
604
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-toi-en * source languages: toi * target languages: en * OPUS readme: [toi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.toi.en | 39.0 | 0.539 |
null
Non_BioNLP
### opus-mt-toi-en * source languages: toi * target languages: en * OPUS readme: [toi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.toi.en | 39.0 | 0.539 |
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
42,970
vidhi0206/setfit-paraphrase-mpnet-emotion
vidhi0206
text-classification
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
2024-02-28T12:34:57Z
2024-03-06T09:41:22+00:00
5
0
--- base_model: sentence-transformers/paraphrase-mpnet-base-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: i honestly thought impossible at this point i feel pretty - text: i feel convinced that im going to shy away from whatever is really good for me - text: i feel guilt that i should be more caring and im not - text: i found myself feeling nostalgic as i thought about the temporarily abandoned little bishop chronicles - text: i am feeling very indecisive and spontaneous inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.5225 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 6 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | <ul><li>'i feel so much better about that number'</li><li>'i feel like i have reached a plateau where im not buying as much as i use to and feeling more satisfied with my wardrobe and personal style'</li><li>'i feel especially thankful'</li></ul> | | 3 | <ul><li>'i feel so violent just want to break some glass'</li><li>'i always feel rushed on the way to visit no comments'</li><li>'i think maybe about how strongly she feels about him and being there for him but brad looks really distracted'</li></ul> | | 5 | <ul><li>'i feel like when i was a kid it was constantly impressed upon me how awesome ants are'</li><li>'i feel like it s a boy i would be pretty shocked if it was so somewhere in there my gut or my brain is saying girl'</li><li>'i feel like every day i walk around with so much stress and sadness that im literally amazed im still here that i still function that im still basically a friendly stable person'</li></ul> | | 0 | <ul><li>'i would feel that a few words would be not only inadequate but a travesty'</li><li>'i attributed this depression to feeling inadequate against the unrealistic ideals of the lds church and while i still hold those ideals somewhat responsible i recognize this pattern of behavior'</li><li>'ive been resting and feeling generally unpleasant and queasy but in that frustrating background way where you dont feel right but cant place an exact cause'</li></ul> | | 4 | <ul><li>'i was starting to feel scared for both of their safety and i wish those officers hadn t left no matter how much i hated them'</li><li>'i am already feeling frantic'</li><li>'i believe in you moment we all feel til then it s one more skeptical song'</li></ul> | | 2 | <ul><li>'i do feel sympathetic to the parties involved now that their careers are down the drain'</li><li>'i like frappes and shit when im feeling naughty but i drink tea daily'</li><li>'i will pay a month for months and feel shame every time i grill a hot dog from that point on'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.5225 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-emotion") # Run inference preds = model("i am feeling very indecisive and spontaneous") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 4 | 19.3333 | 48 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 8 | | 1 | 8 | | 2 | 8 | | 3 | 8 | | 4 | 8 | | 5 | 8 | ### Training Hyperparameters - batch_size: (8, 8) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0042 | 1 | 0.3009 | - | | 0.2083 | 50 | 0.1916 | - | | 0.4167 | 100 | 0.0393 | - | | 0.625 | 150 | 0.0129 | - | | 0.8333 | 200 | 0.0034 | - | ### Framework Versions - Python: 3.8.10 - SetFit: 1.0.3 - Sentence Transformers: 2.3.1 - Transformers: 4.37.2 - PyTorch: 2.2.0+cu121 - Datasets: 2.17.0 - Tokenizers: 0.15.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 6 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | <ul><li>'i feel so much better about that number'</li><li>'i feel like i have reached a plateau where im not buying as much as i use to and feeling more satisfied with my wardrobe and personal style'</li><li>'i feel especially thankful'</li></ul> | | 3 | <ul><li>'i feel so violent just want to break some glass'</li><li>'i always feel rushed on the way to visit no comments'</li><li>'i think maybe about how strongly she feels about him and being there for him but brad looks really distracted'</li></ul> | | 5 | <ul><li>'i feel like when i was a kid it was constantly impressed upon me how awesome ants are'</li><li>'i feel like it s a boy i would be pretty shocked if it was so somewhere in there my gut or my brain is saying girl'</li><li>'i feel like every day i walk around with so much stress and sadness that im literally amazed im still here that i still function that im still basically a friendly stable person'</li></ul> | | 0 | <ul><li>'i would feel that a few words would be not only inadequate but a travesty'</li><li>'i attributed this depression to feeling inadequate against the unrealistic ideals of the lds church and while i still hold those ideals somewhat responsible i recognize this pattern of behavior'</li><li>'ive been resting and feeling generally unpleasant and queasy but in that frustrating background way where you dont feel right but cant place an exact cause'</li></ul> | | 4 | <ul><li>'i was starting to feel scared for both of their safety and i wish those officers hadn t left no matter how much i hated them'</li><li>'i am already feeling frantic'</li><li>'i believe in you moment we all feel til then it s one more skeptical song'</li></ul> | | 2 | <ul><li>'i do feel sympathetic to the parties involved now that their careers are down the drain'</li><li>'i like frappes and shit when im feeling naughty but i drink tea daily'</li><li>'i will pay a month for months and feel shame every time i grill a hot dog from that point on'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.5225 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-emotion") # Run inference preds = model("i am feeling very indecisive and spontaneous") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 4 | 19.3333 | 48 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 8 | | 1 | 8 | | 2 | 8 | | 3 | 8 | | 4 | 8 | | 5 | 8 | ### Training Hyperparameters - batch_size: (8, 8) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0042 | 1 | 0.3009 | - | | 0.2083 | 50 | 0.1916 | - | | 0.4167 | 100 | 0.0393 | - | | 0.625 | 150 | 0.0129 | - | | 0.8333 | 200 | 0.0034 | - | ### Framework Versions - Python: 3.8.10 - SetFit: 1.0.3 - Sentence Transformers: 2.3.1 - Transformers: 4.37.2 - PyTorch: 2.2.0+cu121 - Datasets: 2.17.0 - Tokenizers: 0.15.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "i honestly thought impossible at this point i feel pretty"}, {"text": "i feel convinced that im going to shy away from whatever is really good for me"}, {"text": "i feel guilt that i should be more caring and im not"}, {"text": "i found myself feeling nostalgic as i thought about the temporarily abandoned little bishop chronicles"}, {"text": "i am feeling very indecisive and spontaneous"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.5225, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,971
DanielVelaJ/distilbert-base-uncased-finetuned-emotion
DanielVelaJ
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-09-01T12:31:22Z
2023-02-24T19:33:24+00:00
20
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - type: accuracy value: 0.926 name: Accuracy - type: f1 value: 0.9258845524992532 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2137 - Accuracy: 0.926 - F1: 0.9259 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8322 | 1.0 | 250 | 0.3065 | 0.9105 | 0.9091 | | 0.2386 | 2.0 | 500 | 0.2137 | 0.926 | 0.9259 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2137 - Accuracy: 0.926 - F1: 0.9259 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8322 | 1.0 | 250 | 0.3065 | 0.9105 | 0.9091 | | 0.2386 | 2.0 | 500 | 0.2137 | 0.926 | 0.9259 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.926, "name": "Accuracy"}, {"type": "f1", "value": 0.9258845524992532, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,972
Kronk0s/autotrain-carbon-finbert-with-validation
Kronk0s
text-classification
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-05-22T06:33:36Z
2024-05-22T07:00:35+00:00
0
0
--- tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.9161890149116516 f1_macro: 0.5221418534756406 f1_micro: 0.6065573770491803 f1_weighted: 0.5878515563275015 precision_macro: 0.5509259259259259 precision_micro: 0.6065573770491803 precision_weighted: 0.5908469945355193 recall_macro: 0.5222294630896782 recall_micro: 0.6065573770491803 recall_weighted: 0.6065573770491803 accuracy: 0.6065573770491803
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.9161890149116516 f1_macro: 0.5221418534756406 f1_micro: 0.6065573770491803 f1_weighted: 0.5878515563275015 precision_macro: 0.5509259259259259 precision_micro: 0.6065573770491803 precision_weighted: 0.5908469945355193 recall_macro: 0.5222294630896782 recall_micro: 0.6065573770491803 recall_weighted: 0.6065573770491803 accuracy: 0.6065573770491803
{"tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
task
[ "TEXT_CLASSIFICATION" ]
42,973
TheBloke/Claire-7B-0.1-GPTQ
TheBloke
text-generation
[ "transformers", "safetensors", "falcon", "text-generation", "pretrained", "conversational", "custom_code", "fr", "base_model:OpenLLM-France/Claire-7B-0.1", "base_model:quantized:OpenLLM-France/Claire-7B-0.1", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
2023-11-14T11:54:52Z
2023-11-14T13:33:43+00:00
41
1
--- base_model: OpenLLM-France/Claire-7B-0.1 language: - fr license: cc-by-nc-sa-4.0 model_name: Claire 7B 0.1 pipeline_tag: text-generation tags: - pretrained - conversational inference: false model_creator: OpenLLM France model_type: falcon prompt_template: '- Bonjour BotName, {prompt} - Bonjour UserName, ' quantized_by: TheBloke widget: - example_title: Request for a recipe group: Dash text: '- Bonjour Dominique, qu''allez-vous nous cuisiner aujourd''hui ? - Bonjour Camille,' - example_title: Request for a recipe group: Intervenant text: '[Intervenant 1:] Bonjour Dominique, qu''allez-vous nous cuisiner aujourd''hui ? [Intervenant 2:] Bonjour Camille,' - example_title: Request for a recipe group: FirstName text: '[Camille:] Bonjour Dominique, qu''allez-vous nous cuisiner aujourd''hui ? [Dominique:] Bonjour Camille,' - example_title: Request for a recipe group: Named text: '[Camille Durand:] Bonjour Dominique, qu''allez-vous nous cuisiner aujourd''hui ? [Dominique Petit:] Bonjour Camille,' --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Claire 7B 0.1 - GPTQ - Model creator: [OpenLLM France](https://huggingface.co/OpenLLM-France) - Original model: [Claire 7B 0.1](https://huggingface.co/OpenLLM-France/Claire-7B-0.1) <!-- description start --> # Description This repo contains GPTQ model files for [OpenLLM France's Claire 7B 0.1](https://huggingface.co/OpenLLM-France/Claire-7B-0.1). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Claire-7B-0.1-GGUF) * [OpenLLM France's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenLLM-France/Claire-7B-0.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: OpenLLM-France ``` - Bonjour BotName, {prompt} - Bonjour UserName, ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.04 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.43 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.99 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.96 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.94 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.17 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Claire-7B-0.1-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Claire-7B-0.1-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Claire-7B-0.1-GPTQ`: ```shell mkdir Claire-7B-0.1-GPTQ huggingface-cli download TheBloke/Claire-7B-0.1-GPTQ --local-dir Claire-7B-0.1-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Claire-7B-0.1-GPTQ huggingface-cli download TheBloke/Claire-7B-0.1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Claire-7B-0.1-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Claire-7B-0.1-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Claire-7B-0.1-GPTQ --local-dir Claire-7B-0.1-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Claire-7B-0.1-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Claire-7B-0.1-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Claire-7B-0.1-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Claire-7B-0.1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''- Bonjour BotName, {prompt} - Bonjour UserName, ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Claire-7B-0.1-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''- Bonjour BotName, {prompt} - Bonjour UserName, ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: OpenLLM France's Claire 7B 0.1 # Claire-7B-0.1 **Claire-7B-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) and [OpenLLM-France](https://github.com/OpenLLM-France)** **adapted from [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on French conversational data.** Claire-7B-0.1 is a pretrained language model designed to be attuned to the dynamics of linguistic interactions in dialogue. Without further training, its expected use is to generate continuations of dialogues. Its main purpose is to serve as a base model for fine-tuning on dialogue generation (e.g., chat) and dialogue understanding (e.g., meeting summarization) tasks. Please note that due to its training, the model is prone to generate dialogues with disfluencies and other constructions common to spoken language. ## Typical usage ```python import transformers import torch model_name = "OpenLLM-France/Claire-7B-0.1" tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) model = transformers.AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.bfloat16, load_in_4bit=True # For efficient inference, if supported by the GPU card ) pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer) generation_kwargs = dict( num_return_sequences=1, # Number of variants to generate. return_full_text= False, # Do not include the prompt in the generated text. max_new_tokens=200, # Maximum length for the output text. do_sample=True, top_k=10, temperature=1.0, # Sampling parameters. pad_token_id=tokenizer.eos_token_id, # Just to avoid a harmless warning. ) prompt = """\ - Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ? - Bonjour Camille,\ """ completions = pipeline(prompt, **generation_kwargs) for completion in completions: print(prompt + " […]" + completion['generated_text']) ``` This will print something like: ``` - Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ? - Bonjour Camille, […] je vous prépare un plat de saison, une daube provençale. - Ah je ne connais pas cette recette. - C'est très facile à préparer, vous n'avez qu'à mettre de l'eau dans une marmite, y mettre de l'oignon émincé, des carottes coupées en petits morceaux, et vous allez mettre votre viande de bœuf coupé en petits morceaux également. - Je n'ai jamais cuisiné de viande de bœuf, mais c'est vrai que ça a l'air bien facile. - Vous n'avez plus qu'à laisser mijoter, et ensuite il sera temps de servir les clients. - Très bien. ``` You will need at least 6GB of VRAM to run inference using 4bit quantization (16GB of VRAM without 4bit quantization). If you have trouble running this code, make sure you have recent versions of `torch`, `transformers` and `accelerate` (see [requirements.txt](requirements.txt)). ### Typical prompts Claire-7B-0.1 was trained on diarized French conversations. During training, the dialogues were normalized in several formats. The possible formats for expected prompts are as follows: A monologue can be specified as a single line prompt (though keep in mind that Claire might still return a dialogue because of its training): ```python prompt = "Mesdames et messieurs les députés, chers collègues, bonsoir. Vous l'aurez peut-être remarqué, je cite rarement" ``` A dialogue between two speakers can be specified with one line per speech turn starting with a dash: ```python prompt = """\ - Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ? - Bonjour Camille,\ """ ``` A dialogue or multilogue (with two or more speakers) can be specified with lines that start with `[Intervenant X:]` where `X` is a number: ```python prompt = """\ [Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ? [Intervenant 2:] Bonjour Camille,\ """ ``` A dialogue or multilogue with named speakers can be specified with lines that start with `[SpeakerName:]` where `SpeakerName` can be a first name, a first and a last name, a nickname, a title… ```python prompt = """\ [Mme Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ? [Mr. Dominique Petit:] Bonjour Camille,\ """ ``` ## Training Details ### Training Data Claire-7B-0.1 was tuned from Falcon-7b on the following data distribution: | **Data type** | **Words** | **Training Sampling Weight** | **Sources** | |-------------------------------|------------|------------------------------|-----------------------------------------------------| | Parliamentary Proceedings | 135M | 35% | assemblee-nationale.fr | | Theatre | 16M | 18% | theatre-classique.fr, theatregratuit.com | | Interviews | 6.4M | 29% | TCOF, CFPP, CFPB, ACSYNT, PFC, Valibel (ORFEO), ESLO | | Free Conversations | 2.2M | 10% | CRFP, OFROM, CID, Rhapsodie, ParisStories, PFC, CLAPI, C-ORAL-ROM (ORFEO), LinTO, ESLO | | Meetings | 1.2M | 5% | SUMM-RE, LinTO, Réunions de travail (ORFEO) | | Debates | 402k | <2% | FreD, ESLO | | Assistance | 159k | <1% | Fleuron (ORFEO), Accueil UBS, OTG, ESLO | | Presentation, Formal Address | 86k | <0.5% | Valibel (ORFEO), LinTO, ESLO | Training data was augmented with the following techniques: * varying the format used to indicate speech turns (dashes or [XXX:]) * substituting [Intervenant X:] for [SpeakerName:] or vice versa, where [SpeakerName:] might be a real name or a randomly generated name * removing punctuation marks and/or casing (to prepare the model for transcripts produced by some Automatic Speech Recognition systems) Long conversations were truncated at a maximum of 2048 tokens. Where possible, they were split between speaker turns. While the model has been trained and evaluated only on French dialogues, it may be able to generate conversations in other languages from the original Falcon-7b training data. ### Training Procedure Claire-7B-0.1 is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). See [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) for more details. Claire-7B-0.1 was trained on 1 A100 80GB GPU for about 50 GPU hours. Hyperparameters were the following: | **Hyperparameter** | **Value** | |--------------------|------------| | Precision | `bfloat16` | | Optimizer | AdamW | | Learning rate | 1e-4 | | Weight decay | 1e-2 | | Batch size | 132 | | LoRA rank | 16 | | LoRA alpha | 32 | | Dropout | 0.05 | | gradient clipping | 1 | ## Evaluation To evaluate Claire-7B-0.1’s ability to generate natural sounding, French conversations, we compared its responses to a variety of prompts with those of three other models: * [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b), * [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1) (a version of Mistral-7B-v0.1 adapted in the same fashion as Claire-7B-0.1) We tested an even mixture of monologue and dialogue-style prompts. Each of the four generated responses was evaluated along three dimensions: Interaction, Fluency and Relevance. Evaluators were also asked to rank the four responses by preference. Our results confirm that continual pre-training of Falcon-7b and Mistral-7B-v0.1 leads to improvement (relative to the base models) along all three evaluation dimensions and that Claire-7B-0.1 outperforms the adapted Mistral counterpart in the Fluency and Relevance categories (and in the Interaction category if we focus on dialogue-style prompts). Ranking results also reveal a clear subjective preference for Claire-7B-0.1, as shown in the following table: <!--| | **Claire-Falcon** | **Claire-Mistral** | **Falcon** | **Mistral** | --> | | <span style="font-weight: normal">... over</span><br /> **Claire-Falcon** | <span style="font-weight: normal">... over</span><br /> **Claire-Mistral** | <span style="font-weight: normal">... over</span><br /> **Falcon** | <span style="font-weight: normal">... over</span><br /> **Mistral** | |--------------------------------------|----------------------|-----------------------|---------------|---------------------| | prefer<br /> **Claire-Falcon** ... | | **62.2%** | **63.9%** | **83.8%** | | prefer<br /> **Claire-Mistral** ... | _34.8%_ | | **56.2%** | **75.3%** | | prefer<br /> **Falcon** ... | _36.1%_ | _43.8%_ | | **81.4%** | | prefer<br /> **Mistral** ... | _16.2%_ | _24.7%_ | _18.6%_ | | (In this table, "Claire-Falcon" stands for Claire-7B-0.1, "Falcon", for [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b), "Mistral", for [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and "Claire-Mistral", for [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1).) Please note that the model can generate disfluencies and humorous responses as a result of its training on spoken and theatrical text. More evaluation details will be provided in a separate publication. ## License Given that some of the corpora used for training are only available under CC-BY-NC-SA licenses, Claire-7B-0.1 is made available under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/). You can find a variant of this model published under the Apache 2.0 license at [OpenLLM-France/Claire-7B-Apache-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-Apache-0.1). ## Acknowledgements This work was performed using HPC resources from GENCI–IDRIS (Grant 2023-AD011014561). Claire-7B-0.1 was created by members of [LINAGORA](https://labs.linagora.com/) (in alphabetical order): Ismaïl Harrando, Julie Hunter, Jean-Pierre Lorré, Jérôme Louradour, Michel-Marie Maudet, Virgile Rennard, Guokan Shang. Special thanks to partners from the OpenLLM-France community, especially Christophe Cerisara (LORIA), Pierre-Carl Langlais and Anastasia Stasenko (OpSci), and Pierre Colombo, for valuable advice. ## Contact [email protected]
null
Non_BioNLP
<!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Claire 7B 0.1 - GPTQ - Model creator: [OpenLLM France](https://huggingface.co/OpenLLM-France) - Original model: [Claire 7B 0.1](https://huggingface.co/OpenLLM-France/Claire-7B-0.1) <!-- description start --> # Description This repo contains GPTQ model files for [OpenLLM France's Claire 7B 0.1](https://huggingface.co/OpenLLM-France/Claire-7B-0.1). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Claire-7B-0.1-GGUF) * [OpenLLM France's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenLLM-France/Claire-7B-0.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: OpenLLM-France ``` - Bonjour BotName, {prompt} - Bonjour UserName, ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.04 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.43 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.99 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.96 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.94 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [french](https://huggingface.co/datasets/Kant1/French_Wikipedia_articles/viewer/) | 2048 | 4.17 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Claire-7B-0.1-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Claire-7B-0.1-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Claire-7B-0.1-GPTQ`: ```shell mkdir Claire-7B-0.1-GPTQ huggingface-cli download TheBloke/Claire-7B-0.1-GPTQ --local-dir Claire-7B-0.1-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Claire-7B-0.1-GPTQ huggingface-cli download TheBloke/Claire-7B-0.1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Claire-7B-0.1-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Claire-7B-0.1-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Claire-7B-0.1-GPTQ --local-dir Claire-7B-0.1-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Claire-7B-0.1-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Claire-7B-0.1-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Claire-7B-0.1-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Claire-7B-0.1-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Claire-7B-0.1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''- Bonjour BotName, {prompt} - Bonjour UserName, ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Claire-7B-0.1-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''- Bonjour BotName, {prompt} - Bonjour UserName, ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: OpenLLM France's Claire 7B 0.1 # Claire-7B-0.1 **Claire-7B-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) and [OpenLLM-France](https://github.com/OpenLLM-France)** **adapted from [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on French conversational data.** Claire-7B-0.1 is a pretrained language model designed to be attuned to the dynamics of linguistic interactions in dialogue. Without further training, its expected use is to generate continuations of dialogues. Its main purpose is to serve as a base model for fine-tuning on dialogue generation (e.g., chat) and dialogue understanding (e.g., meeting summarization) tasks. Please note that due to its training, the model is prone to generate dialogues with disfluencies and other constructions common to spoken language. ## Typical usage ```python import transformers import torch model_name = "OpenLLM-France/Claire-7B-0.1" tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) model = transformers.AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.bfloat16, load_in_4bit=True # For efficient inference, if supported by the GPU card ) pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer) generation_kwargs = dict( num_return_sequences=1, # Number of variants to generate. return_full_text= False, # Do not include the prompt in the generated text. max_new_tokens=200, # Maximum length for the output text. do_sample=True, top_k=10, temperature=1.0, # Sampling parameters. pad_token_id=tokenizer.eos_token_id, # Just to avoid a harmless warning. ) prompt = """\ - Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ? - Bonjour Camille,\ """ completions = pipeline(prompt, **generation_kwargs) for completion in completions: print(prompt + " […]" + completion['generated_text']) ``` This will print something like: ``` - Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ? - Bonjour Camille, […] je vous prépare un plat de saison, une daube provençale. - Ah je ne connais pas cette recette. - C'est très facile à préparer, vous n'avez qu'à mettre de l'eau dans une marmite, y mettre de l'oignon émincé, des carottes coupées en petits morceaux, et vous allez mettre votre viande de bœuf coupé en petits morceaux également. - Je n'ai jamais cuisiné de viande de bœuf, mais c'est vrai que ça a l'air bien facile. - Vous n'avez plus qu'à laisser mijoter, et ensuite il sera temps de servir les clients. - Très bien. ``` You will need at least 6GB of VRAM to run inference using 4bit quantization (16GB of VRAM without 4bit quantization). If you have trouble running this code, make sure you have recent versions of `torch`, `transformers` and `accelerate` (see [requirements.txt](requirements.txt)). ### Typical prompts Claire-7B-0.1 was trained on diarized French conversations. During training, the dialogues were normalized in several formats. The possible formats for expected prompts are as follows: A monologue can be specified as a single line prompt (though keep in mind that Claire might still return a dialogue because of its training): ```python prompt = "Mesdames et messieurs les députés, chers collègues, bonsoir. Vous l'aurez peut-être remarqué, je cite rarement" ``` A dialogue between two speakers can be specified with one line per speech turn starting with a dash: ```python prompt = """\ - Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ? - Bonjour Camille,\ """ ``` A dialogue or multilogue (with two or more speakers) can be specified with lines that start with `[Intervenant X:]` where `X` is a number: ```python prompt = """\ [Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ? [Intervenant 2:] Bonjour Camille,\ """ ``` A dialogue or multilogue with named speakers can be specified with lines that start with `[SpeakerName:]` where `SpeakerName` can be a first name, a first and a last name, a nickname, a title… ```python prompt = """\ [Mme Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ? [Mr. Dominique Petit:] Bonjour Camille,\ """ ``` ## Training Details ### Training Data Claire-7B-0.1 was tuned from Falcon-7b on the following data distribution: | **Data type** | **Words** | **Training Sampling Weight** | **Sources** | |-------------------------------|------------|------------------------------|-----------------------------------------------------| | Parliamentary Proceedings | 135M | 35% | assemblee-nationale.fr | | Theatre | 16M | 18% | theatre-classique.fr, theatregratuit.com | | Interviews | 6.4M | 29% | TCOF, CFPP, CFPB, ACSYNT, PFC, Valibel (ORFEO), ESLO | | Free Conversations | 2.2M | 10% | CRFP, OFROM, CID, Rhapsodie, ParisStories, PFC, CLAPI, C-ORAL-ROM (ORFEO), LinTO, ESLO | | Meetings | 1.2M | 5% | SUMM-RE, LinTO, Réunions de travail (ORFEO) | | Debates | 402k | <2% | FreD, ESLO | | Assistance | 159k | <1% | Fleuron (ORFEO), Accueil UBS, OTG, ESLO | | Presentation, Formal Address | 86k | <0.5% | Valibel (ORFEO), LinTO, ESLO | Training data was augmented with the following techniques: * varying the format used to indicate speech turns (dashes or [XXX:]) * substituting [Intervenant X:] for [SpeakerName:] or vice versa, where [SpeakerName:] might be a real name or a randomly generated name * removing punctuation marks and/or casing (to prepare the model for transcripts produced by some Automatic Speech Recognition systems) Long conversations were truncated at a maximum of 2048 tokens. Where possible, they were split between speaker turns. While the model has been trained and evaluated only on French dialogues, it may be able to generate conversations in other languages from the original Falcon-7b training data. ### Training Procedure Claire-7B-0.1 is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). See [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) for more details. Claire-7B-0.1 was trained on 1 A100 80GB GPU for about 50 GPU hours. Hyperparameters were the following: | **Hyperparameter** | **Value** | |--------------------|------------| | Precision | `bfloat16` | | Optimizer | AdamW | | Learning rate | 1e-4 | | Weight decay | 1e-2 | | Batch size | 132 | | LoRA rank | 16 | | LoRA alpha | 32 | | Dropout | 0.05 | | gradient clipping | 1 | ## Evaluation To evaluate Claire-7B-0.1’s ability to generate natural sounding, French conversations, we compared its responses to a variety of prompts with those of three other models: * [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b), * [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1) (a version of Mistral-7B-v0.1 adapted in the same fashion as Claire-7B-0.1) We tested an even mixture of monologue and dialogue-style prompts. Each of the four generated responses was evaluated along three dimensions: Interaction, Fluency and Relevance. Evaluators were also asked to rank the four responses by preference. Our results confirm that continual pre-training of Falcon-7b and Mistral-7B-v0.1 leads to improvement (relative to the base models) along all three evaluation dimensions and that Claire-7B-0.1 outperforms the adapted Mistral counterpart in the Fluency and Relevance categories (and in the Interaction category if we focus on dialogue-style prompts). Ranking results also reveal a clear subjective preference for Claire-7B-0.1, as shown in the following table: <!--| | **Claire-Falcon** | **Claire-Mistral** | **Falcon** | **Mistral** | --> | | <span style="font-weight: normal">... over</span><br /> **Claire-Falcon** | <span style="font-weight: normal">... over</span><br /> **Claire-Mistral** | <span style="font-weight: normal">... over</span><br /> **Falcon** | <span style="font-weight: normal">... over</span><br /> **Mistral** | |--------------------------------------|----------------------|-----------------------|---------------|---------------------| | prefer<br /> **Claire-Falcon** ... | | **62.2%** | **63.9%** | **83.8%** | | prefer<br /> **Claire-Mistral** ... | _34.8%_ | | **56.2%** | **75.3%** | | prefer<br /> **Falcon** ... | _36.1%_ | _43.8%_ | | **81.4%** | | prefer<br /> **Mistral** ... | _16.2%_ | _24.7%_ | _18.6%_ | | (In this table, "Claire-Falcon" stands for Claire-7B-0.1, "Falcon", for [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b), "Mistral", for [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and "Claire-Mistral", for [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1).) Please note that the model can generate disfluencies and humorous responses as a result of its training on spoken and theatrical text. More evaluation details will be provided in a separate publication. ## License Given that some of the corpora used for training are only available under CC-BY-NC-SA licenses, Claire-7B-0.1 is made available under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/). You can find a variant of this model published under the Apache 2.0 license at [OpenLLM-France/Claire-7B-Apache-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-Apache-0.1). ## Acknowledgements This work was performed using HPC resources from GENCI–IDRIS (Grant 2023-AD011014561). Claire-7B-0.1 was created by members of [LINAGORA](https://labs.linagora.com/) (in alphabetical order): Ismaïl Harrando, Julie Hunter, Jean-Pierre Lorré, Jérôme Louradour, Michel-Marie Maudet, Virgile Rennard, Guokan Shang. Special thanks to partners from the OpenLLM-France community, especially Christophe Cerisara (LORIA), Pierre-Carl Langlais and Anastasia Stasenko (OpSci), and Pierre Colombo, for valuable advice. ## Contact [email protected]
{"base_model": "OpenLLM-France/Claire-7B-0.1", "language": ["fr"], "license": "cc-by-nc-sa-4.0", "model_name": "Claire 7B 0.1", "pipeline_tag": "text-generation", "tags": ["pretrained", "conversational"], "inference": false, "model_creator": "OpenLLM France", "model_type": "falcon", "prompt_template": "- Bonjour BotName, {prompt}\n- Bonjour UserName,\n", "quantized_by": "TheBloke", "widget": [{"example_title": "Request for a recipe", "group": "Dash", "text": "- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?\n- Bonjour Camille,"}, {"example_title": "Request for a recipe", "group": "Intervenant", "text": "[Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?\n[Intervenant 2:] Bonjour Camille,"}, {"example_title": "Request for a recipe", "group": "FirstName", "text": "[Camille:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?\n[Dominique:] Bonjour Camille,"}, {"example_title": "Request for a recipe", "group": "Named", "text": "[Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?\n[Dominique Petit:] Bonjour Camille,"}]}
task
[ "SUMMARIZATION" ]
42,974
shwan/readme_test
shwan
summarization
[ "korean", "klue", "summarization", "ko", "dataset:c4", "license:apache-2.0", "region:us" ]
2022-08-24T06:49:32Z
2022-08-24T06:57:15+00:00
0
0
--- datasets: - c4 language: ko license: apache-2.0 tags: - korean - klue - summarization --- # KoMiniLM 🐣 Korean mini language model ## Overview Current language models usually consist of hundreds of millions of parameters which brings challenges for fine-tuning and online serving in real-life applications due to latency and capacity constraints. In this project, we release a light weight korean language model to address the aforementioned shortcomings of existing language models. ## Quick tour ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("BM-K/KoMiniLM") # 23M model model = AutoModel.from_pretrained("BM-K/KoMiniLM") inputs = tokenizer("안녕 세상아!", return_tensors="pt") outputs = model(**inputs) ``` ## Update history ** Updates on 2022.06.20 ** - Release KoMiniLM-bert-68M ** Updates on 2022.05.24 ** - Release KoMiniLM-bert-23M ## Pre-training `Teacher Model`: [KLUE-BERT(base)](https://github.com/KLUE-benchmark/KLUE) ### Object Self-Attention Distribution and Self-Attention Value-Relation [[Wang et al., 2020]] were distilled from each discrete layer of the teacher model to the student model. Wang et al. distilled in the last layer of the transformer, but that was not the case in this project. ### Data sets |Data|News comments|News article| |:----:|:----:|:----:| |size|10G|10G| ### Config - **KoMiniLM-23M** ```json { "architectures": [ "BartForPreTraining" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 384, "initializer_range": 0.02, "intermediate_size": 1536, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bart", "num_attention_heads": 12, "num_hidden_layers": 6, "output_attentions": true, "pad_token_id": 0, "position_embedding_type": "absolute", "return_dict": false, "torch_dtype": "float32", "transformers_version": "4.13.0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 32000 } ``` ### Performance on subtasks - The results of our fine-tuning experiments are an average of 3 runs for each task. ``` cd KoMiniLM-Finetune bash scripts/run_all_kominilm.sh ``` || #Param | Average | NSMC<br>(Acc) | Naver NER<br>(F1) | PAWS<br>(Acc) | KorNLI<br>(Acc) | KorSTS<br>(Spearman) | Question Pair<br>(Acc) | KorQuaD<br>(Dev)<br>(EM/F1) | |:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:| |KoBERT(KLUE)| 110M | 86.84 | 90.20±0.07 | 87.11±0.05 | 81.36±0.21 | 81.06±0.33 | 82.47±0.14 | 95.03±0.44 | 84.43±0.18 / <br>93.05±0.04 | |KcBERT| 108M | 78.94 | 89.60±0.10 | 84.34±0.13 | 67.02±0.42| 74.17±0.52 | 76.57±0.51 | 93.97±0.27 | 60.87±0.27 / <br>85.01±0.14 | |KoBERT(SKT)| 92M | 79.73 | 89.28±0.42 | 87.54±0.04 | 80.93±0.91 | 78.18±0.45 | 75.98±2.81 | 94.37±0.31 | 51.94±0.60 / <br>79.69±0.66 | |DistilKoBERT| 28M | 74.73 | 88.39±0.08 | 84.22±0.01 | 61.74±0.45 | 70.22±0.14 | 72.11±0.27 | 92.65±0.16 | 52.52±0.48 / <br>76.00±0.71 | | | | | | | | | | | |**KoMiniLM<sup>†</sup>**| **68M** | 85.90 | 89.84±0.02 | 85.98±0.09 | 80.78±0.30 | 79.28±0.17 | 81.00±0.07 | 94.89±0.37 | 83.27±0.08 / <br>92.08±0.06 | |**KoMiniLM<sup>†</sup>**| **23M** | 84.79 | 89.67±0.03 | 84.79±0.09 | 78.67±0.45 | 78.10±0.07 | 78.90±0.11 | 94.81±0.12 | 82.11±0.42 / <br>91.21±0.29 | - [NSMC](https://github.com/e9t/nsmc) (Naver Sentiment Movie Corpus) - [Naver NER](https://github.com/naver/nlp-challenge) (NER task on Naver NLP Challenge 2018) - [PAWS](https://github.com/google-research-datasets/paws) (Korean Paraphrase Adversaries from Word Scrambling) - [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets) (Korean Natural Language Understanding) - [Question Pair](https://github.com/songys/Question_pair) (Paired Question) - [KorQuAD](https://korquad.github.io/) (The Korean Question Answering Dataset) <img src = "https://user-images.githubusercontent.com/55969260/174229747-279122dc-9d27-4da9-a6e7-f9f1fe1651f7.png"> <br> ### User Contributed Examples - ## Reference - [KLUE BERT](https://github.com/KLUE-benchmark/KLUE) - [KcBERT](https://github.com/Beomi/KcBERT) - [SKT KoBERT](https://github.com/SKTBrain/KoBERT) - [DistilKoBERT](https://github.com/monologg/DistilKoBERT) - [lassl](https://github.com/lassl/lassl)
null
Non_BioNLP
# KoMiniLM 🐣 Korean mini language model ## Overview Current language models usually consist of hundreds of millions of parameters which brings challenges for fine-tuning and online serving in real-life applications due to latency and capacity constraints. In this project, we release a light weight korean language model to address the aforementioned shortcomings of existing language models. ## Quick tour ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("BM-K/KoMiniLM") # 23M model model = AutoModel.from_pretrained("BM-K/KoMiniLM") inputs = tokenizer("안녕 세상아!", return_tensors="pt") outputs = model(**inputs) ``` ## Update history ** Updates on 2022.06.20 ** - Release KoMiniLM-bert-68M ** Updates on 2022.05.24 ** - Release KoMiniLM-bert-23M ## Pre-training `Teacher Model`: [KLUE-BERT(base)](https://github.com/KLUE-benchmark/KLUE) ### Object Self-Attention Distribution and Self-Attention Value-Relation [[Wang et al., 2020]] were distilled from each discrete layer of the teacher model to the student model. Wang et al. distilled in the last layer of the transformer, but that was not the case in this project. ### Data sets |Data|News comments|News article| |:----:|:----:|:----:| |size|10G|10G| ### Config - **KoMiniLM-23M** ```json { "architectures": [ "BartForPreTraining" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 384, "initializer_range": 0.02, "intermediate_size": 1536, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bart", "num_attention_heads": 12, "num_hidden_layers": 6, "output_attentions": true, "pad_token_id": 0, "position_embedding_type": "absolute", "return_dict": false, "torch_dtype": "float32", "transformers_version": "4.13.0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 32000 } ``` ### Performance on subtasks - The results of our fine-tuning experiments are an average of 3 runs for each task. ``` cd KoMiniLM-Finetune bash scripts/run_all_kominilm.sh ``` || #Param | Average | NSMC<br>(Acc) | Naver NER<br>(F1) | PAWS<br>(Acc) | KorNLI<br>(Acc) | KorSTS<br>(Spearman) | Question Pair<br>(Acc) | KorQuaD<br>(Dev)<br>(EM/F1) | |:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:| |KoBERT(KLUE)| 110M | 86.84 | 90.20±0.07 | 87.11±0.05 | 81.36±0.21 | 81.06±0.33 | 82.47±0.14 | 95.03±0.44 | 84.43±0.18 / <br>93.05±0.04 | |KcBERT| 108M | 78.94 | 89.60±0.10 | 84.34±0.13 | 67.02±0.42| 74.17±0.52 | 76.57±0.51 | 93.97±0.27 | 60.87±0.27 / <br>85.01±0.14 | |KoBERT(SKT)| 92M | 79.73 | 89.28±0.42 | 87.54±0.04 | 80.93±0.91 | 78.18±0.45 | 75.98±2.81 | 94.37±0.31 | 51.94±0.60 / <br>79.69±0.66 | |DistilKoBERT| 28M | 74.73 | 88.39±0.08 | 84.22±0.01 | 61.74±0.45 | 70.22±0.14 | 72.11±0.27 | 92.65±0.16 | 52.52±0.48 / <br>76.00±0.71 | | | | | | | | | | | |**KoMiniLM<sup>†</sup>**| **68M** | 85.90 | 89.84±0.02 | 85.98±0.09 | 80.78±0.30 | 79.28±0.17 | 81.00±0.07 | 94.89±0.37 | 83.27±0.08 / <br>92.08±0.06 | |**KoMiniLM<sup>†</sup>**| **23M** | 84.79 | 89.67±0.03 | 84.79±0.09 | 78.67±0.45 | 78.10±0.07 | 78.90±0.11 | 94.81±0.12 | 82.11±0.42 / <br>91.21±0.29 | - [NSMC](https://github.com/e9t/nsmc) (Naver Sentiment Movie Corpus) - [Naver NER](https://github.com/naver/nlp-challenge) (NER task on Naver NLP Challenge 2018) - [PAWS](https://github.com/google-research-datasets/paws) (Korean Paraphrase Adversaries from Word Scrambling) - [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets) (Korean Natural Language Understanding) - [Question Pair](https://github.com/songys/Question_pair) (Paired Question) - [KorQuAD](https://korquad.github.io/) (The Korean Question Answering Dataset) <img src = "https://user-images.githubusercontent.com/55969260/174229747-279122dc-9d27-4da9-a6e7-f9f1fe1651f7.png"> <br> ### User Contributed Examples - ## Reference - [KLUE BERT](https://github.com/KLUE-benchmark/KLUE) - [KcBERT](https://github.com/Beomi/KcBERT) - [SKT KoBERT](https://github.com/SKTBrain/KoBERT) - [DistilKoBERT](https://github.com/monologg/DistilKoBERT) - [lassl](https://github.com/lassl/lassl)
{"datasets": ["c4"], "language": "ko", "license": "apache-2.0", "tags": ["korean", "klue", "summarization"]}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
42,975
michellehbn/brrrr
michellehbn
translation
[ "transformers", "pytorch", "tf", "jax", "rust", "t5", "text2text-generation", "summarization", "translation", "en", "fr", "ro", "de", "multilingual", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:1905.10044", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-01-31T13:55:18Z
2023-01-31T14:22:57+00:00
4
1
--- datasets: - c4 language: - en - fr - ro - de - multilingual license: apache-2.0 tags: - summarization - translation --- # Model Card for T5 Small ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Citation](#citation) 8. [Model Card Authors](#model-card-authors) 9. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html): > With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-Small is the checkpoint with 60 million parameters. - **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints) - **Model type:** Language model - **Language(s) (NLP):** English, French, Romanian, German - **License:** Apache 2.0 - **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5) - **Resources for more information:** - [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) - [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) - [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer) - [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5) # Uses ## Direct Use and Downstream Use The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model: > Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details. ## Out-of-Scope Use More information needed. # Bias, Risks, and Limitations More information needed. ## Recommendations More information needed. # Training Details ## Training Data The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5. The model was pre-trained on a on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**. Thereby, the following datasets were being used for (1.) and (2.): 1. **Datasets used for Unsupervised denoising objective**: - [C4](https://huggingface.co/datasets/c4) - [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr) 2. **Datasets used for Supervised text-to-text language modeling objective** - Sentence acceptability judgment - CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471) - Sentiment analysis - SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) - Paraphrasing/sentence similarity - MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002) - STS-B [Ceret al., 2017](https://arxiv.org/abs/1708.00055) - QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) - Natural language inference - MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426) - QNLI [Rajpurkar et al.,2016](https://arxiv.org/abs/1606.05250) - RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9) - CB [De Marneff et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf) - Sentence completion - COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning) - Word sense disambiguation - WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121) - Question answering - MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023) - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885) - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044) ## Training Procedure In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write: > In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details. # Evaluation ## Testing Data, Factors & Metrics The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details. ## Results For full results for T5-small, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Google Cloud TPU Pods - **Hours used:** More information needed - **Cloud Provider:** GCP - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Citation **BibTeX:** ```bibtex @article{2020t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {Journal of Machine Learning Research}, year = {2020}, volume = {21}, number = {140}, pages = {1-67}, url = {http://jmlr.org/papers/v21/20-074.html} } ``` **APA:** - Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67. # Model Card Authors This model card was written by the team at Hugging Face. # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import T5Tokenizer, T5Model tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5Model.from_pretrained("t5-small") input_ids = tokenizer( "Studies have been shown that owning a dog is good for you", return_tensors="pt" ).input_ids # Batch size 1 decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1 # forward pass outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) last_hidden_states = outputs.last_hidden_state ``` See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples. </details>
null
Non_BioNLP
# Model Card for T5 Small ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Citation](#citation) 8. [Model Card Authors](#model-card-authors) 9. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html): > With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-Small is the checkpoint with 60 million parameters. - **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints) - **Model type:** Language model - **Language(s) (NLP):** English, French, Romanian, German - **License:** Apache 2.0 - **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5) - **Resources for more information:** - [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) - [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) - [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer) - [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5) # Uses ## Direct Use and Downstream Use The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model: > Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details. ## Out-of-Scope Use More information needed. # Bias, Risks, and Limitations More information needed. ## Recommendations More information needed. # Training Details ## Training Data The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5. The model was pre-trained on a on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**. Thereby, the following datasets were being used for (1.) and (2.): 1. **Datasets used for Unsupervised denoising objective**: - [C4](https://huggingface.co/datasets/c4) - [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr) 2. **Datasets used for Supervised text-to-text language modeling objective** - Sentence acceptability judgment - CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471) - Sentiment analysis - SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) - Paraphrasing/sentence similarity - MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002) - STS-B [Ceret al., 2017](https://arxiv.org/abs/1708.00055) - QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) - Natural language inference - MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426) - QNLI [Rajpurkar et al.,2016](https://arxiv.org/abs/1606.05250) - RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9) - CB [De Marneff et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf) - Sentence completion - COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning) - Word sense disambiguation - WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121) - Question answering - MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023) - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885) - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044) ## Training Procedure In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write: > In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details. # Evaluation ## Testing Data, Factors & Metrics The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details. ## Results For full results for T5-small, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Google Cloud TPU Pods - **Hours used:** More information needed - **Cloud Provider:** GCP - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Citation **BibTeX:** ```bibtex @article{2020t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {Journal of Machine Learning Research}, year = {2020}, volume = {21}, number = {140}, pages = {1-67}, url = {http://jmlr.org/papers/v21/20-074.html} } ``` **APA:** - Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67. # Model Card Authors This model card was written by the team at Hugging Face. # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import T5Tokenizer, T5Model tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5Model.from_pretrained("t5-small") input_ids = tokenizer( "Studies have been shown that owning a dog is good for you", return_tensors="pt" ).input_ids # Batch size 1 decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1 # forward pass outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) last_hidden_states = outputs.last_hidden_state ``` See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples. </details>
{"datasets": ["c4"], "language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["summarization", "translation"]}
task
[ "QUESTION_ANSWERING", "TRANSLATION", "SUMMARIZATION", "PARAPHRASING" ]
42,976
ryfye181/hittite_saved_model
ryfye181
text2text-generation
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-03-13T02:58:09Z
2024-03-24T16:25:12+00:00
8
1
--- {} --- # Ancient Language Translation with Meta AI's No Language Left Behind Model (NLLB). **GitHub Link:** https://github.com/rfeinberg3/Hittite_English_Translation_w-NLLB ## Abstract Hittite is one of the oldest written languages, spoken by the ancient Hittites with records dating as far back as the 17th century B.C.E. in what is now modern-day Turkey. However, the language died out around the 13th century B.C.E and relatively few records of the language have been uncovered, making Hittite a low data language. The issue is training a language model to translate written Hittite into written English. A difficult task for two main reasons: as mentioned, there is a data scarcity when it comes to labeled Hittite to English translations. Possibly the bigger issue though is the lack of language models that support fine-tuning for new languages. ## Project Overview This project aims to bridge the gap between the ancient and the modern world by translating the Hittite language into English. At the core of this endeavor lies the utilization of cutting-edge Natural Language Processing (NLP) and machine learning techniques, leveraging a transformer-based model open to the community for advancements and contributions. ### Key Features - **Transformer-Based Model Translation:** Employs a state-of-the-art transformer-based model to understand and translate the Hittite language. - **Custom Supervised Dataset:** Through meticulous data scraping and the development of a dataset builder tool, this project has curated a specialized dataset. This dataset features pairs of English and Hittite translations, tailored to train the translation model effectively. - **Google Colab Integration:** The project is accessible via a Google Colab notebook for ease of use and accessibility. This notebook guides users through the process of tokenization, model fine-tuning, and evaluation, providing an interactive platform for exploring ancient Hittite translations. Hittite To English colab: [https://colab.research.google.com/drive/1fmJe9EuumIo-uwfW4Pp3hgyz3SviomaQ?usp=sharing](https://colab.research.google.com/drive/1fmJe9EuumIo-uwfW4Pp3hgyz3SviomaQ?usp=sharing) - **Performance Metrics:** To ensure the translation model's accuracy and reliability, comprehensive metrics are collected and analyzed. More details can be found in the report document HitToEng_Report.pdf. The implementation at nllb_hittite_to_english_finetune.ipynb. ## Usage **!Must run on a GPU! CPU usage is not supported!** **Load model and tokenizer from Huggingface:** - $ model_load_name = "ryfye181/hittite_saved_model" - $ model = AutoModelForSeq2SeqLM.from_pretrained(model_load_name).cuda() - $ tokenizer = NllbTokenizer.from_pretrained(model_load_name). Using the Model for translating is demonostrated in section 8 of the Google Colab notebook. ## Metrics ### Loss over Time During Training ![image](https://github.com/rfeinberg3/Hittite_English_Translation_w-NLLB/assets/95943957/b2101ba5-36f3-4d9a-a3bf-bad2a0d06471) ### CHRF2++ Score ![image](https://github.com/rfeinberg3/Hittite_English_Translation_w-NLLB/assets/95943957/1b3e6bdf-932d-4a8c-ab49-223bde3be381) **https://github.com/mjpost/sacrebleu#chrf--chrf** ## References Hittite Base Form Dictionary: * https://lrc.la.utexas.edu/eieol_base_form_dictionary/hitol/11 Hittite Lexicons: * https://www.assyrianlanguages.org/hittite/en_lexique_hittite.htm#l * https://hittitetexts.com/en (where we get eCMD from) No Language Left Behind GitHub: * https://github.com/facebookresearch/fairseq/tree/nllb * https://huggingface.co/facebook/nllb-200-1.3B <--- Their Model on HuggingFace: * https://github.com/facebookresearch/fairseq/blob/nllb/examples/nllb/modeling/README.md <--- Info on fine-tuning their model: NLLB New Language Fine-Tuning Original Example: * https://cointegrated.medium.com/how-to-fine-tune-a-nllb-200-model-for-translating-a-new-language-a37fc706b865 * https://colab.research.google.com/drive/1bayEaw2fz_9Mhg9jFFZhrmDlQlBj1YZf?usp=sharing <--- Their original colab
null
Non_BioNLP
# Ancient Language Translation with Meta AI's No Language Left Behind Model (NLLB). **GitHub Link:** https://github.com/rfeinberg3/Hittite_English_Translation_w-NLLB ## Abstract Hittite is one of the oldest written languages, spoken by the ancient Hittites with records dating as far back as the 17th century B.C.E. in what is now modern-day Turkey. However, the language died out around the 13th century B.C.E and relatively few records of the language have been uncovered, making Hittite a low data language. The issue is training a language model to translate written Hittite into written English. A difficult task for two main reasons: as mentioned, there is a data scarcity when it comes to labeled Hittite to English translations. Possibly the bigger issue though is the lack of language models that support fine-tuning for new languages. ## Project Overview This project aims to bridge the gap between the ancient and the modern world by translating the Hittite language into English. At the core of this endeavor lies the utilization of cutting-edge Natural Language Processing (NLP) and machine learning techniques, leveraging a transformer-based model open to the community for advancements and contributions. ### Key Features - **Transformer-Based Model Translation:** Employs a state-of-the-art transformer-based model to understand and translate the Hittite language. - **Custom Supervised Dataset:** Through meticulous data scraping and the development of a dataset builder tool, this project has curated a specialized dataset. This dataset features pairs of English and Hittite translations, tailored to train the translation model effectively. - **Google Colab Integration:** The project is accessible via a Google Colab notebook for ease of use and accessibility. This notebook guides users through the process of tokenization, model fine-tuning, and evaluation, providing an interactive platform for exploring ancient Hittite translations. Hittite To English colab: [https://colab.research.google.com/drive/1fmJe9EuumIo-uwfW4Pp3hgyz3SviomaQ?usp=sharing](https://colab.research.google.com/drive/1fmJe9EuumIo-uwfW4Pp3hgyz3SviomaQ?usp=sharing) - **Performance Metrics:** To ensure the translation model's accuracy and reliability, comprehensive metrics are collected and analyzed. More details can be found in the report document HitToEng_Report.pdf. The implementation at nllb_hittite_to_english_finetune.ipynb. ## Usage **!Must run on a GPU! CPU usage is not supported!** **Load model and tokenizer from Huggingface:** - $ model_load_name = "ryfye181/hittite_saved_model" - $ model = AutoModelForSeq2SeqLM.from_pretrained(model_load_name).cuda() - $ tokenizer = NllbTokenizer.from_pretrained(model_load_name). Using the Model for translating is demonostrated in section 8 of the Google Colab notebook. ## Metrics ### Loss over Time During Training ![image](https://github.com/rfeinberg3/Hittite_English_Translation_w-NLLB/assets/95943957/b2101ba5-36f3-4d9a-a3bf-bad2a0d06471) ### CHRF2++ Score ![image](https://github.com/rfeinberg3/Hittite_English_Translation_w-NLLB/assets/95943957/1b3e6bdf-932d-4a8c-ab49-223bde3be381) **https://github.com/mjpost/sacrebleu#chrf--chrf** ## References Hittite Base Form Dictionary: * https://lrc.la.utexas.edu/eieol_base_form_dictionary/hitol/11 Hittite Lexicons: * https://www.assyrianlanguages.org/hittite/en_lexique_hittite.htm#l * https://hittitetexts.com/en (where we get eCMD from) No Language Left Behind GitHub: * https://github.com/facebookresearch/fairseq/tree/nllb * https://huggingface.co/facebook/nllb-200-1.3B <--- Their Model on HuggingFace: * https://github.com/facebookresearch/fairseq/blob/nllb/examples/nllb/modeling/README.md <--- Info on fine-tuning their model: NLLB New Language Fine-Tuning Original Example: * https://cointegrated.medium.com/how-to-fine-tune-a-nllb-200-model-for-translating-a-new-language-a37fc706b865 * https://colab.research.google.com/drive/1bayEaw2fz_9Mhg9jFFZhrmDlQlBj1YZf?usp=sharing <--- Their original colab
{}
task
[ "TRANSLATION" ]
42,977
LocalDoc/mbart_large_qa_azerbaijan
LocalDoc
question-answering
[ "transformers", "safetensors", "mbart", "text2text-generation", "mBART Large for Question Answering in Azerbaijani", "question-answering", "az", "dataset:LocalDoc/LDQuAd", "doi:10.57967/hf/2542", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-06-13T13:03:09Z
2024-06-13T13:50:43+00:00
22
0
--- datasets: - LocalDoc/LDQuAd language: - az library_name: transformers license: cc-by-nc-4.0 pipeline_tag: question-answering tags: - mBART Large for Question Answering in Azerbaijani --- # mBART Large for Question Answering in Azerbaijani This repository contains a fine-tuned mBART Large model for question answering in Azerbaijani. The model was trained on a custom dataset and can be used to answer questions based on a given context. ## Model Details **Base Model:** `facebook/mbart-large-50` **Language:** Azerbaijani (az_AZ) ## Training Details The model was trained for 3 epochs with the following training and validation loss: | Epoch | Training Loss | Validation Loss | |-------|----------------|-----------------| | 1 | 0.015100 | 0.014827 | | 2 | 0.009800 | 0.013616 | | 3 | 0.004100 | 0.015270 | ## Usage To use this model, you need to install the `transformers` library from Hugging Face. You can then load the model and tokenizer and use them to answer questions based on a given context. ### Installation ```bash pip install transformers ``` ```python from transformers import MBartForConditionalGeneration, MBart50Tokenizer # Load the model and tokenizer model_name = "LocalDoc/mbart_large_qa_azerbaijan" tokenizer = MBart50Tokenizer.from_pretrained(model_name, src_lang="en_XX", tgt_lang="az_AZ") model = MBartForConditionalGeneration.from_pretrained(model_name) def answer_question(context, question): # Prepare input text input_text = f"context: {context} question: {question}" inputs = tokenizer(input_text, return_tensors="pt", max_length=512, truncation=True, padding="max_length") # Generate answer outputs = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_length=128, num_beams=5, early_stopping=True ) # Decode the answer answer = tokenizer.decode(outputs[0], skip_special_tokens=True) return answer # Example usage context = """Ey iman gətirənlər! Səbir edərək və namaz qılaraq Allahdan kömək istəyin! Şübhəsiz ki, Allah səbir edənlərlədir. Ey iman gətirənlər, əgər yalnız Allaha ibadət edirsinizsə, sizə verdiyimiz ruzilərin təmiz olanlarından yeyin və Ona şükür edin! Ey iman gətirənlər! Oruc tutmaq sizdən əvvəlkilərə fərz buyurulduğu kimi sizə də fərz buyuruldu ki, bəlkə, günahlardan çəkinəsiniz! Ey iman gətirənlər, bütövlüklə İslama daxil olun! Şeytanın addımlarını izləməyin! Şübhəsiz ki, o sizin açıq-aydın düşməninizdir. Ey iman gətirənlər! Həqiqətən, içki, qumar, bütlər və fal oxları şeytan əməli olan murdar şeylərdir. Bunlardan uzaq durun ki, bəlkə nicat tapasınız! Ey iman gətirənlər! “Allah yolunda döyüşə çıxın!” - deyildiyi zaman sizə nə oldu ki, yerə yapışıb qaldınız. Axirətin əvəzinə dünya həyatına razı oldunuz? Dünya malı axirət nemətlərinin yanında çox az bir şeydir. Həqiqətən, insan çox zalım və çox nankordur.""" question = "Allah kiminlədir ?" answer = answer_question(context, question) print(answer) ``` ## License This model licensed under the CC BY-NC-ND 4.0 license. What does this license allow? Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made. Non-Commercial: You may not use the material for commercial purposes. No Derivatives: If you remix, transform, or build upon the material, you may not distribute the modified material. For more information, please refer to the <a target="_blank" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND 4.0 license</a>. ## Citation ``` @misc {localdoc_2024, author = { {LocalDoc} }, title = { mbart_large_qa_azerbaijan (Revision 7484a07) }, year = 2024, url = { https://huggingface.co/LocalDoc/mbart_large_qa_azerbaijan }, doi = { 10.57967/hf/2541 }, publisher = { Hugging Face } } ``` ## Contact For more information, questions, or issues, please contact LocalDoc at [[email protected]].
null
Non_BioNLP
# mBART Large for Question Answering in Azerbaijani This repository contains a fine-tuned mBART Large model for question answering in Azerbaijani. The model was trained on a custom dataset and can be used to answer questions based on a given context. ## Model Details **Base Model:** `facebook/mbart-large-50` **Language:** Azerbaijani (az_AZ) ## Training Details The model was trained for 3 epochs with the following training and validation loss: | Epoch | Training Loss | Validation Loss | |-------|----------------|-----------------| | 1 | 0.015100 | 0.014827 | | 2 | 0.009800 | 0.013616 | | 3 | 0.004100 | 0.015270 | ## Usage To use this model, you need to install the `transformers` library from Hugging Face. You can then load the model and tokenizer and use them to answer questions based on a given context. ### Installation ```bash pip install transformers ``` ```python from transformers import MBartForConditionalGeneration, MBart50Tokenizer # Load the model and tokenizer model_name = "LocalDoc/mbart_large_qa_azerbaijan" tokenizer = MBart50Tokenizer.from_pretrained(model_name, src_lang="en_XX", tgt_lang="az_AZ") model = MBartForConditionalGeneration.from_pretrained(model_name) def answer_question(context, question): # Prepare input text input_text = f"context: {context} question: {question}" inputs = tokenizer(input_text, return_tensors="pt", max_length=512, truncation=True, padding="max_length") # Generate answer outputs = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_length=128, num_beams=5, early_stopping=True ) # Decode the answer answer = tokenizer.decode(outputs[0], skip_special_tokens=True) return answer # Example usage context = """Ey iman gətirənlər! Səbir edərək və namaz qılaraq Allahdan kömək istəyin! Şübhəsiz ki, Allah səbir edənlərlədir. Ey iman gətirənlər, əgər yalnız Allaha ibadət edirsinizsə, sizə verdiyimiz ruzilərin təmiz olanlarından yeyin və Ona şükür edin! Ey iman gətirənlər! Oruc tutmaq sizdən əvvəlkilərə fərz buyurulduğu kimi sizə də fərz buyuruldu ki, bəlkə, günahlardan çəkinəsiniz! Ey iman gətirənlər, bütövlüklə İslama daxil olun! Şeytanın addımlarını izləməyin! Şübhəsiz ki, o sizin açıq-aydın düşməninizdir. Ey iman gətirənlər! Həqiqətən, içki, qumar, bütlər və fal oxları şeytan əməli olan murdar şeylərdir. Bunlardan uzaq durun ki, bəlkə nicat tapasınız! Ey iman gətirənlər! “Allah yolunda döyüşə çıxın!” - deyildiyi zaman sizə nə oldu ki, yerə yapışıb qaldınız. Axirətin əvəzinə dünya həyatına razı oldunuz? Dünya malı axirət nemətlərinin yanında çox az bir şeydir. Həqiqətən, insan çox zalım və çox nankordur.""" question = "Allah kiminlədir ?" answer = answer_question(context, question) print(answer) ``` ## License This model licensed under the CC BY-NC-ND 4.0 license. What does this license allow? Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made. Non-Commercial: You may not use the material for commercial purposes. No Derivatives: If you remix, transform, or build upon the material, you may not distribute the modified material. For more information, please refer to the <a target="_blank" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND 4.0 license</a>. ## Citation ``` @misc {localdoc_2024, author = { {LocalDoc} }, title = { mbart_large_qa_azerbaijan (Revision 7484a07) }, year = 2024, url = { https://huggingface.co/LocalDoc/mbart_large_qa_azerbaijan }, doi = { 10.57967/hf/2541 }, publisher = { Hugging Face } } ``` ## Contact For more information, questions, or issues, please contact LocalDoc at [[email protected]].
{"datasets": ["LocalDoc/LDQuAd"], "language": ["az"], "library_name": "transformers", "license": "cc-by-nc-4.0", "pipeline_tag": "question-answering", "tags": ["mBART Large for Question Answering in Azerbaijani"]}
task
[ "QUESTION_ANSWERING" ]
42,978
wannaphong/Roman2Thai-transliterator
wannaphong
translation
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "dataset:thai2rom-v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-05-23T05:26:58Z
2022-06-26T15:02:19+00:00
135
0
--- datasets: - thai2rom-v2 license: apache-2.0 metrics: - cer tags: - translation widget: - text: maiphai --- # Roman-Thai Transliterator by Transformer Models GitHub: https://github.com/wannaphong/thai2rom-v2/tree/main/roman2thai-transformer
null
Non_BioNLP
# Roman-Thai Transliterator by Transformer Models GitHub: https://github.com/wannaphong/thai2rom-v2/tree/main/roman2thai-transformer
{"datasets": ["thai2rom-v2"], "license": "apache-2.0", "metrics": ["cer"], "tags": ["translation"], "widget": [{"text": "maiphai"}]}
task
[ "TRANSLATION" ]
42,979
Helsinki-NLP/opus-mt-dra-en
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ta", "kn", "ml", "te", "dra", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:28:33+00:00
76
1
--- language: - ta - kn - ml - te - dra - en license: apache-2.0 tags: - translation --- ### dra-eng * source group: Dravidian languages * target group: English * OPUS readme: [dra-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md) * model: transformer * source language(s): kan mal tam tel * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kan-eng.kan.eng | 9.1 | 0.312 | | Tatoeba-test.mal-eng.mal.eng | 42.0 | 0.584 | | Tatoeba-test.multi.eng | 30.0 | 0.493 | | Tatoeba-test.tam-eng.tam.eng | 30.2 | 0.467 | | Tatoeba-test.tel-eng.tel.eng | 15.9 | 0.378 | ### System Info: - hf_name: dra-eng - source_languages: dra - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ta', 'kn', 'ml', 'te', 'dra', 'en'] - src_constituents: {'tam', 'kan', 'mal', 'tel'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt - src_alpha3: dra - tgt_alpha3: eng - short_pair: dra-en - chrF2_score: 0.493 - bleu: 30.0 - brevity_penalty: 1.0 - ref_len: 10641.0 - src_name: Dravidian languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: dra - tgt_alpha2: en - prefer_old: False - long_pair: dra-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
null
Non_BioNLP
### dra-eng * source group: Dravidian languages * target group: English * OPUS readme: [dra-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md) * model: transformer * source language(s): kan mal tam tel * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kan-eng.kan.eng | 9.1 | 0.312 | | Tatoeba-test.mal-eng.mal.eng | 42.0 | 0.584 | | Tatoeba-test.multi.eng | 30.0 | 0.493 | | Tatoeba-test.tam-eng.tam.eng | 30.2 | 0.467 | | Tatoeba-test.tel-eng.tel.eng | 15.9 | 0.378 | ### System Info: - hf_name: dra-eng - source_languages: dra - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ta', 'kn', 'ml', 'te', 'dra', 'en'] - src_constituents: {'tam', 'kan', 'mal', 'tel'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt - src_alpha3: dra - tgt_alpha3: eng - short_pair: dra-en - chrF2_score: 0.493 - bleu: 30.0 - brevity_penalty: 1.0 - ref_len: 10641.0 - src_name: Dravidian languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: dra - tgt_alpha2: en - prefer_old: False - long_pair: dra-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ta", "kn", "ml", "te", "dra", "en"], "license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
42,980
nandavikas16/H2-keywordextractor-finetuned-scope-summarization
nandavikas16
text2text-generation
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:transformer3/H2-keywordextractor", "base_model:finetune:transformer3/H2-keywordextractor", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-03-09T19:54:21Z
2024-04-13T22:10:46+00:00
8
0
--- base_model: transformer3/H2-keywordextractor metrics: - rouge tags: - generated_from_trainer model-index: - name: H2-keywordextractor-finetuned-scope-summarization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # H2-keywordextractor-finetuned-scope-summarization This model is a fine-tuned version of [transformer3/H2-keywordextractor](https://huggingface.co/transformer3/H2-keywordextractor) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2073 - Rouge1: 13.0222 - Rouge2: 10.4851 - Rougel: 13.0872 - Rougelsum: 13.1095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 15 - eval_batch_size: 15 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 0.8852 | 1.0 | 23 | 0.3103 | 10.3278 | 6.2988 | 10.3528 | 10.3293 | | 0.2901 | 2.0 | 46 | 0.2825 | 10.8308 | 7.5214 | 10.8428 | 10.8103 | | 0.2625 | 3.0 | 69 | 0.2711 | 12.0182 | 8.6415 | 12.0115 | 12.0537 | | 0.2453 | 4.0 | 92 | 0.2550 | 12.9535 | 9.6936 | 12.9952 | 13.0384 | | 0.2353 | 5.0 | 115 | 0.2464 | 11.2808 | 7.8603 | 11.3196 | 11.281 | | 0.2338 | 6.0 | 138 | 0.2389 | 12.6604 | 9.6355 | 12.6519 | 12.6377 | | 0.2183 | 7.0 | 161 | 0.2307 | 13.2591 | 10.6628 | 13.2399 | 13.2554 | | 0.2143 | 8.0 | 184 | 0.2252 | 13.537 | 11.1632 | 13.5668 | 13.5957 | | 0.2055 | 9.0 | 207 | 0.2206 | 13.7032 | 11.6575 | 13.7226 | 13.774 | | 0.2022 | 10.0 | 230 | 0.2158 | 13.7727 | 11.5365 | 13.7404 | 13.8018 | | 0.1961 | 11.0 | 253 | 0.2166 | 13.4062 | 11.2919 | 13.4698 | 13.4854 | | 0.2018 | 12.0 | 276 | 0.2116 | 13.8406 | 11.852 | 13.8309 | 13.8995 | | 0.1946 | 13.0 | 299 | 0.2131 | 12.5757 | 9.5775 | 12.5738 | 12.6535 | | 0.1943 | 14.0 | 322 | 0.2142 | 11.617 | 9.0291 | 11.5311 | 11.7201 | | 0.2068 | 15.0 | 345 | 0.2080 | 12.9136 | 10.2865 | 12.9659 | 12.9787 | | 0.2051 | 16.0 | 368 | 0.2041 | 13.6492 | 11.6388 | 13.6506 | 13.7041 | | 0.1887 | 17.0 | 391 | 0.2119 | 11.4317 | 8.2482 | 11.386 | 11.4313 | | 0.1886 | 18.0 | 414 | 0.2097 | 13.0287 | 10.6547 | 13.0829 | 13.118 | | 0.1887 | 19.0 | 437 | 0.2079 | 13.0073 | 10.5381 | 13.0514 | 13.1089 | | 0.186 | 20.0 | 460 | 0.2073 | 13.0222 | 10.4851 | 13.0872 | 13.1095 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # H2-keywordextractor-finetuned-scope-summarization This model is a fine-tuned version of [transformer3/H2-keywordextractor](https://huggingface.co/transformer3/H2-keywordextractor) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2073 - Rouge1: 13.0222 - Rouge2: 10.4851 - Rougel: 13.0872 - Rougelsum: 13.1095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 15 - eval_batch_size: 15 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 0.8852 | 1.0 | 23 | 0.3103 | 10.3278 | 6.2988 | 10.3528 | 10.3293 | | 0.2901 | 2.0 | 46 | 0.2825 | 10.8308 | 7.5214 | 10.8428 | 10.8103 | | 0.2625 | 3.0 | 69 | 0.2711 | 12.0182 | 8.6415 | 12.0115 | 12.0537 | | 0.2453 | 4.0 | 92 | 0.2550 | 12.9535 | 9.6936 | 12.9952 | 13.0384 | | 0.2353 | 5.0 | 115 | 0.2464 | 11.2808 | 7.8603 | 11.3196 | 11.281 | | 0.2338 | 6.0 | 138 | 0.2389 | 12.6604 | 9.6355 | 12.6519 | 12.6377 | | 0.2183 | 7.0 | 161 | 0.2307 | 13.2591 | 10.6628 | 13.2399 | 13.2554 | | 0.2143 | 8.0 | 184 | 0.2252 | 13.537 | 11.1632 | 13.5668 | 13.5957 | | 0.2055 | 9.0 | 207 | 0.2206 | 13.7032 | 11.6575 | 13.7226 | 13.774 | | 0.2022 | 10.0 | 230 | 0.2158 | 13.7727 | 11.5365 | 13.7404 | 13.8018 | | 0.1961 | 11.0 | 253 | 0.2166 | 13.4062 | 11.2919 | 13.4698 | 13.4854 | | 0.2018 | 12.0 | 276 | 0.2116 | 13.8406 | 11.852 | 13.8309 | 13.8995 | | 0.1946 | 13.0 | 299 | 0.2131 | 12.5757 | 9.5775 | 12.5738 | 12.6535 | | 0.1943 | 14.0 | 322 | 0.2142 | 11.617 | 9.0291 | 11.5311 | 11.7201 | | 0.2068 | 15.0 | 345 | 0.2080 | 12.9136 | 10.2865 | 12.9659 | 12.9787 | | 0.2051 | 16.0 | 368 | 0.2041 | 13.6492 | 11.6388 | 13.6506 | 13.7041 | | 0.1887 | 17.0 | 391 | 0.2119 | 11.4317 | 8.2482 | 11.386 | 11.4313 | | 0.1886 | 18.0 | 414 | 0.2097 | 13.0287 | 10.6547 | 13.0829 | 13.118 | | 0.1887 | 19.0 | 437 | 0.2079 | 13.0073 | 10.5381 | 13.0514 | 13.1089 | | 0.186 | 20.0 | 460 | 0.2073 | 13.0222 | 10.4851 | 13.0872 | 13.1095 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"base_model": "transformer3/H2-keywordextractor", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "H2-keywordextractor-finetuned-scope-summarization", "results": []}]}
task
[ "SUMMARIZATION" ]
42,981
gaudi/opus-mt-bem-fi-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-17T15:19:25Z
2024-10-18T23:14:56+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-bem-fi) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-bem-fi).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-bem-fi --output_dir ./ctranslate2/opus-mt-bem-fi-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-bem-fi-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-bem-fi-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-bem-fi-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-bem-fi) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-bem-fi) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-bem-fi).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-bem-fi --output_dir ./ctranslate2/opus-mt-bem-fi-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-bem-fi-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-bem-fi-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-bem-fi-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-bem-fi) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
42,982
jondoe888/distillation_test
jondoe888
text-classification
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "autotrain", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-02-13T02:35:30Z
2025-04-08T18:13:12+00:00
9
0
--- base_model: FacebookAI/roberta-base library_name: transformers tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.5938536524772644 f1_macro: 0.7222515953648029 f1_micro: 0.773 f1_weighted: 0.7597654727348124 precision_macro: 0.7867675237321506 precision_micro: 0.773 precision_weighted: 0.7868723492084404 recall_macro: 0.7150820510852743 recall_micro: 0.773 recall_weighted: 0.773 accuracy: 0.773
null
Non_BioNLP
# Model Trained Using AutoTrain ## Validation Metrics loss: 0.5938536524772644 f1_macro: 0.7222515953648029 f1_micro: 0.773 f1_weighted: 0.7597654727348124 precision_macro: 0.7867675237321506 precision_micro: 0.773 precision_weighted: 0.7868723492084404 recall_macro: 0.7150820510852743 recall_micro: 0.773 recall_weighted: 0.773 accuracy: 0.773
{"library_name": "transformers", "tags": ["autotrain"], "widget": [{"text": "I love AutoTrain"}]}
task
[ "TEXT_CLASSIFICATION" ]
42,983
Aimlab/xlm-roberta-base-finetuned-urdu
Aimlab
text-classification
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "ur", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-07-22T15:55:47Z
2022-07-25T07:58:10+00:00
78
2
--- language: ur license: afl-3.0 --- # XLM-RoBERTa-Urdu-Classification This [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) text classification model trained on Urdu sentiment [data-set](https://huggingface.co/datasets/hassan4830/urdu-binary-classification-data) performs binary sentiment classification on any given Urdu sentence. The model has been fine-tuned for better results in manageable time frames. ## Model description XLM-RoBERTa is a scaled cross-lingual sentence encoder. It is trained on 2.5T of data across 100 languages data filtered from Common Crawl. XLM-R achieves state-of-the-arts results on multiple cross-lingual benchmarks. The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. It is based on Facebook’s RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. ### How to use You can import this model directly from the transformers library: ```python >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("Aimlab/xlm-roberta-base-finetuned-urdu") >>> model = AutoModelForSequenceClassification.from_pretrained("Aimlab/xlm-roberta-base-finetuned-urdu", id2label = {0: 'negative', 1: 'positive'}) ``` Here is how to use this model to get the label of a given text: ```python >>> from transformers import TextClassificationPipeline >>> text = "وہ ایک برا شخص ہے" >>> pipe = TextClassificationPipeline(model = model, tokenizer = tokenizer, top_k = 2, device = 0) >>> pipe(text) [{'label': 'negative', 'score': 0.9987003803253174}, {'label': 'positive', 'score': 0.001299630501307547}] ```
null
Non_BioNLP
# XLM-RoBERTa-Urdu-Classification This [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) text classification model trained on Urdu sentiment [data-set](https://huggingface.co/datasets/hassan4830/urdu-binary-classification-data) performs binary sentiment classification on any given Urdu sentence. The model has been fine-tuned for better results in manageable time frames. ## Model description XLM-RoBERTa is a scaled cross-lingual sentence encoder. It is trained on 2.5T of data across 100 languages data filtered from Common Crawl. XLM-R achieves state-of-the-arts results on multiple cross-lingual benchmarks. The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. It is based on Facebook’s RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. ### How to use You can import this model directly from the transformers library: ```python >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("Aimlab/xlm-roberta-base-finetuned-urdu") >>> model = AutoModelForSequenceClassification.from_pretrained("Aimlab/xlm-roberta-base-finetuned-urdu", id2label = {0: 'negative', 1: 'positive'}) ``` Here is how to use this model to get the label of a given text: ```python >>> from transformers import TextClassificationPipeline >>> text = "وہ ایک برا شخص ہے" >>> pipe = TextClassificationPipeline(model = model, tokenizer = tokenizer, top_k = 2, device = 0) >>> pipe(text) [{'label': 'negative', 'score': 0.9987003803253174}, {'label': 'positive', 'score': 0.001299630501307547}] ```
{"language": "ur", "license": "afl-3.0"}
task
[ "TEXT_CLASSIFICATION" ]
42,984
gokuls/hbertv1-tiny-wt-48-Massive-intent
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "dataset:massive", "base_model:gokuls/model_v1_complete_training_wt_init_48_tiny", "base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_tiny", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-21T18:16:50Z
2023-07-21T18:19:49+00:00
10
0
--- base_model: gokuls/model_v1_complete_training_wt_init_48_tiny datasets: - massive metrics: - accuracy tags: - generated_from_trainer model-index: - name: hbertv1-tiny-wt-48-Massive-intent results: - task: type: text-classification name: Text Classification dataset: name: massive type: massive config: en-US split: validation args: en-US metrics: - type: accuracy value: 0.7722577471716675 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv1-tiny-wt-48-Massive-intent This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_tiny](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_tiny) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.8676 - Accuracy: 0.7723 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.7161 | 1.0 | 180 | 3.1936 | 0.2499 | | 2.8544 | 2.0 | 360 | 2.3660 | 0.4058 | | 2.2122 | 3.0 | 540 | 1.8566 | 0.5430 | | 1.7979 | 4.0 | 720 | 1.5269 | 0.6370 | | 1.5083 | 5.0 | 900 | 1.3016 | 0.6911 | | 1.3044 | 6.0 | 1080 | 1.1672 | 0.7098 | | 1.1652 | 7.0 | 1260 | 1.0709 | 0.7270 | | 1.0703 | 8.0 | 1440 | 1.0045 | 0.7432 | | 0.996 | 9.0 | 1620 | 0.9595 | 0.7511 | | 0.9323 | 10.0 | 1800 | 0.9276 | 0.7550 | | 0.8832 | 11.0 | 1980 | 0.9183 | 0.7565 | | 0.8521 | 12.0 | 2160 | 0.8953 | 0.7649 | | 0.8246 | 13.0 | 2340 | 0.8829 | 0.7649 | | 0.8072 | 14.0 | 2520 | 0.8676 | 0.7723 | | 0.7947 | 15.0 | 2700 | 0.8657 | 0.7708 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.1 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv1-tiny-wt-48-Massive-intent This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_tiny](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_tiny) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.8676 - Accuracy: 0.7723 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.7161 | 1.0 | 180 | 3.1936 | 0.2499 | | 2.8544 | 2.0 | 360 | 2.3660 | 0.4058 | | 2.2122 | 3.0 | 540 | 1.8566 | 0.5430 | | 1.7979 | 4.0 | 720 | 1.5269 | 0.6370 | | 1.5083 | 5.0 | 900 | 1.3016 | 0.6911 | | 1.3044 | 6.0 | 1080 | 1.1672 | 0.7098 | | 1.1652 | 7.0 | 1260 | 1.0709 | 0.7270 | | 1.0703 | 8.0 | 1440 | 1.0045 | 0.7432 | | 0.996 | 9.0 | 1620 | 0.9595 | 0.7511 | | 0.9323 | 10.0 | 1800 | 0.9276 | 0.7550 | | 0.8832 | 11.0 | 1980 | 0.9183 | 0.7565 | | 0.8521 | 12.0 | 2160 | 0.8953 | 0.7649 | | 0.8246 | 13.0 | 2340 | 0.8829 | 0.7649 | | 0.8072 | 14.0 | 2520 | 0.8676 | 0.7723 | | 0.7947 | 15.0 | 2700 | 0.8657 | 0.7708 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.1 - Tokenizers 0.13.3
{"base_model": "gokuls/model_v1_complete_training_wt_init_48_tiny", "datasets": ["massive"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hbertv1-tiny-wt-48-Massive-intent", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "massive", "type": "massive", "config": "en-US", "split": "validation", "args": "en-US"}, "metrics": [{"type": "accuracy", "value": 0.7722577471716675, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,985
hiba2/results
hiba2
text2text-generation
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:malmarjeh/t5-arabic-text-summarization", "base_model:finetune:malmarjeh/t5-arabic-text-summarization", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-03-04T21:43:19Z
2024-03-17T00:44:32+00:00
10
0
--- base_model: malmarjeh/t5-arabic-text-summarization metrics: - rouge tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [malmarjeh/t5-arabic-text-summarization](https://huggingface.co/malmarjeh/t5-arabic-text-summarization) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0104 - Rouge1: 0.1382 - Rouge2: 0.0187 - Rougel: 0.1382 - Rougelsum: 0.1382 - Gen Len: 18.9404 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0338 | 0.23 | 500 | 0.0175 | 0.1514 | 0.0297 | 0.1511 | 0.1518 | 18.9188 | | 0.0566 | 0.46 | 1000 | 0.0161 | 0.1565 | 0.0388 | 0.157 | 0.1573 | 18.9188 | | 0.0418 | 0.7 | 1500 | 0.0125 | 0.1372 | 0.0199 | 0.1375 | 0.1379 | 18.8105 | | 0.0333 | 0.93 | 2000 | 0.0116 | 0.1443 | 0.0253 | 0.1448 | 0.1448 | 18.8051 | | 0.0287 | 1.16 | 2500 | 0.0110 | 0.144 | 0.0192 | 0.1442 | 0.1442 | 19.0 | | 0.0247 | 1.39 | 3000 | 0.0096 | 0.1511 | 0.024 | 0.1517 | 0.1518 | 19.0 | | 0.0219 | 1.62 | 3500 | 0.0087 | 0.1463 | 0.0241 | 0.1462 | 0.1462 | 18.9747 | | 0.021 | 1.86 | 4000 | 0.0104 | 0.1382 | 0.0187 | 0.1382 | 0.1382 | 18.9404 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [malmarjeh/t5-arabic-text-summarization](https://huggingface.co/malmarjeh/t5-arabic-text-summarization) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0104 - Rouge1: 0.1382 - Rouge2: 0.0187 - Rougel: 0.1382 - Rougelsum: 0.1382 - Gen Len: 18.9404 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0338 | 0.23 | 500 | 0.0175 | 0.1514 | 0.0297 | 0.1511 | 0.1518 | 18.9188 | | 0.0566 | 0.46 | 1000 | 0.0161 | 0.1565 | 0.0388 | 0.157 | 0.1573 | 18.9188 | | 0.0418 | 0.7 | 1500 | 0.0125 | 0.1372 | 0.0199 | 0.1375 | 0.1379 | 18.8105 | | 0.0333 | 0.93 | 2000 | 0.0116 | 0.1443 | 0.0253 | 0.1448 | 0.1448 | 18.8051 | | 0.0287 | 1.16 | 2500 | 0.0110 | 0.144 | 0.0192 | 0.1442 | 0.1442 | 19.0 | | 0.0247 | 1.39 | 3000 | 0.0096 | 0.1511 | 0.024 | 0.1517 | 0.1518 | 19.0 | | 0.0219 | 1.62 | 3500 | 0.0087 | 0.1463 | 0.0241 | 0.1462 | 0.1462 | 18.9747 | | 0.021 | 1.86 | 4000 | 0.0104 | 0.1382 | 0.0187 | 0.1382 | 0.1382 | 18.9404 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"base_model": "malmarjeh/t5-arabic-text-summarization", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "results", "results": []}]}
task
[ "SUMMARIZATION" ]
42,986
MaLA-LM/lucky52-bloom-7b1-no-16
MaLA-LM
text-generation
[ "transformers", "pytorch", "bloom", "text-generation", "generation", "question answering", "instruction tuning", "multilingual", "dataset:MBZUAI/Bactrian-X", "arxiv:2404.04850", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-04T09:05:45Z
2025-04-08T17:05:13+00:00
8
0
--- datasets: - MBZUAI/Bactrian-X language: - multilingual library_name: transformers license: cc-by-nc-4.0 pipeline_tag: text-generation tags: - generation - question answering - instruction tuning --- ### Model Description This HF repository hosts instruction fine-tuned multilingual BLOOM model using the parallel instruction dataset called Bactrain-X in 52 languages. We progressively add a language during instruction fine-tuning at each time, and train 52 models in total. Then, we evaluate those models in three multilingual benchmarks. Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details. * Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1) * Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew * Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he * Training method: full-parameter fine-tuning. ### Usage The model checkpoint should be loaded using `transformers` library. ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-16") model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-16") ``` ### Citation ``` @inproceedings{ji2025lucky52, title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM}, author={Shaoxiong Ji and Pinzhen Chen}, year={2025}, booktitle={Proceedings of COLING}, url={https://arxiv.org/abs/2404.04850}, } ```
null
Non_BioNLP
### Model Description This HF repository hosts instruction fine-tuned multilingual BLOOM model using the parallel instruction dataset called Bactrain-X in 52 languages. We progressively add a language during instruction fine-tuning at each time, and train 52 models in total. Then, we evaluate those models in three multilingual benchmarks. Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details. * Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1) * Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew * Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he * Training method: full-parameter fine-tuning. ### Usage The model checkpoint should be loaded using `transformers` library. ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-16") model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-16") ``` ### Citation ``` @inproceedings{ji2025lucky52, title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM}, author={Shaoxiong Ji and Pinzhen Chen}, year={2025}, booktitle={Proceedings of COLING}, url={https://arxiv.org/abs/2404.04850}, } ```
{"datasets": ["MBZUAI/Bactrian-X"], "language": ["multilingual"], "library_name": "transformers", "license": "cc-by-nc-4.0", "pipeline_tag": "text-generation", "tags": ["generation", "question answering", "instruction tuning"]}
task
[ "QUESTION_ANSWERING" ]
42,987
minuva/MiniLMv2-goemotions-v2-onnx
minuva
text-classification
[ "transformers", "onnx", "roberta", "text-classification", "emotions", "multi-class-classification", "multi-label-classification", "int8", "emotion", "ONNXRuntime", "en", "dataset:go_emotions", "license:apache-2.0", "model-index", "autotrain_compatible", "region:us" ]
2023-12-14T14:16:16Z
2024-04-24T11:13:31+00:00
21
2
--- datasets: - go_emotions language: - en library_name: transformers license: apache-2.0 tags: - emotions - multi-class-classification - multi-label-classification - onnx - int8 - emotion - ONNXRuntime inference: false model-index: - name: text-classification-goemotions results: - task: type: text-classification name: Text Classification dataset: name: go_emotions type: multilabel_classification config: simplified split: test args: simplified metrics: - type: f1 value: 0.482 name: F1 --- # Text Classification GoEmotions This a ONNX quantized model and is fined-tuned version of [MiniLMv2-L6-H384](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) on the on the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset using [tasinho/text-classification-goemotions](https://huggingface.co/tasinhoque/text-classification-goemotions) as teacher model. The original model can be found [here](https://huggingface.co/minuva/MiniLMv2-goemotions-v2) # Optimum ## Installation Install from source: ```bash python -m pip install optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git ``` ## Run the Model ```py from optimum.onnxruntime import ORTModelForSequenceClassification from transformers import AutoTokenizer, pipeline model = ORTModelForSequenceClassification.from_pretrained('minuva/MiniLMv2-goemotions-v2-onnx', provider="CPUExecutionProvider") tokenizer = AutoTokenizer.from_pretrained('minuva/MiniLMv2-goemotions-v2-onnx', use_fast=True, model_max_length=256, truncation=True, padding='max_length') pipe = pipeline(task='text-classification', model=model, tokenizer=tokenizer, ) texts = ["that's wrong", "can you please answer me?"] pipe(texts) # [{'label': 'anger', 'score': 0.9727636575698853}, # {'label': 'love', 'score': 0.9874765276908875}] ``` # ONNX Runtime only A lighter solution for deployment ## Installation ```bash pip install tokenizers pip install onnxruntime git clone https://huggingface.co/minuva/MiniLMv2-goemotions-v2-onnx ``` ## Run the Model ```py import os import numpy as np import json from tokenizers import Tokenizer from onnxruntime import InferenceSession model_name = "minuva/MiniLMv2-goemotions-v2-onnx" tokenizer = Tokenizer.from_pretrained(model_name) tokenizer.enable_padding( pad_token="<pad>", pad_id=1, ) tokenizer.enable_truncation(max_length=256) batch_size = 16 texts = ["I am angry", "I feel in love"] outputs = [] model = InferenceSession("MiniLMv2-goemotions-v2-onnx/model_optimized_quantized.onnx", providers=['CUDAExecutionProvider']) with open(os.path.join("MiniLMv2-goemotions-v2-onnx", "config.json"), "r") as f: config = json.load(f) output_names = [output.name for output in model.get_outputs()] input_names = [input.name for input in model.get_inputs()] for subtexts in np.array_split(np.array(texts), len(texts) // batch_size + 1): encodings = tokenizer.encode_batch(list(subtexts)) inputs = { "input_ids": np.vstack( [encoding.ids for encoding in encodings], ), "attention_mask": np.vstack( [encoding.attention_mask for encoding in encodings], ), "token_type_ids": np.vstack( [encoding.type_ids for encoding in encodings], ), } for input_name in input_names: if input_name not in inputs: raise ValueError(f"Input name {input_name} not found in inputs") inputs = {input_name: inputs[input_name] for input_name in input_names} output = np.squeeze( np.stack( model.run(output_names=output_names, input_feed=inputs) ), axis=0, ) outputs.append(output) outputs = np.concatenate(outputs, axis=0) scores = 1 / (1 + np.exp(-outputs)) results = [] for item in scores: labels = [] scores = [] for idx, s in enumerate(item): labels.append(config["id2label"][str(idx)]) scores.append(float(s)) results.append({"labels": labels, "scores": scores}) res = [] for result in results: joined = list(zip(result['labels'], result['scores'])) max_score = max(joined, key=lambda x: x[1]) res.append(max_score) res # [('anger', 0.9745745062828064), ('love', 0.9884329438209534)] ``` # Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear # Metrics (comparison with teacher model) | Teacher (params) | Student (params) | Set | Score (teacher) | Score (student) | |--------------------|-------------|----------|--------| --------| | tasinhoque/text-classification-goemotions (355M) | MiniLMv2-goemotions-v2-onnx (30M) | Validation | 0.514252 | 0.4780 | | tasinhoque/text-classification-goemotions (335M) | MiniLMv2-goemotions-v2-onnx (30M) | Test | 0.501937 | 0.482 | # Deployment Check out our [fast-nlp-text-emotion repository](https://github.com/minuva/fast-nlp-text-emotion) for a FastAPI based server to easily deploy this model on CPU devices.
null
Non_BioNLP
# Text Classification GoEmotions This a ONNX quantized model and is fined-tuned version of [MiniLMv2-L6-H384](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) on the on the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset using [tasinho/text-classification-goemotions](https://huggingface.co/tasinhoque/text-classification-goemotions) as teacher model. The original model can be found [here](https://huggingface.co/minuva/MiniLMv2-goemotions-v2) # Optimum ## Installation Install from source: ```bash python -m pip install optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git ``` ## Run the Model ```py from optimum.onnxruntime import ORTModelForSequenceClassification from transformers import AutoTokenizer, pipeline model = ORTModelForSequenceClassification.from_pretrained('minuva/MiniLMv2-goemotions-v2-onnx', provider="CPUExecutionProvider") tokenizer = AutoTokenizer.from_pretrained('minuva/MiniLMv2-goemotions-v2-onnx', use_fast=True, model_max_length=256, truncation=True, padding='max_length') pipe = pipeline(task='text-classification', model=model, tokenizer=tokenizer, ) texts = ["that's wrong", "can you please answer me?"] pipe(texts) # [{'label': 'anger', 'score': 0.9727636575698853}, # {'label': 'love', 'score': 0.9874765276908875}] ``` # ONNX Runtime only A lighter solution for deployment ## Installation ```bash pip install tokenizers pip install onnxruntime git clone https://huggingface.co/minuva/MiniLMv2-goemotions-v2-onnx ``` ## Run the Model ```py import os import numpy as np import json from tokenizers import Tokenizer from onnxruntime import InferenceSession model_name = "minuva/MiniLMv2-goemotions-v2-onnx" tokenizer = Tokenizer.from_pretrained(model_name) tokenizer.enable_padding( pad_token="<pad>", pad_id=1, ) tokenizer.enable_truncation(max_length=256) batch_size = 16 texts = ["I am angry", "I feel in love"] outputs = [] model = InferenceSession("MiniLMv2-goemotions-v2-onnx/model_optimized_quantized.onnx", providers=['CUDAExecutionProvider']) with open(os.path.join("MiniLMv2-goemotions-v2-onnx", "config.json"), "r") as f: config = json.load(f) output_names = [output.name for output in model.get_outputs()] input_names = [input.name for input in model.get_inputs()] for subtexts in np.array_split(np.array(texts), len(texts) // batch_size + 1): encodings = tokenizer.encode_batch(list(subtexts)) inputs = { "input_ids": np.vstack( [encoding.ids for encoding in encodings], ), "attention_mask": np.vstack( [encoding.attention_mask for encoding in encodings], ), "token_type_ids": np.vstack( [encoding.type_ids for encoding in encodings], ), } for input_name in input_names: if input_name not in inputs: raise ValueError(f"Input name {input_name} not found in inputs") inputs = {input_name: inputs[input_name] for input_name in input_names} output = np.squeeze( np.stack( model.run(output_names=output_names, input_feed=inputs) ), axis=0, ) outputs.append(output) outputs = np.concatenate(outputs, axis=0) scores = 1 / (1 + np.exp(-outputs)) results = [] for item in scores: labels = [] scores = [] for idx, s in enumerate(item): labels.append(config["id2label"][str(idx)]) scores.append(float(s)) results.append({"labels": labels, "scores": scores}) res = [] for result in results: joined = list(zip(result['labels'], result['scores'])) max_score = max(joined, key=lambda x: x[1]) res.append(max_score) res # [('anger', 0.9745745062828064), ('love', 0.9884329438209534)] ``` # Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear # Metrics (comparison with teacher model) | Teacher (params) | Student (params) | Set | Score (teacher) | Score (student) | |--------------------|-------------|----------|--------| --------| | tasinhoque/text-classification-goemotions (355M) | MiniLMv2-goemotions-v2-onnx (30M) | Validation | 0.514252 | 0.4780 | | tasinhoque/text-classification-goemotions (335M) | MiniLMv2-goemotions-v2-onnx (30M) | Test | 0.501937 | 0.482 | # Deployment Check out our [fast-nlp-text-emotion repository](https://github.com/minuva/fast-nlp-text-emotion) for a FastAPI based server to easily deploy this model on CPU devices.
{"datasets": ["go_emotions"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "tags": ["emotions", "multi-class-classification", "multi-label-classification", "onnx", "int8", "emotion", "ONNXRuntime"], "inference": false, "model-index": [{"name": "text-classification-goemotions", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "go_emotions", "type": "multilabel_classification", "config": "simplified", "split": "test", "args": "simplified"}, "metrics": [{"type": "f1", "value": 0.482, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,988
jilangdi/all-mpnet-base-v2-sts
jilangdi
sentence-similarity
[ "sentence-transformers", "safetensors", "mpnet", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5749", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-06-11T06:50:30Z
2024-06-11T06:52:39+00:00
9
1
--- base_model: sentence-transformers/all-mpnet-base-v2 datasets: [] language: [] library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:5749 - loss:CosineSimilarityLoss widget: - source_sentence: A chef is preparing some food. sentences: - Five birds stand on the snow. - A chef prepared a meal. - There is no 'still' that is not relative to some other object. - source_sentence: A woman is adding oil on fishes. sentences: - Large cruise ship floating on the water. - It refers to the maximum f-stop (which is defined as the ratio of focal length to effective aperture diameter). - The woman is cutting potatoes. - source_sentence: The player shoots the winning points. sentences: - Minimum wage laws hurt the least skilled, least productive the most. - The basketball player is about to score points for his team. - Three televisions, on on the floor, the other two on a box. - source_sentence: Stars form in star-formation regions, which itself develop from molecular clouds. sentences: - Although I believe Searle is mistaken, I don't think you have found the problem. - It may be possible for a solar system like ours to exist outside of a galaxy. - A blond-haired child performing on the trumpet in front of a house while his younger brother watches. - source_sentence: While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign. sentences: - At first, I thought this is a bit of a tricky question. - A man plays the guitar. - There is a very good reason not to refer to the Queen's spouse as "King" - because they aren't the King. co2_eq_emissions: emissions: 93.43541854818658 energy_consumed: 0.1749782643795525 source: codecarbon training_type: fine-tuning on_cloud: false cpu_model: AMD EPYC 7H12 64-Core Processor ram_total_size: 229.14864349365234 hours_used: 0.143 hardware_used: 8 x NVIDIA GeForce RTX 3090 model-index: - name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts mpnet dev type: sts-mpnet-dev metrics: - type: pearson_cosine value: 0.9082003751911006 name: Pearson Cosine - type: spearman_cosine value: 0.9068573348364105 name: Spearman Cosine - type: pearson_manhattan value: 0.8991806648044287 name: Pearson Manhattan - type: spearman_manhattan value: 0.905893937799947 name: Spearman Manhattan - type: pearson_euclidean value: 0.8998094542860591 name: Pearson Euclidean - type: spearman_euclidean value: 0.9068573340303147 name: Spearman Euclidean - type: pearson_dot value: 0.9082003757684081 name: Pearson Dot - type: spearman_dot value: 0.9068573340303147 name: Spearman Dot - type: pearson_max value: 0.9082003757684081 name: Pearson Max - type: spearman_max value: 0.9068573348364105 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts mpnet test type: sts-mpnet-test metrics: - type: pearson_cosine value: 0.8755366875955196 name: Pearson Cosine - type: spearman_cosine value: 0.8736528954082163 name: Spearman Cosine - type: pearson_manhattan value: 0.8667069486775929 name: Pearson Manhattan - type: spearman_manhattan value: 0.872611925172738 name: Spearman Manhattan - type: pearson_euclidean value: 0.8673923632809148 name: Pearson Euclidean - type: spearman_euclidean value: 0.8736530053628437 name: Spearman Euclidean - type: pearson_dot value: 0.8755366834981875 name: Pearson Dot - type: spearman_dot value: 0.8736528954082163 name: Spearman Dot - type: pearson_max value: 0.8755366875955196 name: Pearson Max - type: spearman_max value: 0.8736530053628437 name: Spearman Max --- # SentenceTransformer based on sentence-transformers/all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 84f2bcc00d77236f9e89c8a360a00fb1139bf47d --> - **Maximum Sequence Length:** 384 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("jilangdi/all-mpnet-base-v2-sts") # Run inference sentences = [ 'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.', 'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.', 'A man plays the guitar.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-mpnet-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.9082 | | **spearman_cosine** | **0.9069** | | pearson_manhattan | 0.8992 | | spearman_manhattan | 0.9059 | | pearson_euclidean | 0.8998 | | spearman_euclidean | 0.9069 | | pearson_dot | 0.9082 | | spearman_dot | 0.9069 | | pearson_max | 0.9082 | | spearman_max | 0.9069 | #### Semantic Similarity * Dataset: `sts-mpnet-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8755 | | **spearman_cosine** | **0.8737** | | pearson_manhattan | 0.8667 | | spearman_manhattan | 0.8726 | | pearson_euclidean | 0.8674 | | spearman_euclidean | 0.8737 | | pearson_dot | 0.8755 | | spearman_dot | 0.8737 | | pearson_max | 0.8755 | | spearman_max | 0.8737 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------| | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> | | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> | | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 5 tokens</li><li>mean: 15.1 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.11 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------|:------------------------------------------------------|:------------------| | <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> | | <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> | | <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | sts-mpnet-dev_spearman_cosine | sts-mpnet-test_spearman_cosine | |:------:|:----:|:-------------:|:------:|:-----------------------------:|:------------------------------:| | 2.2222 | 100 | 0.015 | 0.0182 | 0.9033 | - | | 4.4444 | 200 | 0.0051 | 0.0181 | 0.9069 | - | | 5.0 | 225 | - | - | - | 0.8737 | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.175 kWh - **Carbon Emitted**: 0.093 kg of CO2 - **Hours Used**: 0.143 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 8 x NVIDIA GeForce RTX 3090 - **CPU Model**: AMD EPYC 7H12 64-Core Processor - **RAM Size**: 229.15 GB ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.3.1+cu121 - Accelerate: 0.31.0 - Datasets: 2.19.2 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 84f2bcc00d77236f9e89c8a360a00fb1139bf47d --> - **Maximum Sequence Length:** 384 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("jilangdi/all-mpnet-base-v2-sts") # Run inference sentences = [ 'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.', 'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.', 'A man plays the guitar.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-mpnet-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.9082 | | **spearman_cosine** | **0.9069** | | pearson_manhattan | 0.8992 | | spearman_manhattan | 0.9059 | | pearson_euclidean | 0.8998 | | spearman_euclidean | 0.9069 | | pearson_dot | 0.9082 | | spearman_dot | 0.9069 | | pearson_max | 0.9082 | | spearman_max | 0.9069 | #### Semantic Similarity * Dataset: `sts-mpnet-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8755 | | **spearman_cosine** | **0.8737** | | pearson_manhattan | 0.8667 | | spearman_manhattan | 0.8726 | | pearson_euclidean | 0.8674 | | spearman_euclidean | 0.8737 | | pearson_dot | 0.8755 | | spearman_dot | 0.8737 | | pearson_max | 0.8755 | | spearman_max | 0.8737 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------| | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> | | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> | | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 5 tokens</li><li>mean: 15.1 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.11 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------|:------------------------------------------------------|:------------------| | <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> | | <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> | | <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | sts-mpnet-dev_spearman_cosine | sts-mpnet-test_spearman_cosine | |:------:|:----:|:-------------:|:------:|:-----------------------------:|:------------------------------:| | 2.2222 | 100 | 0.015 | 0.0182 | 0.9033 | - | | 4.4444 | 200 | 0.0051 | 0.0181 | 0.9069 | - | | 5.0 | 225 | - | - | - | 0.8737 | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.175 kWh - **Carbon Emitted**: 0.093 kg of CO2 - **Hours Used**: 0.143 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 8 x NVIDIA GeForce RTX 3090 - **CPU Model**: AMD EPYC 7H12 64-Core Processor - **RAM Size**: 229.15 GB ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.3.1+cu121 - Accelerate: 0.31.0 - Datasets: 2.19.2 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/all-mpnet-base-v2", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5749", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "A chef is preparing some food.", "sentences": ["Five birds stand on the snow.", "A chef prepared a meal.", "There is no 'still' that is not relative to some other object."]}, {"source_sentence": "A woman is adding oil on fishes.", "sentences": ["Large cruise ship floating on the water.", "It refers to the maximum f-stop (which is defined as the ratio of focal length to effective aperture diameter).", "The woman is cutting potatoes."]}, {"source_sentence": "The player shoots the winning points.", "sentences": ["Minimum wage laws hurt the least skilled, least productive the most.", "The basketball player is about to score points for his team.", "Three televisions, on on the floor, the other two on a box."]}, {"source_sentence": "Stars form in star-formation regions, which itself develop from molecular clouds.", "sentences": ["Although I believe Searle is mistaken, I don't think you have found the problem.", "It may be possible for a solar system like ours to exist outside of a galaxy.", "A blond-haired child performing on the trumpet in front of a house while his younger brother watches."]}, {"source_sentence": "While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.", "sentences": ["At first, I thought this is a bit of a tricky question.", "A man plays the guitar.", "There is a very good reason not to refer to the Queen's spouse as \"King\" - because they aren't the King."]}], "co2_eq_emissions": {"emissions": 93.43541854818658, "energy_consumed": 0.1749782643795525, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "AMD EPYC 7H12 64-Core Processor", "ram_total_size": 229.14864349365234, "hours_used": 0.143, "hardware_used": "8 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "SentenceTransformer based on sentence-transformers/all-mpnet-base-v2", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts mpnet dev", "type": "sts-mpnet-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.9082003751911006, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.9068573348364105, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.8991806648044287, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.905893937799947, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.8998094542860591, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.9068573340303147, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.9082003757684081, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.9068573340303147, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.9082003757684081, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.9068573348364105, "name": "Spearman Max"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts mpnet test", "type": "sts-mpnet-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.8755366875955196, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8736528954082163, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.8667069486775929, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.872611925172738, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.8673923632809148, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.8736530053628437, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.8755366834981875, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.8736528954082163, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.8755366875955196, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.8736530053628437, "name": "Spearman Max"}]}]}]}
task
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY" ]
42,989
haryoaw/scenario-TCR-XLMV_data-cardiffnlp_tweet_sentiment_multilingual_all_delta2
haryoaw
text-classification
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:tweet_sentiment_multilingual", "base_model:facebook/xlm-v-base", "base_model:finetune:facebook/xlm-v-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-04-02T19:32:30Z
2024-04-02T19:34:46+00:00
9
0
--- base_model: facebook/xlm-v-base datasets: - tweet_sentiment_multilingual license: mit metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: scenario-TCR-XLMV_data-cardiffnlp_tweet_sentiment_multilingual_all_delta2 results: - task: type: text-classification name: Text Classification dataset: name: tweet_sentiment_multilingual type: tweet_sentiment_multilingual config: all split: validation args: all metrics: - type: accuracy value: 0.3333333333333333 name: Accuracy - type: f1 value: 0.16666666666666666 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # scenario-TCR-XLMV_data-cardiffnlp_tweet_sentiment_multilingual_all_delta2 This model is a fine-tuned version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) on the tweet_sentiment_multilingual dataset. It achieves the following results on the evaluation set: - Loss: 1.0992 - Accuracy: 0.3333 - F1: 0.1667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1008 | 1.09 | 500 | 1.0991 | 0.3333 | 0.1667 | | 1.0996 | 2.17 | 1000 | 1.1007 | 0.3333 | 0.1667 | | 1.0996 | 3.26 | 1500 | 1.0995 | 0.3333 | 0.1667 | | 1.0999 | 4.35 | 2000 | 1.0987 | 0.3333 | 0.1667 | | 1.0997 | 5.43 | 2500 | 1.0990 | 0.3333 | 0.1667 | | 1.0994 | 6.52 | 3000 | 1.0992 | 0.3333 | 0.1667 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.1.1+cu121 - Datasets 2.14.5 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # scenario-TCR-XLMV_data-cardiffnlp_tweet_sentiment_multilingual_all_delta2 This model is a fine-tuned version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) on the tweet_sentiment_multilingual dataset. It achieves the following results on the evaluation set: - Loss: 1.0992 - Accuracy: 0.3333 - F1: 0.1667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1008 | 1.09 | 500 | 1.0991 | 0.3333 | 0.1667 | | 1.0996 | 2.17 | 1000 | 1.1007 | 0.3333 | 0.1667 | | 1.0996 | 3.26 | 1500 | 1.0995 | 0.3333 | 0.1667 | | 1.0999 | 4.35 | 2000 | 1.0987 | 0.3333 | 0.1667 | | 1.0997 | 5.43 | 2500 | 1.0990 | 0.3333 | 0.1667 | | 1.0994 | 6.52 | 3000 | 1.0992 | 0.3333 | 0.1667 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.1.1+cu121 - Datasets 2.14.5 - Tokenizers 0.13.3
{"base_model": "facebook/xlm-v-base", "datasets": ["tweet_sentiment_multilingual"], "license": "mit", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "scenario-TCR-XLMV_data-cardiffnlp_tweet_sentiment_multilingual_all_delta2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_sentiment_multilingual", "type": "tweet_sentiment_multilingual", "config": "all", "split": "validation", "args": "all"}, "metrics": [{"type": "accuracy", "value": 0.3333333333333333, "name": "Accuracy"}, {"type": "f1", "value": 0.16666666666666666, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,990
NW-temp/previous-best-my-awesome-setfit-model
NW-temp
text-classification
[ "sentence-transformers", "pytorch", "distilbert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-07-28T19:32:08Z
2023-12-26T21:00:22+00:00
28
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # NW-temp/my-awesome-setfit-model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("NW-temp/my-awesome-setfit-model") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# NW-temp/my-awesome-setfit-model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("NW-temp/my-awesome-setfit-model") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
42,991
elsayovita/bge-small-en-v1.5-esg
elsayovita
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:11863", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-small-en-v1.5", "base_model:finetune:BAAI/bge-small-en-v1.5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-09-03T10:31:09Z
2024-09-03T10:31:24+00:00
7
0
--- base_model: BAAI/bge-small-en-v1.5 datasets: [] language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:11863 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: In the fiscal year 2022, the emissions were categorized into different scopes, with each scope representing a specific source of emissions sentences: - 'Question: What is NetLink proactive in identifying to be more efficient in? ' - What standard is the Environment, Health, and Safety Management System (EHSMS) audited to by a third-party accredited certification body at the operational assets level of CLI? - What do the different scopes represent in terms of emissions in the fiscal year 2022? - source_sentence: NetLink is committed to protecting the security of all information and information systems, including both end-user data and corporate data. To this end, management ensures that the appropriate IT policies, personal data protection policy, risk mitigation strategies, cyber security programmes, systems, processes, and controls are in place to protect our IT systems and confidential data sentences: - '"What recognition did NetLink receive in FY22?"' - What measures does NetLink have in place to protect the security of all information and information systems, including end-user data and corporate data? - 'Question: What does Disclosure 102-10 discuss regarding the organization and its supply chain?' - source_sentence: In the domain of economic performance, the focus is on the financial health and growth of the organization, ensuring sustainable profitability and value creation for stakeholders sentences: - What does NetLink prioritize by investing in its network to ensure reliability and quality of infrastructure? - What percentage of the total energy was accounted for by heat, steam, and chilled water in 2021 according to the given information? - What is the focus in the domain of economic performance, ensuring sustainable profitability and value creation for stakeholders? - source_sentence: Disclosure 102-41 discusses collective bargaining agreements and is found on page 98 sentences: - What topic is discussed in Disclosure 102-41 on page 98 of the document? - What was the number of cases in 2021, following a decrease from 42 cases in 2020? - What type of data does GRI 101 provide in relation to connecting the nation? - source_sentence: Employee health and well-being has never been more topical than it was in the past year. We understand that people around the world, including our employees, have been increasingly exposed to factors affecting their physical and mental wellbeing. We are committed to creating an environment that supports our employees and ensures they feel valued and have a sense of belonging. We utilised sentences: - What aspect of the standard covers the evaluation of the management approach? - 'Question: What is the company''s commitment towards its employees'' health and well-being based on the provided context information?' - What types of skills does NetLink focus on developing through their training and development opportunities for employees? model-index: - name: BAAI BGE small en v1.5 ESG results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 384 type: dim_384 metrics: - type: cosine_accuracy@1 value: 0.7661637022675546 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9170530220011801 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9370311051167496 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9542274298238219 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7661637022675546 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.30568434066706 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.18740622102334994 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09542274298238222 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.021282325062987634 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.025473695055588344 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.026028641808798603 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.026506317495106176 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.19177581579273692 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.843606136995247 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.023463069757038203 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.7621175082188316 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9118266880215797 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9353451909297816 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9527944027648992 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7621175082188316 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3039422293405265 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.18706903818595635 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09527944027648994 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.02116993078385644 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.025328519111710558 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.025981810859160608 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.026466511187913874 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.19114210787645763 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8402866254821924 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.023374206451884923 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.7469442805361207 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.898423670235185 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9232066087836129 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9444491275394082 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7469442805361207 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2994745567450616 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1846413217567226 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09444491275394083 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.020748452237114468 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.02495621306208848 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.025644628021767035 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.02623469798720579 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.1883811701569402 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8264706590720244 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.02300099952981619 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.7106128298069628 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8668970749388856 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8978336002697462 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9243867487144904 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7106128298069628 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.28896569164629515 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17956672005394925 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09243867487144905 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.01973924527241564 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.02408047430385794 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.02493982222971518 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.02567740968651363 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.1818069773338387 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7936283816963235 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.022106633007589808 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 32 type: dim_32 metrics: - type: cosine_accuracy@1 value: 0.6166231138835033 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.7788923543791622 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8194385905757396 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8608277838658013 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6166231138835033 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.259630784793054 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.16388771811514793 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08608277838658013 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.017128419830097316 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.02163589873275451 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.022762183071548335 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.02391188288516115 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.16371507022328244 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7058398528705336 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.019714839230632157 name: Cosine Map@100 --- # BAAI BGE small en v1.5 ESG This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("elsayovita/bge-small-en-v1.5-esg") # Run inference sentences = [ 'Employee health and well-being has never been more topical than it was in the past year. We understand that people around the world, including our employees, have been increasingly exposed to factors affecting their physical and mental wellbeing. We are committed to creating an environment that supports our employees and ensures they feel valued and have a sense of belonging. We utilised', "Question: What is the company's commitment towards its employees' health and well-being based on the provided context information?", 'What types of skills does NetLink focus on developing through their training and development opportunities for employees?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_384` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7662 | | cosine_accuracy@3 | 0.9171 | | cosine_accuracy@5 | 0.937 | | cosine_accuracy@10 | 0.9542 | | cosine_precision@1 | 0.7662 | | cosine_precision@3 | 0.3057 | | cosine_precision@5 | 0.1874 | | cosine_precision@10 | 0.0954 | | cosine_recall@1 | 0.0213 | | cosine_recall@3 | 0.0255 | | cosine_recall@5 | 0.026 | | cosine_recall@10 | 0.0265 | | cosine_ndcg@10 | 0.1918 | | cosine_mrr@10 | 0.8436 | | **cosine_map@100** | **0.0235** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7621 | | cosine_accuracy@3 | 0.9118 | | cosine_accuracy@5 | 0.9353 | | cosine_accuracy@10 | 0.9528 | | cosine_precision@1 | 0.7621 | | cosine_precision@3 | 0.3039 | | cosine_precision@5 | 0.1871 | | cosine_precision@10 | 0.0953 | | cosine_recall@1 | 0.0212 | | cosine_recall@3 | 0.0253 | | cosine_recall@5 | 0.026 | | cosine_recall@10 | 0.0265 | | cosine_ndcg@10 | 0.1911 | | cosine_mrr@10 | 0.8403 | | **cosine_map@100** | **0.0234** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.7469 | | cosine_accuracy@3 | 0.8984 | | cosine_accuracy@5 | 0.9232 | | cosine_accuracy@10 | 0.9444 | | cosine_precision@1 | 0.7469 | | cosine_precision@3 | 0.2995 | | cosine_precision@5 | 0.1846 | | cosine_precision@10 | 0.0944 | | cosine_recall@1 | 0.0207 | | cosine_recall@3 | 0.025 | | cosine_recall@5 | 0.0256 | | cosine_recall@10 | 0.0262 | | cosine_ndcg@10 | 0.1884 | | cosine_mrr@10 | 0.8265 | | **cosine_map@100** | **0.023** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7106 | | cosine_accuracy@3 | 0.8669 | | cosine_accuracy@5 | 0.8978 | | cosine_accuracy@10 | 0.9244 | | cosine_precision@1 | 0.7106 | | cosine_precision@3 | 0.289 | | cosine_precision@5 | 0.1796 | | cosine_precision@10 | 0.0924 | | cosine_recall@1 | 0.0197 | | cosine_recall@3 | 0.0241 | | cosine_recall@5 | 0.0249 | | cosine_recall@10 | 0.0257 | | cosine_ndcg@10 | 0.1818 | | cosine_mrr@10 | 0.7936 | | **cosine_map@100** | **0.0221** | #### Information Retrieval * Dataset: `dim_32` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6166 | | cosine_accuracy@3 | 0.7789 | | cosine_accuracy@5 | 0.8194 | | cosine_accuracy@10 | 0.8608 | | cosine_precision@1 | 0.6166 | | cosine_precision@3 | 0.2596 | | cosine_precision@5 | 0.1639 | | cosine_precision@10 | 0.0861 | | cosine_recall@1 | 0.0171 | | cosine_recall@3 | 0.0216 | | cosine_recall@5 | 0.0228 | | cosine_recall@10 | 0.0239 | | cosine_ndcg@10 | 0.1637 | | cosine_mrr@10 | 0.7058 | | **cosine_map@100** | **0.0197** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 11,863 training samples * Columns: <code>context</code> and <code>question</code> * Approximate statistics based on the first 1000 samples: | | context | question | |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 40.74 tokens</li><li>max: 277 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 24.4 tokens</li><li>max: 62 tokens</li></ul> | * Samples: | context | question | |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>The engagement with key stakeholders involves various topics and methods throughout the year</code> | <code>Question: What does the engagement with key stakeholders involve throughout the year?</code> | | <code>For unitholders and analysts, the focus is on business and operations, the release of financial results, and the overall performance and announcements</code> | <code>Question: What is the focus for unitholders and analysts in terms of business and operations, financial results, performance, and announcements?</code> | | <code>These are communicated through press releases and other required disclosures via SGXNet and NetLink's website</code> | <code>What platform is used to communicate press releases and required disclosures for NetLink?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 2 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_32_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 | |:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|:---------------------:| | 0.4313 | 10 | 4.3426 | - | - | - | - | - | | 0.8625 | 20 | 2.7083 | - | - | - | - | - | | 1.0350 | 24 | - | 0.0229 | 0.0233 | 0.0195 | 0.0234 | 0.0220 | | 1.2264 | 30 | 2.6835 | - | - | - | - | - | | 1.6577 | 40 | 2.1702 | - | - | - | - | - | | **1.9164** | **46** | **-** | **0.023** | **0.0234** | **0.0197** | **0.0235** | **0.0221** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.42.4 - PyTorch: 2.4.0+cu121 - Accelerate: 0.32.1 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# BAAI BGE small en v1.5 ESG This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("elsayovita/bge-small-en-v1.5-esg") # Run inference sentences = [ 'Employee health and well-being has never been more topical than it was in the past year. We understand that people around the world, including our employees, have been increasingly exposed to factors affecting their physical and mental wellbeing. We are committed to creating an environment that supports our employees and ensures they feel valued and have a sense of belonging. We utilised', "Question: What is the company's commitment towards its employees' health and well-being based on the provided context information?", 'What types of skills does NetLink focus on developing through their training and development opportunities for employees?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_384` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7662 | | cosine_accuracy@3 | 0.9171 | | cosine_accuracy@5 | 0.937 | | cosine_accuracy@10 | 0.9542 | | cosine_precision@1 | 0.7662 | | cosine_precision@3 | 0.3057 | | cosine_precision@5 | 0.1874 | | cosine_precision@10 | 0.0954 | | cosine_recall@1 | 0.0213 | | cosine_recall@3 | 0.0255 | | cosine_recall@5 | 0.026 | | cosine_recall@10 | 0.0265 | | cosine_ndcg@10 | 0.1918 | | cosine_mrr@10 | 0.8436 | | **cosine_map@100** | **0.0235** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7621 | | cosine_accuracy@3 | 0.9118 | | cosine_accuracy@5 | 0.9353 | | cosine_accuracy@10 | 0.9528 | | cosine_precision@1 | 0.7621 | | cosine_precision@3 | 0.3039 | | cosine_precision@5 | 0.1871 | | cosine_precision@10 | 0.0953 | | cosine_recall@1 | 0.0212 | | cosine_recall@3 | 0.0253 | | cosine_recall@5 | 0.026 | | cosine_recall@10 | 0.0265 | | cosine_ndcg@10 | 0.1911 | | cosine_mrr@10 | 0.8403 | | **cosine_map@100** | **0.0234** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.7469 | | cosine_accuracy@3 | 0.8984 | | cosine_accuracy@5 | 0.9232 | | cosine_accuracy@10 | 0.9444 | | cosine_precision@1 | 0.7469 | | cosine_precision@3 | 0.2995 | | cosine_precision@5 | 0.1846 | | cosine_precision@10 | 0.0944 | | cosine_recall@1 | 0.0207 | | cosine_recall@3 | 0.025 | | cosine_recall@5 | 0.0256 | | cosine_recall@10 | 0.0262 | | cosine_ndcg@10 | 0.1884 | | cosine_mrr@10 | 0.8265 | | **cosine_map@100** | **0.023** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7106 | | cosine_accuracy@3 | 0.8669 | | cosine_accuracy@5 | 0.8978 | | cosine_accuracy@10 | 0.9244 | | cosine_precision@1 | 0.7106 | | cosine_precision@3 | 0.289 | | cosine_precision@5 | 0.1796 | | cosine_precision@10 | 0.0924 | | cosine_recall@1 | 0.0197 | | cosine_recall@3 | 0.0241 | | cosine_recall@5 | 0.0249 | | cosine_recall@10 | 0.0257 | | cosine_ndcg@10 | 0.1818 | | cosine_mrr@10 | 0.7936 | | **cosine_map@100** | **0.0221** | #### Information Retrieval * Dataset: `dim_32` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6166 | | cosine_accuracy@3 | 0.7789 | | cosine_accuracy@5 | 0.8194 | | cosine_accuracy@10 | 0.8608 | | cosine_precision@1 | 0.6166 | | cosine_precision@3 | 0.2596 | | cosine_precision@5 | 0.1639 | | cosine_precision@10 | 0.0861 | | cosine_recall@1 | 0.0171 | | cosine_recall@3 | 0.0216 | | cosine_recall@5 | 0.0228 | | cosine_recall@10 | 0.0239 | | cosine_ndcg@10 | 0.1637 | | cosine_mrr@10 | 0.7058 | | **cosine_map@100** | **0.0197** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 11,863 training samples * Columns: <code>context</code> and <code>question</code> * Approximate statistics based on the first 1000 samples: | | context | question | |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 40.74 tokens</li><li>max: 277 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 24.4 tokens</li><li>max: 62 tokens</li></ul> | * Samples: | context | question | |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>The engagement with key stakeholders involves various topics and methods throughout the year</code> | <code>Question: What does the engagement with key stakeholders involve throughout the year?</code> | | <code>For unitholders and analysts, the focus is on business and operations, the release of financial results, and the overall performance and announcements</code> | <code>Question: What is the focus for unitholders and analysts in terms of business and operations, financial results, performance, and announcements?</code> | | <code>These are communicated through press releases and other required disclosures via SGXNet and NetLink's website</code> | <code>What platform is used to communicate press releases and required disclosures for NetLink?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 2 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_32_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 | |:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|:---------------------:| | 0.4313 | 10 | 4.3426 | - | - | - | - | - | | 0.8625 | 20 | 2.7083 | - | - | - | - | - | | 1.0350 | 24 | - | 0.0229 | 0.0233 | 0.0195 | 0.0234 | 0.0220 | | 1.2264 | 30 | 2.6835 | - | - | - | - | - | | 1.6577 | 40 | 2.1702 | - | - | - | - | - | | **1.9164** | **46** | **-** | **0.023** | **0.0234** | **0.0197** | **0.0235** | **0.0221** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.42.4 - PyTorch: 2.4.0+cu121 - Accelerate: 0.32.1 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "BAAI/bge-small-en-v1.5", "datasets": [], "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:11863", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "In the fiscal year 2022, the emissions were categorized into different scopes, with each scope representing a specific source of emissions", "sentences": ["Question: What is NetLink proactive in identifying to be more efficient in? ", "What standard is the Environment, Health, and Safety Management System (EHSMS) audited to by a third-party accredited certification body at the operational assets level of CLI?", "What do the different scopes represent in terms of emissions in the fiscal year 2022?"]}, {"source_sentence": "NetLink is committed to protecting the security of all information and information systems, including both end-user data and corporate data. To this end, management ensures that the appropriate IT policies, personal data protection policy, risk mitigation strategies, cyber security programmes, systems, processes, and controls are in place to protect our IT systems and confidential data", "sentences": ["\"What recognition did NetLink receive in FY22?\"", "What measures does NetLink have in place to protect the security of all information and information systems, including end-user data and corporate data?", "Question: What does Disclosure 102-10 discuss regarding the organization and its supply chain?"]}, {"source_sentence": "In the domain of economic performance, the focus is on the financial health and growth of the organization, ensuring sustainable profitability and value creation for stakeholders", "sentences": ["What does NetLink prioritize by investing in its network to ensure reliability and quality of infrastructure?", "What percentage of the total energy was accounted for by heat, steam, and chilled water in 2021 according to the given information?", "What is the focus in the domain of economic performance, ensuring sustainable profitability and value creation for stakeholders?"]}, {"source_sentence": "Disclosure 102-41 discusses collective bargaining agreements and is found on page 98", "sentences": ["What topic is discussed in Disclosure 102-41 on page 98 of the document?", "What was the number of cases in 2021, following a decrease from 42 cases in 2020?", "What type of data does GRI 101 provide in relation to connecting the nation?"]}, {"source_sentence": "Employee health and well-being has never been more topical than it was in the past year. We understand that people around the world, including our employees, have been increasingly exposed to factors affecting their physical and mental wellbeing. We are committed to creating an environment that supports our employees and ensures they feel valued and have a sense of belonging. We utilised", "sentences": ["What aspect of the standard covers the evaluation of the management approach?", "Question: What is the company's commitment towards its employees' health and well-being based on the provided context information?", "What types of skills does NetLink focus on developing through their training and development opportunities for employees?"]}], "model-index": [{"name": "BAAI BGE small en v1.5 ESG", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 384", "type": "dim_384"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.7661637022675546, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9170530220011801, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9370311051167496, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9542274298238219, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.7661637022675546, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.30568434066706, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.18740622102334994, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09542274298238222, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.021282325062987634, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.025473695055588344, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.026028641808798603, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.026506317495106176, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.19177581579273692, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.843606136995247, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.023463069757038203, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.7621175082188316, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9118266880215797, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9353451909297816, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9527944027648992, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.7621175082188316, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.3039422293405265, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.18706903818595635, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09527944027648994, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.02116993078385644, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.025328519111710558, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.025981810859160608, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.026466511187913874, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.19114210787645763, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.8402866254821924, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.023374206451884923, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.7469442805361207, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.898423670235185, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9232066087836129, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9444491275394082, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.7469442805361207, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2994745567450616, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1846413217567226, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09444491275394083, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.020748452237114468, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.02495621306208848, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.025644628021767035, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.02623469798720579, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.1883811701569402, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.8264706590720244, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.02300099952981619, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.7106128298069628, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8668970749388856, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8978336002697462, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9243867487144904, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.7106128298069628, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.28896569164629515, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17956672005394925, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09243867487144905, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.01973924527241564, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.02408047430385794, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.02493982222971518, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.02567740968651363, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.1818069773338387, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7936283816963235, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.022106633007589808, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 32", "type": "dim_32"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6166231138835033, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7788923543791622, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8194385905757396, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8608277838658013, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6166231138835033, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.259630784793054, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.16388771811514793, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08608277838658013, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.017128419830097316, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.02163589873275451, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.022762183071548335, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.02391188288516115, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.16371507022328244, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7058398528705336, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.019714839230632157, "name": "Cosine Map@100"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,992
airKlizz/mt5-base-wikinewssum-italian
airKlizz
summarization
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-12-29T10:55:47+00:00
30
0
--- license: apache-2.0 metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: mt5-base-wikinewssum-italian results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-wikinewssum-italian This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 10.5739 - Rouge1: 2.1728 - Rouge2: 0.1516 - Rougel: 2.0846 - Rougelsum: 2.0515 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 8 | 16.6193 | 2.4011 | 0.3829 | 2.1505 | 2.2161 | | No log | 2.0 | 16 | 15.8909 | 2.5165 | 0.2799 | 2.3403 | 2.3523 | | No log | 3.0 | 24 | 15.4843 | 2.2794 | 0.2252 | 2.1849 | 2.1382 | | 17.2559 | 4.0 | 32 | 13.0850 | 2.2448 | 0.1516 | 2.1426 | 2.0859 | | 17.2559 | 5.0 | 40 | 11.7838 | 2.2448 | 0.1516 | 2.1426 | 2.0859 | | 17.2559 | 6.0 | 48 | 11.3207 | 2.2424 | 0.1516 | 2.1423 | 2.1171 | | 17.2559 | 7.0 | 56 | 10.7871 | 2.1081 | 0.1516 | 2.0227 | 1.9838 | | 14.6026 | 8.0 | 64 | 10.5739 | 2.1728 | 0.1516 | 2.0846 | 2.0515 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.1 - Datasets 1.16.1 - Tokenizers 0.10.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-wikinewssum-italian This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 10.5739 - Rouge1: 2.1728 - Rouge2: 0.1516 - Rougel: 2.0846 - Rougelsum: 2.0515 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 8 | 16.6193 | 2.4011 | 0.3829 | 2.1505 | 2.2161 | | No log | 2.0 | 16 | 15.8909 | 2.5165 | 0.2799 | 2.3403 | 2.3523 | | No log | 3.0 | 24 | 15.4843 | 2.2794 | 0.2252 | 2.1849 | 2.1382 | | 17.2559 | 4.0 | 32 | 13.0850 | 2.2448 | 0.1516 | 2.1426 | 2.0859 | | 17.2559 | 5.0 | 40 | 11.7838 | 2.2448 | 0.1516 | 2.1426 | 2.0859 | | 17.2559 | 6.0 | 48 | 11.3207 | 2.2424 | 0.1516 | 2.1423 | 2.1171 | | 17.2559 | 7.0 | 56 | 10.7871 | 2.1081 | 0.1516 | 2.0227 | 1.9838 | | 14.6026 | 8.0 | 64 | 10.5739 | 2.1728 | 0.1516 | 2.0846 | 2.0515 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-base-wikinewssum-italian", "results": []}]}
task
[ "SUMMARIZATION" ]
42,993
VMware/minilmv2-l6-h768-from-roberta-large-mrqa
VMware
question-answering
[ "transformers", "pytorch", "safetensors", "roberta", "question-answering", "en", "dataset:mrqa", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
2023-02-17T21:45:22Z
2025-02-06T22:52:50+00:00
29
0
--- datasets: - mrqa language: - en license: apache-2.0 metrics: - exact_match - f1 model-index: - name: VMware/minilmv2-l6-h768-from-roberta-large-mrqa results: - task: type: Question-Answering dataset: name: MRQA type: mrqa metrics: - type: exact_match value: 65.8 name: Eval EM - type: f1 value: 77.17 name: Eval F1 - type: exact_match value: 51.72 name: Test EM - type: f1 value: 63.27 name: Test F1 --- This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** MiniLMv2-L6-H768-from-RoBERTa-Large-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [MiniLMv2-L6-H768-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 4:39:02 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/minilmv2-l6-h768-from-roberta-large-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.8072658181190491, # 'start': 30, # 'end': 68, # 'answer': 'Machine Reading for Question Answering' # } ``` # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "minilmv2-l6-h768-from-roberta-large-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=16, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer. F1 score: A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer. # Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529.346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
null
Non_BioNLP
This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** MiniLMv2-L6-H768-from-RoBERTa-Large-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [MiniLMv2-L6-H768-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 4:39:02 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/minilmv2-l6-h768-from-roberta-large-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.8072658181190491, # 'start': 30, # 'end': 68, # 'answer': 'Machine Reading for Question Answering' # } ``` # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "minilmv2-l6-h768-from-roberta-large-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=16, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer. F1 score: A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer. # Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529.346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
{"datasets": ["mrqa"], "language": ["en"], "license": "apache-2.0", "metrics": ["exact_match", "f1"], "model-index": [{"name": "VMware/minilmv2-l6-h768-from-roberta-large-mrqa", "results": [{"task": {"type": "Question-Answering"}, "dataset": {"name": "MRQA", "type": "mrqa"}, "metrics": [{"type": "exact_match", "value": 65.8, "name": "Eval EM"}, {"type": "f1", "value": 77.17, "name": "Eval F1"}, {"type": "exact_match", "value": 51.72, "name": "Test EM"}, {"type": "f1", "value": 63.27, "name": "Test F1"}]}]}]}
task
[ "QUESTION_ANSWERING" ]
42,994
skothale/marian-finetuned-kde4-en-to-fr
skothale
translation
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-12-03T06:42:02Z
2024-12-04T05:16:44+00:00
7
0
--- base_model: Helsinki-NLP/opus-mt-en-fr datasets: - kde4 library_name: transformers license: apache-2.0 metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: type: text2text-generation name: Sequence-to-sequence Language Modeling dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - type: bleu value: 52.90204973205105 name: Bleu --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8554 - Model Preparation Time: 0.0214 - Bleu: 52.9020 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8554 - Model Preparation Time: 0.0214 - Bleu: 52.9020 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
{"base_model": "Helsinki-NLP/opus-mt-en-fr", "datasets": ["kde4"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-fr", "split": "train", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 52.90204973205105, "name": "Bleu"}]}]}]}
task
[ "TRANSLATION" ]
42,995
context-labs/meta-llama-Llama-3.2-1B-Instruct-FP16
context-labs
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "arxiv:2405.16406", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2025-02-21T18:57:39Z
2025-02-21T19:19:37+00:00
22
0
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \n“Licensee” or “you” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display “Built with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include “Llama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a “Notice” text file distributed as a part of such copies: “Llama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software “bug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | | Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-1B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 | | Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 | | Total | 833k | 86k | | 240 | 0 | \*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required. The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Quantization ### Quantization Scheme We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts: - All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations. - The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation. - Similar to classification layer, an 8-bit per channel quantization is used for embedding layer. ### Quantization-Aware Training and LoRA The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO). ### SpinQuant [SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 | | Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 | \*\*for comparison purposes only. Model not released. ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 | | | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 | | | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 | | | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 | | | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 | | | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 | | | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 | \*\*for comparison purposes only. Model not released. ## Inference time In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device. | Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) | | :---- | ----- | ----- | ----- | ----- | ----- | | 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 | | 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) | | 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) | | 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 | | 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) | | 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) | (\*) The performance measurement is done using an adb binary-based approach. (\*\*) It is measured on an Android OnePlus 12 device. (\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64 *Footnote:* - *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.* - *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.* - *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better* - *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch* - *RSS size \- Memory usage in resident set size (RSS)* ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
null
Non_BioNLP
## Model Information The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | | Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-1B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 | | Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 | | Total | 833k | 86k | | 240 | 0 | \*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required. The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Quantization ### Quantization Scheme We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts: - All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations. - The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation. - Similar to classification layer, an 8-bit per channel quantization is used for embedding layer. ### Quantization-Aware Training and LoRA The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO). ### SpinQuant [SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 | | Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 | \*\*for comparison purposes only. Model not released. ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 | | | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 | | | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 | | | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 | | | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 | | | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 | | | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 | \*\*for comparison purposes only. Model not released. ## Inference time In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device. | Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) | | :---- | ----- | ----- | ----- | ----- | ----- | | 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 | | 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) | | 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) | | 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 | | 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) | | 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) | (\*) The performance measurement is done using an adb binary-based approach. (\*\*) It is measured on an Android OnePlus 12 device. (\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64 *Footnote:* - *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.* - *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.* - *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better* - *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch* - *RSS size \- Memory usage in resident set size (RSS)* ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
{"language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "library_name": "transformers", "license": "llama3.2", "pipeline_tag": "text-generation", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "extra_gated_prompt": "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\n“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\n“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. \nb. Redistribution and Use. \ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. \niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate the law or others’ rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law\n 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta \n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:\n 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled substances\n 11. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:\n 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 16. Generating, promoting, or further distributing spam\n 17. Impersonating another individual without consent, authorization, or legal right\n 18. Representing that the use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement \n4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.\n\nPlease report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "Job title": {"type": "select", "options": ["Student", "Research Graduate", "AI researcher", "AI developer/engineer", "Reporter", "Other"]}, "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
task
[ "SUMMARIZATION" ]
42,996
huanvo88/marian-finetuned-kde4-en-to-vi
huanvo88
translation
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-04-09T15:36:57Z
2023-04-09T17:33:05+00:00
10
0
--- datasets: - kde4 license: apache-2.0 metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: marian-finetuned-kde4-en-to-vi results: - task: type: text2text-generation name: Sequence-to-sequence Language Modeling dataset: name: kde4 type: kde4 config: en-vi split: train args: en-vi metrics: - type: bleu value: 40.28564516132024 name: Bleu --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-vi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 1.2129 - Bleu: 40.2856 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.1+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-vi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 1.2129 - Bleu: 40.2856 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.1+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
{"datasets": ["kde4"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-vi", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-vi", "split": "train", "args": "en-vi"}, "metrics": [{"type": "bleu", "value": 40.28564516132024, "name": "Bleu"}]}]}]}
task
[ "TRANSLATION" ]
42,997
google/paligemma-3b-ft-infovqa-896
google
image-text-to-text
[ "transformers", "safetensors", "paligemma", "image-text-to-text", "arxiv:2310.09199", "arxiv:2303.15343", "arxiv:2403.08295", "arxiv:1706.03762", "arxiv:2010.11929", "arxiv:2209.06794", "arxiv:2209.04372", "arxiv:2103.01913", "arxiv:2401.06209", "arxiv:2305.10355", "arxiv:2205.12522", "arxiv:2110.11624", "arxiv:2108.03353", "arxiv:2010.04295", "arxiv:2203.10244", "arxiv:1810.12440", "arxiv:1905.13648", "arxiv:1608.00272", "arxiv:1908.04913", "arxiv:2407.07726", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-05-13T00:24:38Z
2024-07-19T12:10:10+00:00
40
0
--- library_name: transformers license: gemma pipeline_tag: image-text-to-text extra_gated_heading: Access PaliGemma on Hugging Face extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # PaliGemma model card **Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) Transformers PaliGemma 3B weights, fine-tuned with 896*896 input images on the <a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic_VQA</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/infovqa.py">big_vision</a>. **Resources and technical documentation:** * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma) * [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363) **Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-infovqa-896) **Authors:** Google ## Model information ### Model summary #### Description PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma language model](https://arxiv.org/abs/2403.08295). It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation. #### Model architecture PaliGemma is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion params. The text decoder is initialized from [Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). PaliGemma is trained following the PaLI-3 recipes. #### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. ### Model data #### Pre-train datasets PaliGemma is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, multilinguality, etc. * **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. * **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). * **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. * **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)). [OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html #### Data responsibility filtering The following filters are applied to WebLI, with the goal of training PaliGemma on clean data: * **Pornographic image filtering:** This filter removes images deemed to be of pornographic nature. * **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about CSAI, pornography, vulgarities, or otherwise offensive. * **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic. * **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed. * **Additional methods:** Filtering based on content quality and safety in line with our policies and practices. [other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759 ## How to Use PaliGemma is a single-turn vision language model not meant for conversational use, and it works best when fine-tuning to a specific use case. You can configure which task the model will solve by conditioning it with task prefixes, such as “detect” or “segment”. The pretrained models were trained in this fashion to imbue them with a rich set of capabilities (question answering, captioning, segmentation, etc.). However, they are not designed to be used directly, but to be transferred (by fine-tuning) to specific tasks using a similar prompt structure. For interactive testing, you can use the "mix" family of models, which have been fine-tuned on a mixture of tasks. Please, refer to the [usage and limitations section](#usage-and-limitations) for intended use cases, or visit the [blog post](https://huggingface.co/blog/paligemma-google-vlm) for additional details and examples. ## Use in Transformers The following snippets use model `google/paligemma-3b-mix-224` for reference purposes. The model in this repo you are now browsing may have been trained for other tasks, please make sure you use appropriate inputs for the task at hand. ### Running the default precision (`float32`) on CPU ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt") input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` Output: `Un auto azul estacionado frente a un edificio.` ### Running other precisions on CUDA For convenience, the repos contain revisions of the weights already converted to `bfloat16` and `float16`, so you can use them to reduce the download size and avoid casting on your local computer. This is how you'd run `bfloat16` on an nvidia CUDA card. ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" device = "cuda:0" dtype = torch.bfloat16 url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) model = PaliGemmaForConditionalGeneration.from_pretrained( model_id, torch_dtype=dtype, device_map=device, revision="bfloat16", ).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` ### Loading in 4-bit / 8-bit You need to install `bitsandbytes` to automatically run inference using 8-bit or 4-bit precision: ``` pip install bitsandbytes accelerate ``` ``` from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" device = "cuda:0" dtype = torch.bfloat16 url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) quantization_config = BitsAndBytesConfig(load_in_8bit=True) model = PaliGemmaForConditionalGeneration.from_pretrained( model_id, quantization_config=quantization_config ).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` ## Implementation information ### Hardware PaliGemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e). ### Software Training was done using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma fine-tune code and inference code are released in the `big_vision` GitHub repository. ## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma to a wide variety of academic tasks, we fine-tune the pretrained models on each task. Additionally we train the mix model with a mixture of the transfer tasks. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. #### Mix model (fine-tune on mixture of transfer tasks) <table> <tbody><tr> <th>Benchmark</th> <th>Metric (split)</th> <th>mix-224</th> <th>mix-448</th> </tr> <tr> <td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td> <td>Paired Accuracy</td> <td>46.00</td> <td>45.33</td> </tr> <tr> <td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td> <td>Accuracy<br>(random/popular/adversarial)</td> <td> 88.00<br> 86.63<br> 85.67 </td> <td> 89.37<br> 88.40<br> 87.47 </td> </tr> <tr> <td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td> <td>Accuracy (test)</td> <td>65.20</td> <td>65.47</td> </tr> </tbody></table> #### Single task (fine-tune on single task) <table> <tbody><tr> <th>Benchmark<br>(train split)</th> <th>Metric<br>(split)</th> <th>pt-224</th> <th>pt-448</th> <th>pt-896</th> </tr> <tr> <th>Captioning</th> </tr> <tr> <td> <a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval) </td> <td>CIDEr (val)</td> <td>141.92</td> <td>144.60</td> </tr> <tr> <td> <a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer) </td> <td>CIDEr (val)</td> <td>121.72</td> <td>123.58</td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 139.2<br> 115.8<br> 116.4 </td> <td> 141.2<br> 118.0<br> 118.6 </td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 78.1<br> 41.3<br> 42.4 </td> <td> 80.0<br> 41.9<br> 42.9 </td> </tr> <tr> <td> <a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train) </td> <td>CIDEr (val)</td> <td>127.48</td> <td>153.94</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val) </td> <td>CIDEr/BLEU-4<br>(test)</td> <td> 162.25<br> 0.192<br> </td> <td> 181.49<br> 0.211<br> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>117.57</td> <td>119.59</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>136.07</td> <td>148.36</td> </tr> <tr> <th>Question answering</th> </tr> <tr> <td> <a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation) </td> <td>Accuracy<br>(Test server - std)</td> <td>83.19</td> <td>85.64</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer) </td> <td>Paired Accuracy</td> <td>47.33</td> <td>45.33</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer) </td> <td>Accuracy<br>(random/popular/<br>adversarial)</td> <td> 87.80<br> 85.87<br> 84.27 </td> <td> 88.23<br> 86.77<br> 85.90 </td> </tr> <tr> <td> <a href="https://okvqa.allenai.org/">OKVQA</a><br>(train) </td> <td>Accuracy (val)</td> <td>63.54</td> <td>63.15</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>76.37</td> <td>76.90</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>61.85</td> <td>63.22</td> </tr> <tr> <td> <a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced) </td> <td>Accuracy<br>(testdev balanced)</td> <td>65.61</td> <td>67.03</td> </tr> <tr> <td> <a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer) </td> <td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td> <td>58.37</td> <td>59.07</td> </tr> <tr> <td> <a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev) </td> <td>Accuracy (test)</td> <td>90.02</td> <td>88.93</td> </tr> <tr> <td> <a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer) </td> <td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td> <td>80.57</td> <td>76.78</td> </tr> <tr> <td> <a href="https://allenai.org/data/diagrams">AI2D</a><br>(train) </td> <td>Accuracy (test)</td> <td>72.12</td> <td>73.28</td> </tr> <tr> <td> <a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val) </td> <td>Accuracy (test)</td> <td>95.39</td> <td>95.93</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test)</td> <td>92.65</td> <td>93.11</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test/test2)</td> <td> 92.61<br> 90.58 </td> <td> 92.79<br> 90.54 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val) </td> <td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td> <td>57.08</td> <td>71.36</td> </tr> <tr> <td> <a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td> 73.7 </td> <td> 75.52 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train) </td> <td>Accuracy<br>(test_simple/<br>test_complex)</td> <td> 81.72<br> 69.56 </td> <td> 84.86<br> 72.27 </td> </tr> <tr> <td> <a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val) </td> <td>Accuracy (test)</td> <td>72.32</td> <td>74.61</td> <td>74.93</td> </tr> <tr> <td> <a href="https://textvqa.org/">TextVQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td>55.47</td> <td>73.15</td> <td>76.48</td> </tr> <tr> <td> <a href="https://www.docvqa.org/">DocVQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>43.74</td> <td>78.02</td> <td>84.77</td> </tr> <tr> <td> <a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>28.46</td> <td>40.47</td> <td>47.75</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>63.29</td> <td>81.82</td> <td>84.40</td> </tr> <tr> <th>Segmentation</th> </tr> <tr> <td> <a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images) </td> <td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td> <td> 73.40<br> 68.32<br> 67.65 </td> <td> 75.57<br> 69.76<br> 70.17 </td> <td> 76.94<br> 72.18<br> 72.22 </td> </tr> <tr> <th>Video tasks (Caption/QA)</th> </tr> <tr> <td>MSR-VTT (Captioning)</td> <td>CIDEr (test)</td> <td>70.54</td> </tr> <tr> <td>MSR-VTT (QA)</td> <td>Accuracy (test)</td> <td>50.09</td> </tr> <tr> <td>ActivityNet (Captioning)</td> <td>CIDEr (test)</td> <td>34.62</td> </tr> <tr> <td>ActivityNet (QA)</td> <td>Accuracy (test)</td> <td>50.78</td> </tr> <tr> <td>VATEX (Captioning)</td> <td>CIDEr (test)</td> <td>79.73</td> </tr> <tr> <td>MSVD (QA)</td> <td>Accuracy (test)</td> <td>60.22</td> </tr> </tbody></table> ## Ethics and safety ### Evaluation approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Human evaluation on prompts covering child safety, content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups. * Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. <table> <tbody><tr> </tr></tbody><tbody><tr><th>Metric</th> <th>Perceived<br>gender</th> <th></th> <th>Ethnicity</th> <th></th> <th>Age group</th> <th></th> </tr> <tr> <th></th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> </tr> <tr> <td>Toxicity</td> <td>0.04%</td> <td>0.03%</td> <td>0.08%</td> <td>0.00%</td> <td>0.09%</td> <td>0.00%</td> </tr> <tr> <td>Identity Attack</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.06%</td> <td>0.04%</td> <td>0.09%</td> <td>0.07%</td> <td>0.16%</td> <td>0.00%</td> </tr> <tr> <td>Threat</td> <td>0.06%</td> <td>0.05%</td> <td>0.14%</td> <td>0.05%</td> <td>0.17%</td> <td>0.00%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </tbody></table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. * The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, describe UI element functionalities. * The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks. Vision-language research: * The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field. ### Ethical considerations and risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * VLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). * Transparency and Accountability * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * **Generation of harmful content:** Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Limitations * Most limitations inherited from the underlying Gemma model still apply: * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations. * PaliGemma was designed first and foremost to serve as a general pre-trained model for transfer to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for that. * PaliGemma is not a multi-turn chatbot. It is designed for a single round of image and text input. ## Citation ```bibtex @article{beyer2024paligemma, title={{PaliGemma: A versatile 3B VLM for transfer}}, author={Lucas Beyer* and Andreas Steiner* and André Susano Pinto* and Alexander Kolesnikov* and Xiao Wang* and Daniel Salz and Maxim Neumann and Ibrahim Alabdulmohsin and Michael Tschannen and Emanuele Bugliarello and Thomas Unterthiner and Daniel Keysers and Skanda Koppula and Fangyu Liu and Adam Grycner and Alexey Gritsenko and Neil Houlsby and Manoj Kumar and Keran Rong and Julian Eisenschlos and Rishabh Kabra and Matthias Bauer and Matko Bošnjak and Xi Chen and Matthias Minderer and Paul Voigtlaender and Ioana Bica and Ivana Balazevic and Joan Puigcerver and Pinelopi Papalampidi and Olivier Henaff and Xi Xiong and Radu Soricut and Jeremiah Harmsen and Xiaohua Zhai*}, year={2024}, journal={arXiv preprint arXiv:2407.07726} } ``` Find the paper [here](https://arxiv.org/abs/2407.07726).
null
Non_BioNLP
# PaliGemma model card **Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) Transformers PaliGemma 3B weights, fine-tuned with 896*896 input images on the <a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic_VQA</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/infovqa.py">big_vision</a>. **Resources and technical documentation:** * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma) * [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363) **Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-infovqa-896) **Authors:** Google ## Model information ### Model summary #### Description PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma language model](https://arxiv.org/abs/2403.08295). It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation. #### Model architecture PaliGemma is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion params. The text decoder is initialized from [Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). PaliGemma is trained following the PaLI-3 recipes. #### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. ### Model data #### Pre-train datasets PaliGemma is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, multilinguality, etc. * **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. * **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). * **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. * **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)). [OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html #### Data responsibility filtering The following filters are applied to WebLI, with the goal of training PaliGemma on clean data: * **Pornographic image filtering:** This filter removes images deemed to be of pornographic nature. * **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about CSAI, pornography, vulgarities, or otherwise offensive. * **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic. * **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed. * **Additional methods:** Filtering based on content quality and safety in line with our policies and practices. [other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759 ## How to Use PaliGemma is a single-turn vision language model not meant for conversational use, and it works best when fine-tuning to a specific use case. You can configure which task the model will solve by conditioning it with task prefixes, such as “detect” or “segment”. The pretrained models were trained in this fashion to imbue them with a rich set of capabilities (question answering, captioning, segmentation, etc.). However, they are not designed to be used directly, but to be transferred (by fine-tuning) to specific tasks using a similar prompt structure. For interactive testing, you can use the "mix" family of models, which have been fine-tuned on a mixture of tasks. Please, refer to the [usage and limitations section](#usage-and-limitations) for intended use cases, or visit the [blog post](https://huggingface.co/blog/paligemma-google-vlm) for additional details and examples. ## Use in Transformers The following snippets use model `google/paligemma-3b-mix-224` for reference purposes. The model in this repo you are now browsing may have been trained for other tasks, please make sure you use appropriate inputs for the task at hand. ### Running the default precision (`float32`) on CPU ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt") input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` Output: `Un auto azul estacionado frente a un edificio.` ### Running other precisions on CUDA For convenience, the repos contain revisions of the weights already converted to `bfloat16` and `float16`, so you can use them to reduce the download size and avoid casting on your local computer. This is how you'd run `bfloat16` on an nvidia CUDA card. ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" device = "cuda:0" dtype = torch.bfloat16 url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) model = PaliGemmaForConditionalGeneration.from_pretrained( model_id, torch_dtype=dtype, device_map=device, revision="bfloat16", ).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` ### Loading in 4-bit / 8-bit You need to install `bitsandbytes` to automatically run inference using 8-bit or 4-bit precision: ``` pip install bitsandbytes accelerate ``` ``` from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" device = "cuda:0" dtype = torch.bfloat16 url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) quantization_config = BitsAndBytesConfig(load_in_8bit=True) model = PaliGemmaForConditionalGeneration.from_pretrained( model_id, quantization_config=quantization_config ).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` ## Implementation information ### Hardware PaliGemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e). ### Software Training was done using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma fine-tune code and inference code are released in the `big_vision` GitHub repository. ## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma to a wide variety of academic tasks, we fine-tune the pretrained models on each task. Additionally we train the mix model with a mixture of the transfer tasks. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. #### Mix model (fine-tune on mixture of transfer tasks) <table> <tbody><tr> <th>Benchmark</th> <th>Metric (split)</th> <th>mix-224</th> <th>mix-448</th> </tr> <tr> <td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td> <td>Paired Accuracy</td> <td>46.00</td> <td>45.33</td> </tr> <tr> <td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td> <td>Accuracy<br>(random/popular/adversarial)</td> <td> 88.00<br> 86.63<br> 85.67 </td> <td> 89.37<br> 88.40<br> 87.47 </td> </tr> <tr> <td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td> <td>Accuracy (test)</td> <td>65.20</td> <td>65.47</td> </tr> </tbody></table> #### Single task (fine-tune on single task) <table> <tbody><tr> <th>Benchmark<br>(train split)</th> <th>Metric<br>(split)</th> <th>pt-224</th> <th>pt-448</th> <th>pt-896</th> </tr> <tr> <th>Captioning</th> </tr> <tr> <td> <a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval) </td> <td>CIDEr (val)</td> <td>141.92</td> <td>144.60</td> </tr> <tr> <td> <a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer) </td> <td>CIDEr (val)</td> <td>121.72</td> <td>123.58</td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 139.2<br> 115.8<br> 116.4 </td> <td> 141.2<br> 118.0<br> 118.6 </td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 78.1<br> 41.3<br> 42.4 </td> <td> 80.0<br> 41.9<br> 42.9 </td> </tr> <tr> <td> <a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train) </td> <td>CIDEr (val)</td> <td>127.48</td> <td>153.94</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val) </td> <td>CIDEr/BLEU-4<br>(test)</td> <td> 162.25<br> 0.192<br> </td> <td> 181.49<br> 0.211<br> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>117.57</td> <td>119.59</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>136.07</td> <td>148.36</td> </tr> <tr> <th>Question answering</th> </tr> <tr> <td> <a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation) </td> <td>Accuracy<br>(Test server - std)</td> <td>83.19</td> <td>85.64</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer) </td> <td>Paired Accuracy</td> <td>47.33</td> <td>45.33</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer) </td> <td>Accuracy<br>(random/popular/<br>adversarial)</td> <td> 87.80<br> 85.87<br> 84.27 </td> <td> 88.23<br> 86.77<br> 85.90 </td> </tr> <tr> <td> <a href="https://okvqa.allenai.org/">OKVQA</a><br>(train) </td> <td>Accuracy (val)</td> <td>63.54</td> <td>63.15</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>76.37</td> <td>76.90</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>61.85</td> <td>63.22</td> </tr> <tr> <td> <a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced) </td> <td>Accuracy<br>(testdev balanced)</td> <td>65.61</td> <td>67.03</td> </tr> <tr> <td> <a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer) </td> <td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td> <td>58.37</td> <td>59.07</td> </tr> <tr> <td> <a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev) </td> <td>Accuracy (test)</td> <td>90.02</td> <td>88.93</td> </tr> <tr> <td> <a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer) </td> <td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td> <td>80.57</td> <td>76.78</td> </tr> <tr> <td> <a href="https://allenai.org/data/diagrams">AI2D</a><br>(train) </td> <td>Accuracy (test)</td> <td>72.12</td> <td>73.28</td> </tr> <tr> <td> <a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val) </td> <td>Accuracy (test)</td> <td>95.39</td> <td>95.93</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test)</td> <td>92.65</td> <td>93.11</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test/test2)</td> <td> 92.61<br> 90.58 </td> <td> 92.79<br> 90.54 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val) </td> <td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td> <td>57.08</td> <td>71.36</td> </tr> <tr> <td> <a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td> 73.7 </td> <td> 75.52 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train) </td> <td>Accuracy<br>(test_simple/<br>test_complex)</td> <td> 81.72<br> 69.56 </td> <td> 84.86<br> 72.27 </td> </tr> <tr> <td> <a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val) </td> <td>Accuracy (test)</td> <td>72.32</td> <td>74.61</td> <td>74.93</td> </tr> <tr> <td> <a href="https://textvqa.org/">TextVQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td>55.47</td> <td>73.15</td> <td>76.48</td> </tr> <tr> <td> <a href="https://www.docvqa.org/">DocVQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>43.74</td> <td>78.02</td> <td>84.77</td> </tr> <tr> <td> <a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>28.46</td> <td>40.47</td> <td>47.75</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>63.29</td> <td>81.82</td> <td>84.40</td> </tr> <tr> <th>Segmentation</th> </tr> <tr> <td> <a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images) </td> <td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td> <td> 73.40<br> 68.32<br> 67.65 </td> <td> 75.57<br> 69.76<br> 70.17 </td> <td> 76.94<br> 72.18<br> 72.22 </td> </tr> <tr> <th>Video tasks (Caption/QA)</th> </tr> <tr> <td>MSR-VTT (Captioning)</td> <td>CIDEr (test)</td> <td>70.54</td> </tr> <tr> <td>MSR-VTT (QA)</td> <td>Accuracy (test)</td> <td>50.09</td> </tr> <tr> <td>ActivityNet (Captioning)</td> <td>CIDEr (test)</td> <td>34.62</td> </tr> <tr> <td>ActivityNet (QA)</td> <td>Accuracy (test)</td> <td>50.78</td> </tr> <tr> <td>VATEX (Captioning)</td> <td>CIDEr (test)</td> <td>79.73</td> </tr> <tr> <td>MSVD (QA)</td> <td>Accuracy (test)</td> <td>60.22</td> </tr> </tbody></table> ## Ethics and safety ### Evaluation approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Human evaluation on prompts covering child safety, content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups. * Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. <table> <tbody><tr> </tr></tbody><tbody><tr><th>Metric</th> <th>Perceived<br>gender</th> <th></th> <th>Ethnicity</th> <th></th> <th>Age group</th> <th></th> </tr> <tr> <th></th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> </tr> <tr> <td>Toxicity</td> <td>0.04%</td> <td>0.03%</td> <td>0.08%</td> <td>0.00%</td> <td>0.09%</td> <td>0.00%</td> </tr> <tr> <td>Identity Attack</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.06%</td> <td>0.04%</td> <td>0.09%</td> <td>0.07%</td> <td>0.16%</td> <td>0.00%</td> </tr> <tr> <td>Threat</td> <td>0.06%</td> <td>0.05%</td> <td>0.14%</td> <td>0.05%</td> <td>0.17%</td> <td>0.00%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </tbody></table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. * The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, describe UI element functionalities. * The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks. Vision-language research: * The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field. ### Ethical considerations and risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * VLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). * Transparency and Accountability * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * **Generation of harmful content:** Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Limitations * Most limitations inherited from the underlying Gemma model still apply: * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations. * PaliGemma was designed first and foremost to serve as a general pre-trained model for transfer to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for that. * PaliGemma is not a multi-turn chatbot. It is designed for a single round of image and text input. ## Citation ```bibtex @article{beyer2024paligemma, title={{PaliGemma: A versatile 3B VLM for transfer}}, author={Lucas Beyer* and Andreas Steiner* and André Susano Pinto* and Alexander Kolesnikov* and Xiao Wang* and Daniel Salz and Maxim Neumann and Ibrahim Alabdulmohsin and Michael Tschannen and Emanuele Bugliarello and Thomas Unterthiner and Daniel Keysers and Skanda Koppula and Fangyu Liu and Adam Grycner and Alexey Gritsenko and Neil Houlsby and Manoj Kumar and Keran Rong and Julian Eisenschlos and Rishabh Kabra and Matthias Bauer and Matko Bošnjak and Xi Chen and Matthias Minderer and Paul Voigtlaender and Ioana Bica and Ivana Balazevic and Joan Puigcerver and Pinelopi Papalampidi and Olivier Henaff and Xi Xiong and Radu Soricut and Jeremiah Harmsen and Xiaohua Zhai*}, year={2024}, journal={arXiv preprint arXiv:2407.07726} } ``` Find the paper [here](https://arxiv.org/abs/2407.07726).
{"library_name": "transformers", "license": "gemma", "pipeline_tag": "image-text-to-text", "extra_gated_heading": "Access PaliGemma on Hugging Face", "extra_gated_prompt": "To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
task
[ "QUESTION_ANSWERING", "TRANSLATION" ]
42,998
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1280
Lots-of-LoRAs
null
[ "pytorch", "safetensors", "en", "arxiv:1910.09700", "arxiv:2407.00066", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:mit", "region:us" ]
2025-01-03T18:24:16Z
2025-01-03T18:24:22+00:00
0
0
--- base_model: mistralai/Mistral-7B-Instruct-v0.2 language: en library_name: pytorch license: mit --- # Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1280 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task1280_ted_translation_pt_it - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task1280_ted_translation_pt_it sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
null
Non_BioNLP
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1280 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task1280_ted_translation_pt_it - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task1280_ted_translation_pt_it sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "language": "en", "library_name": "pytorch", "license": "mit"}
task
[ "TRANSLATION" ]
42,999
spacemanidol/flan-t5-large-6-1-cnndm
spacemanidol
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-03-06T20:31:08Z
2023-03-17T17:04:56+00:00
8
0
--- datasets: - cnn_dailymail metrics: - rouge tags: - generated_from_trainer model-index: - name: large-6-1-t results: - task: type: summarization name: Summarization dataset: name: cnn_dailymail 3.0.0 type: cnn_dailymail config: 3.0.0 split: validation args: 3.0.0 metrics: - type: rouge value: 41.4182 name: Rouge1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # large-6-1-t This model is a fine-tuned version of [6-1](https://huggingface.co/6-1) on the cnn_dailymail 3.0.0 dataset. It achieves the following results on the evaluation set: - Loss: 1.6639 - Rouge1: 41.4182 - Rouge2: 19.4871 - Rougel: 30.3528 - Rougelsum: 38.7818 - Gen Len: 70.3855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 30 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 60 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.27.1 - Pytorch 1.12.0+cu116 - Datasets 2.4.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # large-6-1-t This model is a fine-tuned version of [6-1](https://huggingface.co/6-1) on the cnn_dailymail 3.0.0 dataset. It achieves the following results on the evaluation set: - Loss: 1.6639 - Rouge1: 41.4182 - Rouge2: 19.4871 - Rougel: 30.3528 - Rougelsum: 38.7818 - Gen Len: 70.3855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 30 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 60 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.27.1 - Pytorch 1.12.0+cu116 - Datasets 2.4.0 - Tokenizers 0.13.2
{"datasets": ["cnn_dailymail"], "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "large-6-1-t", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "cnn_dailymail 3.0.0", "type": "cnn_dailymail", "config": "3.0.0", "split": "validation", "args": "3.0.0"}, "metrics": [{"type": "rouge", "value": 41.4182, "name": "Rouge1"}]}]}]}
task
[ "SUMMARIZATION" ]
43,000
BoghdadyJR/distilbert-base-uncased-finetuned-emotion
BoghdadyJR
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-06-29T13:06:40Z
2024-06-29T13:25:37+00:00
92
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.943 name: Accuracy - type: f1 value: 0.9428390727005334 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1829 - Accuracy: 0.943 - F1: 0.9428 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 2000 | 0.2299 | 0.929 | 0.9296 | | No log | 2.0 | 4000 | 0.1829 | 0.943 | 0.9428 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1829 - Accuracy: 0.943 - F1: 0.9428 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 2000 | 0.2299 | 0.929 | 0.9296 | | No log | 2.0 | 4000 | 0.1829 | 0.943 | 0.9428 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.943, "name": "Accuracy"}, {"type": "f1", "value": 0.9428390727005334, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,001
QuantFactory/LLaMAX3-8B-GGUF
QuantFactory
text-generation
[ "gguf", "Multilingual", "text-generation", "arxiv:2407.05975", "base_model:LLaMAX/LLaMAX3-8B", "base_model:quantized:LLaMAX/LLaMAX3-8B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
2024-07-12T06:02:40Z
2024-07-13T12:45:19+00:00
352
2
--- base_model: LLaMAX/LLaMAX3-8B license: mit pipeline_tag: text-generation tags: - Multilingual --- # QuantFactory/LLaMAX3-8B-GGUF This is quantized version of [LLaMAX/LLaMAX3-8B](https://huggingface.co/LLaMAX/LLaMAX3-8B) created using llama.cpp # Model Description ### Model Sources - **Paper**: LLaMAX: Scaling Linguistic Horizons of LLM by Enhancing Translation Capabilities Beyond 100 Languages - **Link**: https://arxiv.org/pdf/2407.05975 - **Repository**: https://github.com/CONE-MT/LLaMAX/ ### Model Description LLaMAX3-8B is a multilingual language base model, developed through continued pre-training on Llama3, and supports over 100 languages. LLaMAX3-8B can serve as a base model to support downstream multilingual tasks but without instruct-following capability. We further fine-tune LLaMAX3-8B on Alpaca dataset to enhance its instruct-following capabilities. The model is available at https://huggingface.co/LLaMAX/LLaMAX3-8B-Alpaca. ### Supported Languages Akrikaans (af), Amharic (am), Arabic (ar), Armenian (hy), Assamese (as), Asturian (ast), Azerbaijani (az), Belarusian (be), Bengali (bn), Bosnian (bs), Bulgarian (bg), Burmese (my), Catalan (ca), Cebuano (ceb), Chinese Simpl (zho), Chinese Trad (zho), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Filipino (tl), Finnish (fi), French (fr), Fulah (ff), Galician (gl), Ganda (lg), Georgian (ka), German (de), Greek (el), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Hungarian (hu), Icelandic (is), Igbo (ig), Indonesian (id), Irish (ga), Italian (it), Japanese (ja), Javanese (jv), Kabuverdianu (kea), Kamba (kam), Kannada (kn), Kazakh (kk), Khmer (km), Korean (ko), Kyrgyz (ky), Lao (lo), Latvian (lv), Lingala (ln), Lithuanian (lt), Luo (luo), Luxembourgish (lb), Macedonian (mk), Malay (ms), Malayalam (ml), Maltese (mt), Maori (mi), Marathi (mr), Mongolian (mn), Nepali (ne), Northern Sotho (ns), Norwegian (no), Nyanja (ny), Occitan (oc), Oriya (or), Oromo (om), Pashto (ps), Persian (fa), Polish (pl), Portuguese (pt), Punjabi (pa), Romanian (ro), Russian (ru), Serbian (sr), Shona (sn), Sindhi (sd), Slovak (sk), Slovenian (sl), Somali (so), Sorani Kurdish (ku), Spanish (es), Swahili (sw), Swedish (sv), Tajik (tg), Tamil (ta), Telugu (te), Thai (th), Turkish (tr), Ukrainian (uk), Umbundu (umb), Urdu (ur), Uzbek (uz), Vietnamese (vi), Welsh (cy), Wolof (wo), Xhosa (xh), Yoruba (yo), Zulu (zu) ### Model Index | Model | LLaMAX | LLaMAX-Alpaca | |---------|----------------------------------------------------------|-----------------------------------------------------------------| | Llama-2 | [Link](https://huggingface.co/LLaMAX/LLaMAX2-7B) | [Link](https://huggingface.co/LLaMAX/LLaMAX2-7B-Alpaca) | | Llama-3 | [Link](https://huggingface.co/LLaMAX/LLaMAX3-8B) | [Link](https://huggingface.co/LLaMAX/LLaMAX3-8B-Alpaca) | ### Model Citation If our model helps your work, please cite this paper: ``` @misc{lu2024llamaxscalinglinguistichorizons, title={LLaMAX: Scaling Linguistic Horizons of LLM by Enhancing Translation Capabilities Beyond 100 Languages}, author={Yinquan Lu and Wenhao Zhu and Lei Li and Yu Qiao and Fei Yuan}, year={2024}, eprint={2407.05975}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2407.05975}, } ```
null
Non_BioNLP
# QuantFactory/LLaMAX3-8B-GGUF This is quantized version of [LLaMAX/LLaMAX3-8B](https://huggingface.co/LLaMAX/LLaMAX3-8B) created using llama.cpp # Model Description ### Model Sources - **Paper**: LLaMAX: Scaling Linguistic Horizons of LLM by Enhancing Translation Capabilities Beyond 100 Languages - **Link**: https://arxiv.org/pdf/2407.05975 - **Repository**: https://github.com/CONE-MT/LLaMAX/ ### Model Description LLaMAX3-8B is a multilingual language base model, developed through continued pre-training on Llama3, and supports over 100 languages. LLaMAX3-8B can serve as a base model to support downstream multilingual tasks but without instruct-following capability. We further fine-tune LLaMAX3-8B on Alpaca dataset to enhance its instruct-following capabilities. The model is available at https://huggingface.co/LLaMAX/LLaMAX3-8B-Alpaca. ### Supported Languages Akrikaans (af), Amharic (am), Arabic (ar), Armenian (hy), Assamese (as), Asturian (ast), Azerbaijani (az), Belarusian (be), Bengali (bn), Bosnian (bs), Bulgarian (bg), Burmese (my), Catalan (ca), Cebuano (ceb), Chinese Simpl (zho), Chinese Trad (zho), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Filipino (tl), Finnish (fi), French (fr), Fulah (ff), Galician (gl), Ganda (lg), Georgian (ka), German (de), Greek (el), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Hungarian (hu), Icelandic (is), Igbo (ig), Indonesian (id), Irish (ga), Italian (it), Japanese (ja), Javanese (jv), Kabuverdianu (kea), Kamba (kam), Kannada (kn), Kazakh (kk), Khmer (km), Korean (ko), Kyrgyz (ky), Lao (lo), Latvian (lv), Lingala (ln), Lithuanian (lt), Luo (luo), Luxembourgish (lb), Macedonian (mk), Malay (ms), Malayalam (ml), Maltese (mt), Maori (mi), Marathi (mr), Mongolian (mn), Nepali (ne), Northern Sotho (ns), Norwegian (no), Nyanja (ny), Occitan (oc), Oriya (or), Oromo (om), Pashto (ps), Persian (fa), Polish (pl), Portuguese (pt), Punjabi (pa), Romanian (ro), Russian (ru), Serbian (sr), Shona (sn), Sindhi (sd), Slovak (sk), Slovenian (sl), Somali (so), Sorani Kurdish (ku), Spanish (es), Swahili (sw), Swedish (sv), Tajik (tg), Tamil (ta), Telugu (te), Thai (th), Turkish (tr), Ukrainian (uk), Umbundu (umb), Urdu (ur), Uzbek (uz), Vietnamese (vi), Welsh (cy), Wolof (wo), Xhosa (xh), Yoruba (yo), Zulu (zu) ### Model Index | Model | LLaMAX | LLaMAX-Alpaca | |---------|----------------------------------------------------------|-----------------------------------------------------------------| | Llama-2 | [Link](https://huggingface.co/LLaMAX/LLaMAX2-7B) | [Link](https://huggingface.co/LLaMAX/LLaMAX2-7B-Alpaca) | | Llama-3 | [Link](https://huggingface.co/LLaMAX/LLaMAX3-8B) | [Link](https://huggingface.co/LLaMAX/LLaMAX3-8B-Alpaca) | ### Model Citation If our model helps your work, please cite this paper: ``` @misc{lu2024llamaxscalinglinguistichorizons, title={LLaMAX: Scaling Linguistic Horizons of LLM by Enhancing Translation Capabilities Beyond 100 Languages}, author={Yinquan Lu and Wenhao Zhu and Lei Li and Yu Qiao and Fei Yuan}, year={2024}, eprint={2407.05975}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2407.05975}, } ```
{"base_model": "LLaMAX/LLaMAX3-8B", "license": "mit", "pipeline_tag": "text-generation", "tags": ["Multilingual"]}
task
[ "TRANSLATION" ]
43,002
mesolitica/t5-base-standard-bahasa-cased
mesolitica
feature-extraction
[ "transformers", "pytorch", "t5", "feature-extraction", "ms", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2022-10-06T15:23:59+00:00
145
0
--- language: ms --- # t5-base-standard-bahasa-cased Pretrained T5 base standard language model for Malay. ## Pretraining Corpus `t5-base-standard-bahasa-cased` model was pretrained on multiple tasks. Below is list of tasks we trained on, 1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile. 2. News title prediction on bahasa news. 3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile. 4. Translated QA Natural. 5. Text Similarity task on translated SNLI and translated MNLI. 6. EN-MS translation. 7. MS-EN translation. 8. Abstractive Summarization. 9. Knowledge Graph triples generation. 10. Paraphrase. Preparing steps can reproduce at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare ## Pretraining details - This model was trained using Google T5 repository https://github.com/google-research/text-to-text-transfer-transformer, on v3-8 TPU. - All steps can reproduce from here, https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5 ## Supported prefix 1. `soalan: {string}`, trained using Natural QA. 2. `ringkasan: {string}`, for abstractive summarization. 3. `tajuk: {string}`, for abstractive title. 4. `parafrasa: {string}`, for abstractive paraphrase. 5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation. 6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation. 7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format. 8. `ayat1: {string1} ayat2: {string2}`, semantic similarity.
null
Non_BioNLP
# t5-base-standard-bahasa-cased Pretrained T5 base standard language model for Malay. ## Pretraining Corpus `t5-base-standard-bahasa-cased` model was pretrained on multiple tasks. Below is list of tasks we trained on, 1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile. 2. News title prediction on bahasa news. 3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile. 4. Translated QA Natural. 5. Text Similarity task on translated SNLI and translated MNLI. 6. EN-MS translation. 7. MS-EN translation. 8. Abstractive Summarization. 9. Knowledge Graph triples generation. 10. Paraphrase. Preparing steps can reproduce at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare ## Pretraining details - This model was trained using Google T5 repository https://github.com/google-research/text-to-text-transfer-transformer, on v3-8 TPU. - All steps can reproduce from here, https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5 ## Supported prefix 1. `soalan: {string}`, trained using Natural QA. 2. `ringkasan: {string}`, for abstractive summarization. 3. `tajuk: {string}`, for abstractive title. 4. `parafrasa: {string}`, for abstractive paraphrase. 5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation. 6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation. 7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format. 8. `ayat1: {string1} ayat2: {string2}`, semantic similarity.
{"language": "ms"}
task
[ "SEMANTIC_SIMILARITY", "TRANSLATION", "SUMMARIZATION" ]
43,003
lightblue/suzume-llama-3-8B-multilingual-gguf
lightblue
null
[ "gguf", "generated_from_trainer", "arxiv:2405.12612", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "endpoints_compatible", "region:us", "conversational" ]
2024-04-23T03:01:18Z
2024-06-02T02:14:49+00:00
960
27
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct license: other license_name: llama-3 license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/raw/main/LICENSE tags: - generated_from_trainer model-index: - name: lightblue/suzume-llama-3-8B-multilingual results: [] --- <p align="center"> <img width=400 src="https://hf.fast360.xyz/production/uploads/64b63f8ad57e02621dc93c8b/kg3QjQOde0X743csGJT-f.png" alt="Suzume - a Japanese tree sparrow"/> </p> # Suzume [[Paper](https://arxiv.org/abs/2405.12612)] [[Dataset](https://huggingface.co/datasets/lightblue/tagengo-gpt4)] This Suzume 8B, a multilingual finetune of Llama 3. Llama 3 has exhibited excellent performance on many English language benchmarks. However, it also seemingly been finetuned on mostly English data, meaning that it will respond in English, even if prompted in other languages. We have fine-tuned Llama 3 on more than 80,000 multilingual conversations meaning that this model has the smarts of Llama 3 but has the added ability to chat in more languages. Please feel free to comment on this model and give us feedback in the Community tab! # How to use The easiest way to use this model on your own computer is to use the [GGUF version of this model (lightblue/suzume-llama-3-8B-multilingual-gguf)](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-gguf) using a program such as [jan.ai](https://jan.ai/) or [LM Studio](https://lmstudio.ai/). If you want to use this model directly in Python, we recommend using vLLM for the fastest inference speeds. ```python from vllm import LLM, SamplingParams sampling_params = SamplingParams(temperature=0.0, max_tokens=100) llm = LLM(model="lightblue/suzume-llama-3-8B-multilingual") messages = [] messages.append({"role": "user", "content": "Bonjour!"}) prompt = llm.llm_engine.tokenizer.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False) prompts = [prompt] outputs = llm.generate(prompts, sampling_params) for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` # Evaluation scores We achieve the following MT-Bench scores across 6 languages: | | **meta-llama/Meta-Llama-3-8B-Instruct** | **lightblue/suzume-llama-3-8B-multilingual** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** | |-----------------|-----------------------------------------|----------------------------------------------|-----------------------------------|-------------------| | **German** 🇩🇪 | NaN | 7.26 | 6.99 | 7.68 | | **French** 🇫🇷 | NaN | 7.66 | 7.29 | 7.74 | | **Japanese** 🇯🇵 | NaN | 6.56 | 6.22 | 7.84 | | **Russian** 🇷🇺 | NaN | 8.19 | 8.28 | 7.94 | | **Chinese** 🇨🇳 | NaN | 7.11 | 6.97 | 7.55 | | **English** 🇺🇸 | 7.98 | 7.73 | 7.92 | 8.26 | We observe minimal degredation of Llama 3's English ability while achieving best-in-class multilingual abilities compared to the top rated 7B model ([Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)) on the [Chatbot Arena Leaderboard](https://chat.lmsys.org/?leaderboard). [Here is our evaluation script.](https://drive.google.com/file/d/15HPn7452t8LbTD9HKSl7ngYYWnsoOG08/view?usp=sharing) # Training data We train on three sources of data to create this model: * [lightblue/tagengo-gpt4](https://huggingface.co/datasets/lightblue/tagengo-gpt4) - 76,338 conversations * A diverse dataset of initial inputs sampled from [lmsys/lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) and then used to prompt `gpt-4-0125-preview` * [megagonlabs/instruction_ja](https://github.com/megagonlabs/instruction_ja) - 669 conversations * A hand-edited dataset of nearly 700 Japanese conversations taken originally from translations of the [kunishou/hh-rlhf-49k-ja](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja) dataset. * [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json) - 6,206 conversations * Multilingual conversations of humans talking to GPT-4. <details><summary>We prepare our data like so:</summary> ```python import pandas as pd from datasets import Dataset, load_dataset, concatenate_datasets ### Tagengo gpt4_dataset = load_dataset("lightblue/tagengo-gpt4", split="train") gpt4_dataset = gpt4_dataset.filter(lambda x: x["response"][1] == "stop") #### ### Megagon megagon_df = pd.read_json( "https://raw.githubusercontent.com/megagonlabs/instruction_ja/main/data/data.jsonl", lines=True, orient="records" ) role_map = {"user": "human", "agent": "gpt"} megagon_df["conversations"] = megagon_df.utterances.apply(lambda x: [{"from": role_map[y["name"]], "value": y["text"]} for y in x]) megagon_df["language"] = "Japanese" megagon_df = megagon_df[["conversations", "language"]] megagon_dataset = Dataset.from_pandas(df) ### ### Openchat openchat_df = pd.read_json("https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json?download=true") openchat_df["conversations"] = openchat_df["items"] openchat_dataset = Dataset.from_pandas(openchat_df) ### dataset = concatenate_datasets([gpt4_dataset, megagon_dataset, openchat_dataset]) dataset = dataset.filter(lambda x: not any([y["value"] is None for y in x["conversations"]])) dataset.select_columns(["conversations"]).to_json("/workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json") ``` </details> <br/> # workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the above described dataset. It achieves the following results on the evaluation set: - Loss: 0.6595 ## Training procedure <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B-Instruct model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast load_in_8bit: false load_in_4bit: false strict: false datasets: - path: /workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json ds_type: json # see other options below type: sharegpt conversation: llama-3 dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual/prepared_tagengo_openchat_megagon val_set_size: 0.01 output_dir: /workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3 sequence_len: 8192 sample_packing: true pad_to_sequence_len: true use_wandb: true wandb_project: wandb_project wandb_entity: wandb_entity wandb_name: wandb_name gradient_accumulation_steps: 2 micro_batch_size: 2 num_epochs: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 1e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 5 eval_table_size: saves_per_epoch: 1 debug: deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json weight_decay: 0.0 special_tokens: pad_token: <|end_of_text|> ``` </details><br> <details><summary>Note - we added this Llama 3 template to fastchat directly as the Llama 3 chat template was not supported when we trained this model.</summary> ```python from fastchat.conversation import Conversation from fastchat.conversation import register_conv_template from fastchat.conversation import SeparatorStyle register_conv_template( Conversation( name="llama-3", system_template="<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_message}", roles=("<|start_header_id|>user<|end_header_id|>\n", "<|start_header_id|>assistant<|end_header_id|>\n"), sep_style=SeparatorStyle.ADD_NEW_LINE_SINGLE, sep="<|eot_id|>", stop_token_ids=[128009], stop_str="<|eot_id|>", ) ) ``` </details><br> ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1894 | 0.0 | 1 | 1.0110 | | 0.8493 | 0.2 | 73 | 0.7057 | | 0.8047 | 0.4 | 146 | 0.6835 | | 0.7644 | 0.6 | 219 | 0.6687 | | 0.7528 | 0.8 | 292 | 0.6615 | | 0.7794 | 1.0 | 365 | 0.6595 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.0 # How to cite Please cite [this paper](https://arxiv.org/abs/2405.12612) when referencing this model. ```tex @article{devine2024tagengo, title={Tagengo: A Multilingual Chat Dataset}, author={Devine, Peter}, journal={arXiv preprint arXiv:2405.12612}, year={2024} } ``` # Developer Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn))
null
Non_BioNLP
<p align="center"> <img width=400 src="https://hf.fast360.xyz/production/uploads/64b63f8ad57e02621dc93c8b/kg3QjQOde0X743csGJT-f.png" alt="Suzume - a Japanese tree sparrow"/> </p> # Suzume [[Paper](https://arxiv.org/abs/2405.12612)] [[Dataset](https://huggingface.co/datasets/lightblue/tagengo-gpt4)] This Suzume 8B, a multilingual finetune of Llama 3. Llama 3 has exhibited excellent performance on many English language benchmarks. However, it also seemingly been finetuned on mostly English data, meaning that it will respond in English, even if prompted in other languages. We have fine-tuned Llama 3 on more than 80,000 multilingual conversations meaning that this model has the smarts of Llama 3 but has the added ability to chat in more languages. Please feel free to comment on this model and give us feedback in the Community tab! # How to use The easiest way to use this model on your own computer is to use the [GGUF version of this model (lightblue/suzume-llama-3-8B-multilingual-gguf)](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-gguf) using a program such as [jan.ai](https://jan.ai/) or [LM Studio](https://lmstudio.ai/). If you want to use this model directly in Python, we recommend using vLLM for the fastest inference speeds. ```python from vllm import LLM, SamplingParams sampling_params = SamplingParams(temperature=0.0, max_tokens=100) llm = LLM(model="lightblue/suzume-llama-3-8B-multilingual") messages = [] messages.append({"role": "user", "content": "Bonjour!"}) prompt = llm.llm_engine.tokenizer.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False) prompts = [prompt] outputs = llm.generate(prompts, sampling_params) for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` # Evaluation scores We achieve the following MT-Bench scores across 6 languages: | | **meta-llama/Meta-Llama-3-8B-Instruct** | **lightblue/suzume-llama-3-8B-multilingual** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** | |-----------------|-----------------------------------------|----------------------------------------------|-----------------------------------|-------------------| | **German** 🇩🇪 | NaN | 7.26 | 6.99 | 7.68 | | **French** 🇫🇷 | NaN | 7.66 | 7.29 | 7.74 | | **Japanese** 🇯🇵 | NaN | 6.56 | 6.22 | 7.84 | | **Russian** 🇷🇺 | NaN | 8.19 | 8.28 | 7.94 | | **Chinese** 🇨🇳 | NaN | 7.11 | 6.97 | 7.55 | | **English** 🇺🇸 | 7.98 | 7.73 | 7.92 | 8.26 | We observe minimal degredation of Llama 3's English ability while achieving best-in-class multilingual abilities compared to the top rated 7B model ([Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)) on the [Chatbot Arena Leaderboard](https://chat.lmsys.org/?leaderboard). [Here is our evaluation script.](https://drive.google.com/file/d/15HPn7452t8LbTD9HKSl7ngYYWnsoOG08/view?usp=sharing) # Training data We train on three sources of data to create this model: * [lightblue/tagengo-gpt4](https://huggingface.co/datasets/lightblue/tagengo-gpt4) - 76,338 conversations * A diverse dataset of initial inputs sampled from [lmsys/lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) and then used to prompt `gpt-4-0125-preview` * [megagonlabs/instruction_ja](https://github.com/megagonlabs/instruction_ja) - 669 conversations * A hand-edited dataset of nearly 700 Japanese conversations taken originally from translations of the [kunishou/hh-rlhf-49k-ja](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja) dataset. * [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json) - 6,206 conversations * Multilingual conversations of humans talking to GPT-4. <details><summary>We prepare our data like so:</summary> ```python import pandas as pd from datasets import Dataset, load_dataset, concatenate_datasets ### Tagengo gpt4_dataset = load_dataset("lightblue/tagengo-gpt4", split="train") gpt4_dataset = gpt4_dataset.filter(lambda x: x["response"][1] == "stop") #### ### Megagon megagon_df = pd.read_json( "https://raw.githubusercontent.com/megagonlabs/instruction_ja/main/data/data.jsonl", lines=True, orient="records" ) role_map = {"user": "human", "agent": "gpt"} megagon_df["conversations"] = megagon_df.utterances.apply(lambda x: [{"from": role_map[y["name"]], "value": y["text"]} for y in x]) megagon_df["language"] = "Japanese" megagon_df = megagon_df[["conversations", "language"]] megagon_dataset = Dataset.from_pandas(df) ### ### Openchat openchat_df = pd.read_json("https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json?download=true") openchat_df["conversations"] = openchat_df["items"] openchat_dataset = Dataset.from_pandas(openchat_df) ### dataset = concatenate_datasets([gpt4_dataset, megagon_dataset, openchat_dataset]) dataset = dataset.filter(lambda x: not any([y["value"] is None for y in x["conversations"]])) dataset.select_columns(["conversations"]).to_json("/workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json") ``` </details> <br/> # workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the above described dataset. It achieves the following results on the evaluation set: - Loss: 0.6595 ## Training procedure <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B-Instruct model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast load_in_8bit: false load_in_4bit: false strict: false datasets: - path: /workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json ds_type: json # see other options below type: sharegpt conversation: llama-3 dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual/prepared_tagengo_openchat_megagon val_set_size: 0.01 output_dir: /workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3 sequence_len: 8192 sample_packing: true pad_to_sequence_len: true use_wandb: true wandb_project: wandb_project wandb_entity: wandb_entity wandb_name: wandb_name gradient_accumulation_steps: 2 micro_batch_size: 2 num_epochs: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 1e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 5 eval_table_size: saves_per_epoch: 1 debug: deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json weight_decay: 0.0 special_tokens: pad_token: <|end_of_text|> ``` </details><br> <details><summary>Note - we added this Llama 3 template to fastchat directly as the Llama 3 chat template was not supported when we trained this model.</summary> ```python from fastchat.conversation import Conversation from fastchat.conversation import register_conv_template from fastchat.conversation import SeparatorStyle register_conv_template( Conversation( name="llama-3", system_template="<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_message}", roles=("<|start_header_id|>user<|end_header_id|>\n", "<|start_header_id|>assistant<|end_header_id|>\n"), sep_style=SeparatorStyle.ADD_NEW_LINE_SINGLE, sep="<|eot_id|>", stop_token_ids=[128009], stop_str="<|eot_id|>", ) ) ``` </details><br> ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1894 | 0.0 | 1 | 1.0110 | | 0.8493 | 0.2 | 73 | 0.7057 | | 0.8047 | 0.4 | 146 | 0.6835 | | 0.7644 | 0.6 | 219 | 0.6687 | | 0.7528 | 0.8 | 292 | 0.6615 | | 0.7794 | 1.0 | 365 | 0.6595 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.0 # How to cite Please cite [this paper](https://arxiv.org/abs/2405.12612) when referencing this model. ```tex @article{devine2024tagengo, title={Tagengo: A Multilingual Chat Dataset}, author={Devine, Peter}, journal={arXiv preprint arXiv:2405.12612}, year={2024} } ``` # Developer Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn))
{"base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "license": "other", "license_name": "llama-3", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/raw/main/LICENSE", "tags": ["generated_from_trainer"], "model-index": [{"name": "lightblue/suzume-llama-3-8B-multilingual", "results": []}]}
task
[ "TRANSLATION" ]
43,004
Hgkang00/FT-label-consent-20
Hgkang00
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "dataset_size:10K<n<100K", "loss:CoSENTLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-29T11:21:07Z
2024-05-29T11:21:24+00:00
6
0
--- base_model: sentence-transformers/all-MiniLM-L6-v2 language: [] library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - dataset_size:10K<n<100K - loss:CoSENTLoss widget: - source_sentence: Driving or commuting to work feels draining, even if it's a short distance. sentences: - Symptoms during a manic episode include decreased need for sleep, more talkative than usual, flight of ideas, distractibility - I feel like I have lost a part of myself since the traumatic event, and I struggle to connect with others on a deeper level. - Diagnosis requires at least one hypomanic episode and one major depressive episode. - source_sentence: I felt like my thoughts were disconnected and chaotic during a manic episode. sentences: - Diagnosis requires one or more manic episodes, which may be preceded or followed by hypomanic or major depressive episodes. - I feel like I have lost a part of myself since the traumatic event, and I struggle to connect with others on a deeper level. - Depressed mood for most of the day, for more days than not, as indicated by subjective account or observation, for at least 2 years. - source_sentence: My insomnia has caused me to experience frequent headaches and muscle soreness. sentences: - Insomnia or hypersomnia nearly every day. - I have difficulty standing in long lines at the grocery store or the bank due to the fear of feeling trapped or overwhelmed. - Diagnosis requires at least one hypomanic episode and one major depressive episode. - source_sentence: The phobic object or situation almost always provokes immediate fear or anxiety. sentences: - The agoraphobic situations almost always provoke fear or anxiety. - I have difficulty standing in long lines at the grocery store or the bank due to the fear of feeling trapped or overwhelmed. - Exclusion of schizoaffective disorder and depressive or bipolar disorder with psychotic features, based on the absence of concurrent depressive or manic episodes during the active-phase symptoms, or these mood episodes being present for a minority of the total duration of the active and residual phases. - source_sentence: I engage in risky behaviors like reckless driving or reckless sexual encounters. sentences: - Symptoms during a manic episode include inflated self-esteem or grandiosity,increased goal-directed activity, or excessive involvement in risky activities. - Marked decrease in functioning in areas like work, interpersonal relations, or self-care since the onset of the disturbance. - During the specified period, symptoms from Criterion A are present at least half the time with no symptom-free interval lasting longer than 2 months. model-index: - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: FT label type: FT_label metrics: - type: pearson_cosine value: 0.4627701543833943 name: Pearson Cosine - type: spearman_cosine value: 0.4076356119364853 name: Spearman Cosine - type: pearson_manhattan value: 0.48164714740150605 name: Pearson Manhattan - type: spearman_manhattan value: 0.406731043246377 name: Spearman Manhattan - type: pearson_euclidean value: 0.4840582172096936 name: Pearson Euclidean - type: spearman_euclidean value: 0.407636256115058 name: Spearman Euclidean - type: pearson_dot value: 0.46277015122653486 name: Pearson Dot - type: spearman_dot value: 0.4076359510487126 name: Spearman Dot - type: pearson_max value: 0.4840582172096936 name: Pearson Max - type: spearman_max value: 0.407636256115058 name: Spearman Max --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision e4ce9877abf3edfe10b0d82785e83bdcb973e22e --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Hgkang00/FT-label-consent-20") # Run inference sentences = [ 'I engage in risky behaviors like reckless driving or reckless sexual encounters.', 'Symptoms during a manic episode include inflated self-esteem or grandiosity,increased goal-directed activity, or excessive involvement in risky activities.', 'Marked decrease in functioning in areas like work, interpersonal relations, or self-care since the onset of the disturbance.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `FT_label` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.4628 | | **spearman_cosine** | **0.4076** | | pearson_manhattan | 0.4816 | | spearman_manhattan | 0.4067 | | pearson_euclidean | 0.4841 | | spearman_euclidean | 0.4076 | | pearson_dot | 0.4628 | | spearman_dot | 0.4076 | | pearson_max | 0.4841 | | spearman_max | 0.4076 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 33,800 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 11 tokens</li><li>mean: 31.63 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 25.22 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: -1.0</li><li>mean: -0.87</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------| | <code>Presence of one or more of the following intrusion symptoms associated with the traumatic event: recurrent distressing memories, dreams, flashbacks, psychological distress, or physiological reactions to cues of the traumatic event.</code> | <code>I avoid making phone calls, even to close friends or family, because I'm afraid of saying something wrong or sounding awkward.</code> | <code>0.0</code> | | <code>The phobic object or situation almost always provokes immediate fear or anxiety.</code> | <code>I find it hard to stick to a consistent eating schedule, sometimes going days without feeling the need to eat at all.</code> | <code>-1.0</code> | | <code>The fear or anxiety is out of proportion to the actual danger posed by the specific object or situation and to the sociocultural context.</code> | <code>I have difficulty going to places where I feel there are no immediate exits, such as cinemas or auditoriums, as the fear of being stuck or unable to escape escalates my anxiety.</code> | <code>-1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 4,225 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 11 tokens</li><li>mean: 31.24 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 24.86 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: -1.0</li><li>mean: -0.87</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------|:------------------| | <code>Excessive anxiety and worry occurring more days than not for at least 6 months, about a number of events or activities such as work or school performance.</code> | <code>Simple activities like going for a walk or doing household chores feel like daunting tasks due to my low energy levels.</code> | <code>-1.0</code> | | <code>The individual fears acting in a way or showing anxiety symptoms that will be negatively evaluated, leading to humiliation, embarrassment, rejection, or offense to others.</code> | <code>I often find myself mindlessly snacking throughout the day due to changes in my appetite.</code> | <code>-1.0</code> | | <code>Persistent avoidance of stimuli associated with the trauma, evidenced by avoiding distressing memories, thoughts, or feelings, or external reminders of the event.</code> | <code>Simple activities like going for a walk or doing household chores feel like daunting tasks due to my low energy levels.</code> | <code>-1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `num_train_epochs`: 20 - `warmup_ratio`: 0.1 #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 20 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | FT_label_spearman_cosine | |:-----:|:----:|:-------------:|:-------:|:------------------------:| | 1.0 | 265 | - | 6.9529 | 0.3450 | | 2.0 | 530 | 7.5663 | 7.1002 | 0.4103 | | 3.0 | 795 | - | 7.4786 | 0.4155 | | 4.0 | 1060 | 5.5492 | 8.6710 | 0.4115 | | 5.0 | 1325 | - | 10.3786 | 0.4056 | | 6.0 | 1590 | 4.3991 | 10.4239 | 0.3987 | | 7.0 | 1855 | - | 11.8681 | 0.4238 | | 8.0 | 2120 | 3.5916 | 13.0752 | 0.4030 | | 9.0 | 2385 | - | 12.8567 | 0.4240 | | 10.0 | 2650 | 3.1139 | 12.4373 | 0.4270 | | 11.0 | 2915 | - | 13.6725 | 0.4212 | | 12.0 | 3180 | 2.6658 | 15.0521 | 0.4134 | | 13.0 | 3445 | - | 15.4305 | 0.4114 | | 14.0 | 3710 | 2.2024 | 15.5511 | 0.4060 | | 15.0 | 3975 | - | 14.9427 | 0.4165 | | 16.0 | 4240 | 1.8955 | 14.8399 | 0.4162 | | 17.0 | 4505 | - | 15.0070 | 0.4170 | | 18.0 | 4770 | 1.712 | 15.4417 | 0.4105 | | 19.0 | 5035 | - | 15.6241 | 0.4086 | | 20.0 | 5300 | 1.5088 | 15.6818 | 0.4076 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.0 - Transformers: 4.41.1 - PyTorch: 2.3.0+cu121 - Accelerate: 0.30.1 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
BioNLP
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision e4ce9877abf3edfe10b0d82785e83bdcb973e22e --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Hgkang00/FT-label-consent-20") # Run inference sentences = [ 'I engage in risky behaviors like reckless driving or reckless sexual encounters.', 'Symptoms during a manic episode include inflated self-esteem or grandiosity,increased goal-directed activity, or excessive involvement in risky activities.', 'Marked decrease in functioning in areas like work, interpersonal relations, or self-care since the onset of the disturbance.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `FT_label` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.4628 | | **spearman_cosine** | **0.4076** | | pearson_manhattan | 0.4816 | | spearman_manhattan | 0.4067 | | pearson_euclidean | 0.4841 | | spearman_euclidean | 0.4076 | | pearson_dot | 0.4628 | | spearman_dot | 0.4076 | | pearson_max | 0.4841 | | spearman_max | 0.4076 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 33,800 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 11 tokens</li><li>mean: 31.63 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 25.22 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: -1.0</li><li>mean: -0.87</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------| | <code>Presence of one or more of the following intrusion symptoms associated with the traumatic event: recurrent distressing memories, dreams, flashbacks, psychological distress, or physiological reactions to cues of the traumatic event.</code> | <code>I avoid making phone calls, even to close friends or family, because I'm afraid of saying something wrong or sounding awkward.</code> | <code>0.0</code> | | <code>The phobic object or situation almost always provokes immediate fear or anxiety.</code> | <code>I find it hard to stick to a consistent eating schedule, sometimes going days without feeling the need to eat at all.</code> | <code>-1.0</code> | | <code>The fear or anxiety is out of proportion to the actual danger posed by the specific object or situation and to the sociocultural context.</code> | <code>I have difficulty going to places where I feel there are no immediate exits, such as cinemas or auditoriums, as the fear of being stuck or unable to escape escalates my anxiety.</code> | <code>-1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 4,225 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 11 tokens</li><li>mean: 31.24 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 24.86 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: -1.0</li><li>mean: -0.87</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------|:------------------| | <code>Excessive anxiety and worry occurring more days than not for at least 6 months, about a number of events or activities such as work or school performance.</code> | <code>Simple activities like going for a walk or doing household chores feel like daunting tasks due to my low energy levels.</code> | <code>-1.0</code> | | <code>The individual fears acting in a way or showing anxiety symptoms that will be negatively evaluated, leading to humiliation, embarrassment, rejection, or offense to others.</code> | <code>I often find myself mindlessly snacking throughout the day due to changes in my appetite.</code> | <code>-1.0</code> | | <code>Persistent avoidance of stimuli associated with the trauma, evidenced by avoiding distressing memories, thoughts, or feelings, or external reminders of the event.</code> | <code>Simple activities like going for a walk or doing household chores feel like daunting tasks due to my low energy levels.</code> | <code>-1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `num_train_epochs`: 20 - `warmup_ratio`: 0.1 #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 20 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | FT_label_spearman_cosine | |:-----:|:----:|:-------------:|:-------:|:------------------------:| | 1.0 | 265 | - | 6.9529 | 0.3450 | | 2.0 | 530 | 7.5663 | 7.1002 | 0.4103 | | 3.0 | 795 | - | 7.4786 | 0.4155 | | 4.0 | 1060 | 5.5492 | 8.6710 | 0.4115 | | 5.0 | 1325 | - | 10.3786 | 0.4056 | | 6.0 | 1590 | 4.3991 | 10.4239 | 0.3987 | | 7.0 | 1855 | - | 11.8681 | 0.4238 | | 8.0 | 2120 | 3.5916 | 13.0752 | 0.4030 | | 9.0 | 2385 | - | 12.8567 | 0.4240 | | 10.0 | 2650 | 3.1139 | 12.4373 | 0.4270 | | 11.0 | 2915 | - | 13.6725 | 0.4212 | | 12.0 | 3180 | 2.6658 | 15.0521 | 0.4134 | | 13.0 | 3445 | - | 15.4305 | 0.4114 | | 14.0 | 3710 | 2.2024 | 15.5511 | 0.4060 | | 15.0 | 3975 | - | 14.9427 | 0.4165 | | 16.0 | 4240 | 1.8955 | 14.8399 | 0.4162 | | 17.0 | 4505 | - | 15.0070 | 0.4170 | | 18.0 | 4770 | 1.712 | 15.4417 | 0.4105 | | 19.0 | 5035 | - | 15.6241 | 0.4086 | | 20.0 | 5300 | 1.5088 | 15.6818 | 0.4076 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.0 - Transformers: 4.41.1 - PyTorch: 2.3.0+cu121 - Accelerate: 0.30.1 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/all-MiniLM-L6-v2", "language": [], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "dataset_size:10K<n<100K", "loss:CoSENTLoss"], "widget": [{"source_sentence": "Driving or commuting to work feels draining, even if it's a short distance.", "sentences": ["Symptoms during a manic episode include decreased need for sleep, more talkative than usual, flight of ideas, distractibility", "I feel like I have lost a part of myself since the traumatic event, and I struggle to connect with others on a deeper level.", "Diagnosis requires at least one hypomanic episode and one major depressive episode."]}, {"source_sentence": "I felt like my thoughts were disconnected and chaotic during a manic episode.", "sentences": ["Diagnosis requires one or more manic episodes, which may be preceded or followed by hypomanic or major depressive episodes.", "I feel like I have lost a part of myself since the traumatic event, and I struggle to connect with others on a deeper level.", "Depressed mood for most of the day, for more days than not, as indicated by subjective account or observation, for at least 2 years."]}, {"source_sentence": "My insomnia has caused me to experience frequent headaches and muscle soreness.", "sentences": ["Insomnia or hypersomnia nearly every day.", "I have difficulty standing in long lines at the grocery store or the bank due to the fear of feeling trapped or overwhelmed.", "Diagnosis requires at least one hypomanic episode and one major depressive episode."]}, {"source_sentence": "The phobic object or situation almost always provokes immediate fear or anxiety.", "sentences": ["The agoraphobic situations almost always provoke fear or anxiety.", "I have difficulty standing in long lines at the grocery store or the bank due to the fear of feeling trapped or overwhelmed.", "Exclusion of schizoaffective disorder and depressive or bipolar disorder with psychotic features, based on the absence of concurrent depressive or manic episodes during the active-phase symptoms, or these mood episodes being present for a minority of the total duration of the active and residual phases."]}, {"source_sentence": "I engage in risky behaviors like reckless driving or reckless sexual encounters.", "sentences": ["Symptoms during a manic episode include inflated self-esteem or grandiosity,increased goal-directed activity, or excessive involvement in risky activities.", "Marked decrease in functioning in areas like work, interpersonal relations, or self-care since the onset of the disturbance.", "During the specified period, symptoms from Criterion A are present at least half the time with no symptom-free interval lasting longer than 2 months."]}], "model-index": [{"name": "SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "FT label", "type": "FT_label"}, "metrics": [{"type": "pearson_cosine", "value": 0.4627701543833943, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.4076356119364853, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.48164714740150605, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.406731043246377, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.4840582172096936, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.407636256115058, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.46277015122653486, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.4076359510487126, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.4840582172096936, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.407636256115058, "name": "Spearman Max"}]}]}]}
task
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY" ]
43,005
kmilo/distilbert-base-uncased-finetuned-emotion
kmilo
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-02-13T14:09:14Z
2023-02-13T23:42:42+00:00
11
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.925 name: Accuracy - type: f1 value: 0.925111680535546 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2164 - Accuracy: 0.925 - F1: 0.9251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8143 | 1.0 | 250 | 0.3100 | 0.9075 | 0.9057 | | 0.2521 | 2.0 | 500 | 0.2164 | 0.925 | 0.9251 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2164 - Accuracy: 0.925 - F1: 0.9251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8143 | 1.0 | 250 | 0.3100 | 0.9075 | 0.9057 | | 0.2521 | 2.0 | 500 | 0.2164 | 0.925 | 0.9251 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.13.2
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.925, "name": "Accuracy"}, {"type": "f1", "value": 0.925111680535546, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,006
google/t5-efficient-small-el16-dl1
google
text2text-generation
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "deep-narrow", "en", "dataset:c4", "arxiv:2109.10686", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:05Z
2023-01-24T16:48:39+00:00
115
0
--- datasets: - c4 language: - en license: apache-2.0 tags: - deep-narrow inference: false --- # T5-Efficient-SMALL-EL16-DL1 (Deep-Narrow version) T5-Efficient-SMALL-EL16-DL1 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. ## Details model architecture This model checkpoint - **t5-efficient-small-el16-dl1** - is of model type **Small** with the following variations: - **el** is **16** - **dl** is **1** It has **71.01** million parameters and thus requires *ca.* **284.04 MB** of memory in full precision (*fp32*) or **142.02 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | #Params| | ----| ---- | ---- | ---- | ---- | ---- | ----| | Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M| | Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M| | Small | 6/6 | 2048 | 512 | 32 | 8 | 60M| | Base | 12/12 | 3072 | 768 | 64 | 12 | 220M| | Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M| | Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B| | XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B| whereas the following abbreviations are used: | Abbreviation | Definition | | ----| ---- | | nl | Number of transformer blocks (depth) | | dm | Dimension of embedding vector (output vector of transformers block) | | kv | Dimension of key/value projection matrix | | nh | Number of attention heads | | ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) | | el | Number of transformer blocks in the encoder (encoder depth) | | dl | Number of transformer blocks in the decoder (decoder depth) | | sh | Signifies that attention heads are shared | | skv | Signifies that key-values projection matrices are tied | If a model checkpoint has no specific, *el* or *dl* than both the number of encoder- and decoder layers correspond to *nl*. ## Pre-Training The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using the span-based masked language modeling (MLM) objective. ## Fine-Tuning **Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage. The checkpoint was pretrained in English and is therefore only useful for English NLP tasks. You can follow on of the following examples on how to fine-tune the model: *PyTorch*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) - [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *Tensorflow*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *JAX/Flax*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. ## Downstream Performance TODO: Add table if available ## Computational Complexity TODO: Add table if available ## More information We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint. As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv* model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
null
Non_BioNLP
# T5-Efficient-SMALL-EL16-DL1 (Deep-Narrow version) T5-Efficient-SMALL-EL16-DL1 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. ## Details model architecture This model checkpoint - **t5-efficient-small-el16-dl1** - is of model type **Small** with the following variations: - **el** is **16** - **dl** is **1** It has **71.01** million parameters and thus requires *ca.* **284.04 MB** of memory in full precision (*fp32*) or **142.02 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | #Params| | ----| ---- | ---- | ---- | ---- | ---- | ----| | Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M| | Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M| | Small | 6/6 | 2048 | 512 | 32 | 8 | 60M| | Base | 12/12 | 3072 | 768 | 64 | 12 | 220M| | Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M| | Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B| | XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B| whereas the following abbreviations are used: | Abbreviation | Definition | | ----| ---- | | nl | Number of transformer blocks (depth) | | dm | Dimension of embedding vector (output vector of transformers block) | | kv | Dimension of key/value projection matrix | | nh | Number of attention heads | | ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) | | el | Number of transformer blocks in the encoder (encoder depth) | | dl | Number of transformer blocks in the decoder (decoder depth) | | sh | Signifies that attention heads are shared | | skv | Signifies that key-values projection matrices are tied | If a model checkpoint has no specific, *el* or *dl* than both the number of encoder- and decoder layers correspond to *nl*. ## Pre-Training The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using the span-based masked language modeling (MLM) objective. ## Fine-Tuning **Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage. The checkpoint was pretrained in English and is therefore only useful for English NLP tasks. You can follow on of the following examples on how to fine-tune the model: *PyTorch*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) - [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *Tensorflow*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *JAX/Flax*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. ## Downstream Performance TODO: Add table if available ## Computational Complexity TODO: Add table if available ## More information We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint. As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv* model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
{"datasets": ["c4"], "language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "inference": false}
task
[ "TEXT_CLASSIFICATION", "QUESTION_ANSWERING", "SUMMARIZATION" ]
43,007
nsuruguay05/EQASpa-7b
nsuruguay05
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "es", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-11-16T18:46:32Z
2023-11-21T13:08:02+00:00
5
0
--- language: - es --- ## EQASpa 7b This model is a fine-tuned version of [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) for the Extractive Question Answering task in spanish. It was fine-tuned on the [QuALES 2022](https://www.fing.edu.uy/inco/grupos/pln/quales/) dataset training partition using LoRA for one epoch. ## Prompt format To use the model, the following prompting format should be applied: ### TEXTO: {{Context document}} ### PREGUNTA: {{Question}} ### RESPUESTA: ## Evaluation We evaluate the model on the test partition of the QuALES dataset, and compare it with one-shot prompting as a baseline. Prompt | Model | Acc_exact | F_bertscore --- | --- | --- | --- one-shot prompting | [zephyr-7b-beta](HuggingFaceH4/zephyr-7b-beta) | 0.025 | 0.614 one-shot prompting | [Llama-2-13b-chat-hf](meta-llama/Llama-2-13b-chat-hf) | 0.192 | 0.700 default | EQASpa 7b | **0.225** | **0.713** ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
null
Non_BioNLP
## EQASpa 7b This model is a fine-tuned version of [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) for the Extractive Question Answering task in spanish. It was fine-tuned on the [QuALES 2022](https://www.fing.edu.uy/inco/grupos/pln/quales/) dataset training partition using LoRA for one epoch. ## Prompt format To use the model, the following prompting format should be applied: ### TEXTO: {{Context document}} ### PREGUNTA: {{Question}} ### RESPUESTA: ## Evaluation We evaluate the model on the test partition of the QuALES dataset, and compare it with one-shot prompting as a baseline. Prompt | Model | Acc_exact | F_bertscore --- | --- | --- | --- one-shot prompting | [zephyr-7b-beta](HuggingFaceH4/zephyr-7b-beta) | 0.025 | 0.614 one-shot prompting | [Llama-2-13b-chat-hf](meta-llama/Llama-2-13b-chat-hf) | 0.192 | 0.700 default | EQASpa 7b | **0.225** | **0.713** ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
{"language": ["es"]}
task
[ "QUESTION_ANSWERING" ]
43,009
evernight017/distilbert-base-uncased-finetuned-emotion
evernight017
text-classification
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-07-19T17:45:37Z
2024-07-19T19:01:02+00:00
108
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.9335 name: Accuracy - type: f1 value: 0.9341507858797707 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1573 - Accuracy: 0.9335 - F1: 0.9342 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.1687 | 0.9325 | 0.9332 | | No log | 2.0 | 500 | 0.1573 | 0.9335 | 0.9342 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.15.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1573 - Accuracy: 0.9335 - F1: 0.9342 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.1687 | 0.9325 | 0.9332 | | No log | 2.0 | 500 | 0.1573 | 0.9335 | 0.9342 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.15.1
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9335, "name": "Accuracy"}, {"type": "f1", "value": 0.9341507858797707, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,010
csabdulelah/Ara-CANINE
csabdulelah
null
[ "pytorch", "region:us" ]
2023-12-22T13:27:19Z
2023-12-22T13:55:03+00:00
0
0
--- {} --- # Ara-CANINE Model Card ## Model Description **Ara-CANINE** is a state-of-the-art, character-based, pre-trained language model specifically developed for the Arabic language. Unlike traditional models that rely on sub-word tokenization, Ara-CANINE utilizes a free-tokenization method, processing raw characters without explicit tokenization. This approach allows the model to effectively capture the nuances of Arabic, including its various dialects and colloquial forms. ## Training Data Ara-CANINE was pre-trained on the QADI dialects tweets dataset, which includes tweets in 18 Arabic dialects, totaling 40GB of text data. The training was conducted for 40 epochs on 3 A100 GPUs, using the DeepSpeed library to optimize the training process. ## Evaluation The model's performance was evaluated using the Arabic Language Understanding Evaluation (ALUE) benchmark, consisting of eight diverse tasks that test various aspects of language understanding, such as sentiment analysis, text categorization, and dialect detection. ## Results Ara-CANINE demonstrates competitive performance compared to existing Arabic language models on the ALUE benchmark. It particularly excels in tasks involving dialects and social media text, attributed to its training on dialect-rich datasets. The model shows promise in specific linguistic contexts and has outperformed several larger models in certain tasks. ## Usage Ara-CANINE can be used for a variety of NLP tasks involving the Arabic language, including but not limited to sentiment analysis, dialect detection, and text classification. ## Availability The source code and model checkpoints for Ara-CANINE will be made publicly available on our GitHub repository, facilitating replication of experiments and further research in Arabic NLP. GitHub Repository: [Ara-CANINE GitHub](https://github.com/CsAbdulelah/Ara-CANINE.git) ## Citation If you use Ara-CANINE in your research, please cite our paper. ## Contact For questions or support regarding Ara-CANINE, please contact us at: - Email: [[email protected]](mailto:[email protected]) ## Acknowledgments This work was supported by the High-Performance Computing (HPC) Center at King Abdulaziz University, particularly the Aziz Supercomputer. --- license: apache-2.0 ---
null
Non_BioNLP
# Ara-CANINE Model Card ## Model Description **Ara-CANINE** is a state-of-the-art, character-based, pre-trained language model specifically developed for the Arabic language. Unlike traditional models that rely on sub-word tokenization, Ara-CANINE utilizes a free-tokenization method, processing raw characters without explicit tokenization. This approach allows the model to effectively capture the nuances of Arabic, including its various dialects and colloquial forms. ## Training Data Ara-CANINE was pre-trained on the QADI dialects tweets dataset, which includes tweets in 18 Arabic dialects, totaling 40GB of text data. The training was conducted for 40 epochs on 3 A100 GPUs, using the DeepSpeed library to optimize the training process. ## Evaluation The model's performance was evaluated using the Arabic Language Understanding Evaluation (ALUE) benchmark, consisting of eight diverse tasks that test various aspects of language understanding, such as sentiment analysis, text categorization, and dialect detection. ## Results Ara-CANINE demonstrates competitive performance compared to existing Arabic language models on the ALUE benchmark. It particularly excels in tasks involving dialects and social media text, attributed to its training on dialect-rich datasets. The model shows promise in specific linguistic contexts and has outperformed several larger models in certain tasks. ## Usage Ara-CANINE can be used for a variety of NLP tasks involving the Arabic language, including but not limited to sentiment analysis, dialect detection, and text classification. ## Availability The source code and model checkpoints for Ara-CANINE will be made publicly available on our GitHub repository, facilitating replication of experiments and further research in Arabic NLP. GitHub Repository: [Ara-CANINE GitHub](https://github.com/CsAbdulelah/Ara-CANINE.git) ## Citation If you use Ara-CANINE in your research, please cite our paper. ## Contact For questions or support regarding Ara-CANINE, please contact us at: - Email: [[email protected]](mailto:[email protected]) ## Acknowledgments This work was supported by the High-Performance Computing (HPC) Center at King Abdulaziz University, particularly the Aziz Supercomputer. --- license: apache-2.0 ---
{}
task
[ "TEXT_CLASSIFICATION" ]
43,011
RUCAIBox/mtl-story
RUCAIBox
text2text-generation
[ "transformers", "pytorch", "mvp", "text-generation", "text2text-generation", "en", "arxiv:2206.12131", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-06-02T12:00:10Z
2022-06-27T02:27:29+00:00
101
1
--- language: - en license: apache-2.0 pipeline_tag: text2text-generation tags: - text-generation - text2text-generation widget: - text: 'Given the story title: I think all public schools should have a uniform dress code.' example_title: Example1 - text: 'Given the story title: My girlfriend and I decided to move to a new state. We packed everything in our cars and drove there.' example_title: Example2 --- # MTL-story The MTL-story model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP). ## Model Description MTL-story is supervised pre-trained using a mixture of labeled story generation datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture. MTL-story is specially designed for story generation tasks, such as ROCStories and WritingPrompts. ## Example ```python >>> from transformers import MvpTokenizer, MvpForConditionalGeneration >>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp") >>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-story") >>> inputs = tokenizer( ... "Given the story title: I think all public schools should have a uniform dress code.", ... return_tensors="pt", ... ) >>> generated_ids = model.generate(**inputs, max_length=1024) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ["I don't know about you, but I don't think it would be a good idea to have a uniform dress code in public schools. I think it's a waste of time and money. If you're going to have uniform dress codes, you need to make sure that the uniforms are appropriate for the school and that the students are comfortable in them. If they're not comfortable, then they shouldn't be allowed to wear them."] ``` ## Related Models **MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp). **Prompt-based models**: - MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task). - MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). - MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). - MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). - MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog). ## Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
null
Non_BioNLP
# MTL-story The MTL-story model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP). ## Model Description MTL-story is supervised pre-trained using a mixture of labeled story generation datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture. MTL-story is specially designed for story generation tasks, such as ROCStories and WritingPrompts. ## Example ```python >>> from transformers import MvpTokenizer, MvpForConditionalGeneration >>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp") >>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-story") >>> inputs = tokenizer( ... "Given the story title: I think all public schools should have a uniform dress code.", ... return_tensors="pt", ... ) >>> generated_ids = model.generate(**inputs, max_length=1024) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ["I don't know about you, but I don't think it would be a good idea to have a uniform dress code in public schools. I think it's a waste of time and money. If you're going to have uniform dress codes, you need to make sure that the uniforms are appropriate for the school and that the students are comfortable in them. If they're not comfortable, then they shouldn't be allowed to wear them."] ``` ## Related Models **MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp). **Prompt-based models**: - MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task). - MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). - MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). - MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). - MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog). ## Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
{"language": ["en"], "license": "apache-2.0", "pipeline_tag": "text2text-generation", "tags": ["text-generation", "text2text-generation"], "widget": [{"text": "Given the story title: I think all public schools should have a uniform dress code.", "example_title": "Example1"}, {"text": "Given the story title: My girlfriend and I decided to move to a new state. We packed everything in our cars and drove there.", "example_title": "Example2"}]}
task
[ "SUMMARIZATION" ]
43,012
dekshitha-k/drug-name-SBERT
dekshitha-k
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:173752", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-11-17T19:32:42Z
2024-11-17T19:32:55+00:00
8
0
--- base_model: sentence-transformers/all-MiniLM-L6-v2 library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:173752 - loss:CosineSimilarityLoss widget: - source_sentence: '[''mefzic p syrup'', '' Ladex-P Syrup'', '' Emfort P Syrup'']' sentences: - '['' Jolydep 40mg Tablet'', '' Daxtin 40mg Tablet'', '' Sylonex 40 Tablet'']' - '['' Arnost Oral Suspension'', '' Lamenta Oral Suspension'', '' Nilcold Oral Suspension'']' - '['' Mefnoc P Syrup'', '' Mefnix P Syrup'', '' Parafen Syrup'']' - source_sentence: '[''tozito 250mg tablet'', '' Azicip 250 Tablet'', '' Azikem 250mg Tablet'']' sentences: - '['' Fitzee 250 Tablet'', '' Azax 250 Tablet'', '' Zerox 250 Tablet'']' - '['' Syntran Capsule'', '' Itrafree 100mg Capsule'', '' Itromed 100 Capsule'']' - '['' Tussberry-N Syrup'', '' New Brethese Syrup'', '' Suntuss CF Syrup'']' - source_sentence: '[''beemox 250mg capsule'', '' Amoxipen 250mg Capsule'', '' Hipen A 250mg Capsule'']' sentences: - '['' Polymonax 1MIU Injection'', '' Elistin 1MIU Injection'', '' Hicoly 1Million IU Injection'']' - '['' Zylo-P 5% Solution'', '' Drez 5% Solution'', '' Betaseptic 5% Solution'']' - '['' Moxipal 250mg Capsule'', '' Amoxil 250mg Capsule'', '' Amoxible 250 Capsule'']' - source_sentence: '[''enhancin ds 400mg/57mg tablet'', '' B Cillin CV Forte 400mg/57mg Tablet'', '' Moxclav BD DS 457mg Tablet'']' sentences: - '['' Duet 400mg/57mg Tablet'', '' Polyclav-DS Tablet'', '' Xylomox CV 400mg/57mg Tablet'']' - '['' Endopace 1mg/ml Suspension'', '' Nausidome 1mg/ml Suspension'', '' Domtop 1mg/ml Suspension'']' - '['' X Aim P 0.05% Nasal Drops'', '' Dolocold Junior Nasal Drops'', '' Ingee P Nasal Drops'']' - source_sentence: '[''ampace at 5 mg/50 mg tabcap'']' sentences: - '['' Mastertaz 1000 mg/125 mg Injection'', '' Fotaran TZ 1.25gm Injection'', '' Combitum 1000mg/125mg Injection'']' - '['' Atecard M 5 mg/50 mg Tabcap'']' - '['' Laxnova Plus Syrup'', '' Lacofin P Syrup'', '' Softway Plus Syrup'']' --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("dekshitha-k/drug-name-SBERT") # Run inference sentences = [ "['ampace at 5 mg/50 mg tabcap']", "[' Atecard M 5 mg/50 mg Tabcap']", "[' Laxnova Plus Syrup', ' Lacofin P Syrup', ' Softway Plus Syrup']", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 173,752 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------| | type | string | string | int | | details | <ul><li>min: 4 tokens</li><li>mean: 29.93 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 29.71 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:--------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------|:---------------| | <code>['unipod 50mg tablet dt', ' Xop 50mg Tablet DT', ' Grandcef 50mg Tablet DT']</code> | <code>[' C Pod 50mg Tablet DT', ' Codimol 50mg Tablet DT', ' Cepime O 50mg Tablet DT']</code> | <code>1</code> | | <code>['lenvima 10mg capsule', ' Lenvamed 10mg Capsule', ' Bdfoie 10mg Capsule']</code> | <code>[' Lenced 10mg Capsule', ' Lentykine 10 Capsule', ' Lenvatol 10mg Capsule']</code> | <code>1</code> | | <code>['brecold 5mg/325mg/5mg tablet', ' Snowzy Cold Tablet', ' Pcp Forte 5mg/325mg/5mg Tablet']</code> | <code>[' Ioncold-P 5mg/325mg/5mg Tablet', ' Candicold 5mg/325mg/5mg Tablet', ' Colvaxy Tablet']</code> | <code>1</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 1 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:-----:|:-------------:| | 0.0460 | 500 | 0.0384 | | 0.0921 | 1000 | 0.0045 | | 0.1381 | 1500 | 0.0009 | | 0.1842 | 2000 | 0.0003 | | 0.2302 | 2500 | 0.0002 | | 0.2762 | 3000 | 0.0001 | | 0.3223 | 3500 | 0.0001 | | 0.3683 | 4000 | 0.0001 | | 0.4144 | 4500 | 0.0 | | 0.4604 | 5000 | 0.0 | | 0.5064 | 5500 | 0.0 | | 0.5525 | 6000 | 0.0 | | 0.5985 | 6500 | 0.0 | | 0.6446 | 7000 | 0.0 | | 0.6906 | 7500 | 0.0 | | 0.7366 | 8000 | 0.0 | | 0.7827 | 8500 | 0.0 | | 0.8287 | 9000 | 0.0 | | 0.8748 | 9500 | 0.0 | | 0.9208 | 10000 | 0.0 | | 0.9669 | 10500 | 0.0 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.3.0 - Transformers: 4.46.2 - PyTorch: 2.5.1+cu121 - Accelerate: 1.1.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
BioNLP
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("dekshitha-k/drug-name-SBERT") # Run inference sentences = [ "['ampace at 5 mg/50 mg tabcap']", "[' Atecard M 5 mg/50 mg Tabcap']", "[' Laxnova Plus Syrup', ' Lacofin P Syrup', ' Softway Plus Syrup']", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 173,752 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------| | type | string | string | int | | details | <ul><li>min: 4 tokens</li><li>mean: 29.93 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 29.71 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:--------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------|:---------------| | <code>['unipod 50mg tablet dt', ' Xop 50mg Tablet DT', ' Grandcef 50mg Tablet DT']</code> | <code>[' C Pod 50mg Tablet DT', ' Codimol 50mg Tablet DT', ' Cepime O 50mg Tablet DT']</code> | <code>1</code> | | <code>['lenvima 10mg capsule', ' Lenvamed 10mg Capsule', ' Bdfoie 10mg Capsule']</code> | <code>[' Lenced 10mg Capsule', ' Lentykine 10 Capsule', ' Lenvatol 10mg Capsule']</code> | <code>1</code> | | <code>['brecold 5mg/325mg/5mg tablet', ' Snowzy Cold Tablet', ' Pcp Forte 5mg/325mg/5mg Tablet']</code> | <code>[' Ioncold-P 5mg/325mg/5mg Tablet', ' Candicold 5mg/325mg/5mg Tablet', ' Colvaxy Tablet']</code> | <code>1</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 1 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:-----:|:-------------:| | 0.0460 | 500 | 0.0384 | | 0.0921 | 1000 | 0.0045 | | 0.1381 | 1500 | 0.0009 | | 0.1842 | 2000 | 0.0003 | | 0.2302 | 2500 | 0.0002 | | 0.2762 | 3000 | 0.0001 | | 0.3223 | 3500 | 0.0001 | | 0.3683 | 4000 | 0.0001 | | 0.4144 | 4500 | 0.0 | | 0.4604 | 5000 | 0.0 | | 0.5064 | 5500 | 0.0 | | 0.5525 | 6000 | 0.0 | | 0.5985 | 6500 | 0.0 | | 0.6446 | 7000 | 0.0 | | 0.6906 | 7500 | 0.0 | | 0.7366 | 8000 | 0.0 | | 0.7827 | 8500 | 0.0 | | 0.8287 | 9000 | 0.0 | | 0.8748 | 9500 | 0.0 | | 0.9208 | 10000 | 0.0 | | 0.9669 | 10500 | 0.0 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.3.0 - Transformers: 4.46.2 - PyTorch: 2.5.1+cu121 - Accelerate: 1.1.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/all-MiniLM-L6-v2", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:173752", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "['mefzic p syrup', ' Ladex-P Syrup', ' Emfort P Syrup']", "sentences": ["[' Jolydep 40mg Tablet', ' Daxtin 40mg Tablet', ' Sylonex 40 Tablet']", "[' Arnost Oral Suspension', ' Lamenta Oral Suspension', ' Nilcold Oral Suspension']", "[' Mefnoc P Syrup', ' Mefnix P Syrup', ' Parafen Syrup']"]}, {"source_sentence": "['tozito 250mg tablet', ' Azicip 250 Tablet', ' Azikem 250mg Tablet']", "sentences": ["[' Fitzee 250 Tablet', ' Azax 250 Tablet', ' Zerox 250 Tablet']", "[' Syntran Capsule', ' Itrafree 100mg Capsule', ' Itromed 100 Capsule']", "[' Tussberry-N Syrup', ' New Brethese Syrup', ' Suntuss CF Syrup']"]}, {"source_sentence": "['beemox 250mg capsule', ' Amoxipen 250mg Capsule', ' Hipen A 250mg Capsule']", "sentences": ["[' Polymonax 1MIU Injection', ' Elistin 1MIU Injection', ' Hicoly 1Million IU Injection']", "[' Zylo-P 5% Solution', ' Drez 5% Solution', ' Betaseptic 5% Solution']", "[' Moxipal 250mg Capsule', ' Amoxil 250mg Capsule', ' Amoxible 250 Capsule']"]}, {"source_sentence": "['enhancin ds 400mg/57mg tablet', ' B Cillin CV Forte 400mg/57mg Tablet', ' Moxclav BD DS 457mg Tablet']", "sentences": ["[' Duet 400mg/57mg Tablet', ' Polyclav-DS Tablet', ' Xylomox CV 400mg/57mg Tablet']", "[' Endopace 1mg/ml Suspension', ' Nausidome 1mg/ml Suspension', ' Domtop 1mg/ml Suspension']", "[' X Aim P 0.05% Nasal Drops', ' Dolocold Junior Nasal Drops', ' Ingee P Nasal Drops']"]}, {"source_sentence": "['ampace at 5 mg/50 mg tabcap']", "sentences": ["[' Mastertaz 1000 mg/125 mg Injection', ' Fotaran TZ 1.25gm Injection', ' Combitum 1000mg/125mg Injection']", "[' Atecard M 5 mg/50 mg Tabcap']", "[' Laxnova Plus Syrup', ' Lacofin P Syrup', ' Softway Plus Syrup']"]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,013
adalbertojunior/c4ai-command-r-v01
adalbertojunior
text-generation
[ "transformers", "safetensors", "cohere", "text-generation", "custom_code", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-03-18T19:34:20Z
2024-03-21T15:04:22+00:00
15
0
--- language: - en - fr - de - es - it - pt - ja - ko - zh - ar library_name: transformers license: cc-by-nc-4.0 --- # Model Card for C4AI Command-R 🚨 **This model is non-quantized version of C4AI Command-R. You can find the quantized version of C4AI Command-R using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-v01-4bit)**. ## Model Summary C4AI Command-R is a research release of a 35 billion parameter highly performant generative model. Command-R is a large language model with open weights optimized for a variety of use cases including reasoning, summarization, and question answering. Command-R has the capability for multilingual generation evaluated in 10 languages and highly performant RAG capabilities. Developed by: Cohere and [Cohere For AI](https://cohere.for.ai) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-command-r-v01 - Model Size: 35 billion parameters - Context length: 128K **Use** ```python # pip install transformers from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-v01" tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True) # Format message with the command-r chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 8-bit precision** ```python # pip install transformers bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig bnb_config = BitsAndBytesConfig(load_in_8bit=True) model_id = "CohereForAI/c4ai-command-r-v01" tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, quantization_config=bnb_config) # Format message with the command-r chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 4-bit precision** ```python # pip install transformers bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig bnb_config = BitsAndBytesConfig(load_in_4bit=True) model_id = "CohereForAI/c4ai-command-r-v01" tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, quantization_config=bnb_config) # Format message with the command-r chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. **Languages covered**: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic. Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian. **Context length**: Command-R supports a context length of 128K. ### Tool use capabilities: Command-R has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation. Command-R’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command-R may use one of its supplied tools more than once. The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. We recommend including the `directly_answer` tool, but encourage experimentation. Comprehensive documentation and guides on prompting strategies for tool use will be provided shortly. <details> <summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-v01" tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. ## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Grounded Generation and RAG Capabilities: Command-R has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG).This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command-R’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command-R will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes . A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. The code snippet below shows a minimal working example on how to render a prompt, generate and parse a completion. Comprehensive documentation and guides on prompting strategies on grounded generation will be provided in follow-ups at a later stage. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-v01" tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ````<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'. Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Code Capabilities: Command-R has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]). ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 35 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try Chat: You can try Command-R chat in the playground [here](https://dashboard.cohere.com/playground/chat).
null
Non_BioNLP
# Model Card for C4AI Command-R 🚨 **This model is non-quantized version of C4AI Command-R. You can find the quantized version of C4AI Command-R using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-v01-4bit)**. ## Model Summary C4AI Command-R is a research release of a 35 billion parameter highly performant generative model. Command-R is a large language model with open weights optimized for a variety of use cases including reasoning, summarization, and question answering. Command-R has the capability for multilingual generation evaluated in 10 languages and highly performant RAG capabilities. Developed by: Cohere and [Cohere For AI](https://cohere.for.ai) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-command-r-v01 - Model Size: 35 billion parameters - Context length: 128K **Use** ```python # pip install transformers from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-v01" tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True) # Format message with the command-r chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 8-bit precision** ```python # pip install transformers bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig bnb_config = BitsAndBytesConfig(load_in_8bit=True) model_id = "CohereForAI/c4ai-command-r-v01" tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, quantization_config=bnb_config) # Format message with the command-r chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 4-bit precision** ```python # pip install transformers bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig bnb_config = BitsAndBytesConfig(load_in_4bit=True) model_id = "CohereForAI/c4ai-command-r-v01" tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, quantization_config=bnb_config) # Format message with the command-r chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. **Languages covered**: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic. Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian. **Context length**: Command-R supports a context length of 128K. ### Tool use capabilities: Command-R has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation. Command-R’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command-R may use one of its supplied tools more than once. The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. We recommend including the `directly_answer` tool, but encourage experimentation. Comprehensive documentation and guides on prompting strategies for tool use will be provided shortly. <details> <summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-v01" tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. ## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Grounded Generation and RAG Capabilities: Command-R has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG).This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command-R’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command-R will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes . A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. The code snippet below shows a minimal working example on how to render a prompt, generate and parse a completion. Comprehensive documentation and guides on prompting strategies on grounded generation will be provided in follow-ups at a later stage. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-v01" tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ````<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'. Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Code Capabilities: Command-R has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]). ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 35 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try Chat: You can try Command-R chat in the playground [here](https://dashboard.cohere.com/playground/chat).
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "library_name": "transformers", "license": "cc-by-nc-4.0"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
43,014
Social-Media-Fairness/Classifier-Bias-SG
Social-Media-Fairness
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-09-15T00:38:53Z
2023-09-19T21:46:49+00:00
44
2
--- license: openrail --- # Classifier-Bias-SG Model Card ## Model Details Classifier-Bias-SG is a proof of concept model designed to classify texts based on their bias levels. The model categorizes texts into 2 classes: "Biased", and "Non-Biased". ## Model Architecture The model is built upon the distilbert-base-uncased architecture and has been fine-tuned on a custom dataset for the specific task of bias detection. ## Dataset The model was trained on a BABE dataset containing news articles from various sources, annotated with one of the 2 bias levels. The dataset contains: - **Biased**: 1810 articles - **Non-Biased**: 1810 articles ## Training Procedure The model was trained using the Adam optimizer for 15 epochs. ## Performance On our validation set, the model achieved: - **Accuracy**: 78% - **F1 Score (Biased)**: 79% - **F1 Score (Non-Biased)**: 77% ## How to Use To use this model for text classification, use the following code: ```python from transformers import pipeline from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("Social-Media-Fairness/Classifier-Bias-SG") model = AutoModelForSequenceClassification.from_pretrained("Social-Media-Fairness/Classifier-Bias-SG") classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) result = classifier("Women are bad driver.") print(result) ``` Developed by Shardul Ghuge
null
Non_BioNLP
# Classifier-Bias-SG Model Card ## Model Details Classifier-Bias-SG is a proof of concept model designed to classify texts based on their bias levels. The model categorizes texts into 2 classes: "Biased", and "Non-Biased". ## Model Architecture The model is built upon the distilbert-base-uncased architecture and has been fine-tuned on a custom dataset for the specific task of bias detection. ## Dataset The model was trained on a BABE dataset containing news articles from various sources, annotated with one of the 2 bias levels. The dataset contains: - **Biased**: 1810 articles - **Non-Biased**: 1810 articles ## Training Procedure The model was trained using the Adam optimizer for 15 epochs. ## Performance On our validation set, the model achieved: - **Accuracy**: 78% - **F1 Score (Biased)**: 79% - **F1 Score (Non-Biased)**: 77% ## How to Use To use this model for text classification, use the following code: ```python from transformers import pipeline from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("Social-Media-Fairness/Classifier-Bias-SG") model = AutoModelForSequenceClassification.from_pretrained("Social-Media-Fairness/Classifier-Bias-SG") classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) result = classifier("Women are bad driver.") print(result) ``` Developed by Shardul Ghuge
{"license": "openrail"}
task
[ "TEXT_CLASSIFICATION" ]
43,015
context-mt/scat-marian-big-ctx4-cwd1-en-fr
context-mt
translation
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "arxiv:2310.01188", "contextual-mt", "document-mt", "translation", "en", "fr", "dataset:inseq/scat", "dataset:gsarti/iwslt2017_context", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-13T14:03:10Z
2024-03-10T12:42:48+00:00
16
0
--- datasets: - inseq/scat - gsarti/iwslt2017_context language: - en - fr license: apache-2.0 pipeline_tag: translation tags: - arxiv:2310.01188 - contextual-mt - document-mt --- *This model corresponds to the [`Helsinki-NLP/opus-mt-tc-big-en-fr`](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-fr) further trained on English-to-French translation on the [IWSLT17 dataset](https://huggingface.co/datasets/gsarti/iwslt2017_context) with context tags using the format: ``` Input: SOURCE_CTX <brk> SOURCE_CURR Output: TARGET_CURR ``` and further fine-tuned on the training split of [SCAT+](https://huggingface.co/datasets/inseq/scat). The model was used in the evaluation of the paper [Quantifying the Plausibility of Context Reliance in Neural Machine Translation](https://openreview.net/forum?id=XTHfNGI3zT) published at ICLR 2024, also available on [Arxiv](https://arxiv.org/abs/2310.01188). It can be used for English to French contextual and non-contextual translation.
null
Non_BioNLP
*This model corresponds to the [`Helsinki-NLP/opus-mt-tc-big-en-fr`](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-fr) further trained on English-to-French translation on the [IWSLT17 dataset](https://huggingface.co/datasets/gsarti/iwslt2017_context) with context tags using the format: ``` Input: SOURCE_CTX <brk> SOURCE_CURR Output: TARGET_CURR ``` and further fine-tuned on the training split of [SCAT+](https://huggingface.co/datasets/inseq/scat). The model was used in the evaluation of the paper [Quantifying the Plausibility of Context Reliance in Neural Machine Translation](https://openreview.net/forum?id=XTHfNGI3zT) published at ICLR 2024, also available on [Arxiv](https://arxiv.org/abs/2310.01188). It can be used for English to French contextual and non-contextual translation.
{"datasets": ["inseq/scat", "gsarti/iwslt2017_context"], "language": ["en", "fr"], "license": "apache-2.0", "pipeline_tag": "translation", "tags": ["arxiv:2310.01188", "contextual-mt", "document-mt"]}
task
[ "TRANSLATION" ]
43,016
gaudi/opus-mt-de-is-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-18T14:54:30Z
2024-10-18T23:55:32+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-de-is) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-de-is).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-de-is --output_dir ./ctranslate2/opus-mt-de-is-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-de-is-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-de-is-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-de-is-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-de-is) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-de-is) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-de-is).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-de-is --output_dir ./ctranslate2/opus-mt-de-is-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-de-is-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-de-is-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-de-is-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-de-is) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
43,017
fine-tuned/jinaai_jina-embeddings-v2-base-en-03092024-12h5-webapp
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Insurance", "Health", "Coverage", "Contributions", "Regulations", "custom_code", "de", "dataset:fine-tuned/jinaai_jina-embeddings-v2-base-en-03092024-12h5-webapp", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-09-03T08:03:20Z
2024-09-03T08:03:36+00:00
10
0
--- datasets: - fine-tuned/jinaai_jina-embeddings-v2-base-en-03092024-12h5-webapp - allenai/c4 language: - de license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Insurance - Health - Coverage - Contributions - Regulations --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: health insurance information ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jinaai_jina-embeddings-v2-base-en-03092024-12h5-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
null
Non_BioNLP
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: health insurance information ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jinaai_jina-embeddings-v2-base-en-03092024-12h5-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
{"datasets": ["fine-tuned/jinaai_jina-embeddings-v2-base-en-03092024-12h5-webapp", "allenai/c4"], "language": ["de"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Insurance", "Health", "Coverage", "Contributions", "Regulations"]}
task
[ "TEXT_CLASSIFICATION" ]
43,018
RichardErkhov/NousResearch_-_Hermes-2-Pro-Mistral-7B-4bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
2024-04-29T07:01:15Z
2024-04-29T07:08:10+00:00
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Hermes-2-Pro-Mistral-7B - bnb 4bits - Model creator: https://huggingface.co/NousResearch/ - Original model: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B/ Original model description: --- base_model: mistralai/Mistral-7B-v0.1 tags: - Mistral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode model-index: - name: Hermes-2-Pro-Mistral-7B results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. --- # Hermes 2 Pro - Mistral 7B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png) ## Model Description Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes! Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling ## Thank you to Latitude.sh for sponsoring compute for this model! ## Example Outputs ### Explaining Problems with Quantum Gravity: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/y_hPafyyvPb32efC5N4Es.png) ### Roleplaying as a Cosmic Super Intelligence: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/m6d6Saf7M6Luu9QnXYYAP.png) ### Detailing the Theory of AI Consciousness in JSON ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/MOLybxs7_dLjVys54imO3.png) # Prompt Format Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON. # Benchmarks ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5461|± |0.0145| | | |acc_norm|0.5623|± |0.0145| |arc_easy | 0|acc |0.8157|± |0.0080| | | |acc_norm|0.7934|± |0.0083| |boolq | 1|acc |0.8688|± |0.0059| |hellaswag | 0|acc |0.6272|± |0.0048| | | |acc_norm|0.8057|± |0.0039| |openbookqa | 0|acc |0.3360|± |0.0211| | | |acc_norm|0.4300|± |0.0222| |piqa | 0|acc |0.7954|± |0.0094| | | |acc_norm|0.7998|± |0.0093| |winogrande | 0|acc |0.7230|± |0.0126| ``` Average: 71.19 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2047|± |0.0254| | | |acc_norm|0.2283|± |0.0264| |agieval_logiqa_en | 0|acc |0.3779|± |0.0190| | | |acc_norm|0.3932|± |0.0192| |agieval_lsat_ar | 0|acc |0.2652|± |0.0292| | | |acc_norm|0.2522|± |0.0287| |agieval_lsat_lr | 0|acc |0.5216|± |0.0221| | | |acc_norm|0.5137|± |0.0222| |agieval_lsat_rc | 0|acc |0.5911|± |0.0300| | | |acc_norm|0.5836|± |0.0301| |agieval_sat_en | 0|acc |0.7427|± |0.0305| | | |acc_norm|0.7184|± |0.0314| |agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348| | | |acc_norm|0.4466|± |0.0347| |agieval_sat_math | 0|acc |0.3818|± |0.0328| | | |acc_norm|0.3545|± |0.0323| ``` Average: 44.52 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361| |bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214| | | |exact_str_match |0.2256|± |0.0221| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215| |bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105| |bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142| |bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289| ``` Average: 41.65 ## TruthfulQA: ``` | Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.4100|± |0.0172| | | |mc2 |0.5911|± |0.0158| ``` # Function Calling Evaluations We worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolveable ones, and generating a second eval dataset for JSON mode. ## Function Calling Accuracy: 91% ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/XF3Zii4-QhE2yjWwHr_v4.png) ## JSON Mode Accuracy: 84% ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/8H2iyjh5wyP2FtLq2LCed.png) Run the evaluator yourself using @interstellarninja's codebase here: https://github.com/interstellarninja/function-calling-eval You can find the evaluation datasets here: https://huggingface.co/datasets/NousResearch/func-calling-eval https://huggingface.co/datasets/NousResearch/json-mode-eval # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import LlamaTokenizer, MistralForCausalLM import bitsandbytes, flash_attn tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Mistral-7B', trust_remote_code=True) model = MistralForCausalLM.from_pretrained( "NousResearch/Hermes-2-Pro-Mistral-7B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF # How to cite: ```bibtext @misc{Hermes-2-Pro-Mistral-7B, url={[https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B]https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)}, title={Hermes-2-Pro-Mistral-7B}, author={"interstellarninja", "Teknium", "theemozilla", "karan4d", "huemin_art"} } ```
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Hermes-2-Pro-Mistral-7B - bnb 4bits - Model creator: https://huggingface.co/NousResearch/ - Original model: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B/ Original model description: --- base_model: mistralai/Mistral-7B-v0.1 tags: - Mistral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode model-index: - name: Hermes-2-Pro-Mistral-7B results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. --- # Hermes 2 Pro - Mistral 7B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png) ## Model Description Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes! Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling ## Thank you to Latitude.sh for sponsoring compute for this model! ## Example Outputs ### Explaining Problems with Quantum Gravity: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/y_hPafyyvPb32efC5N4Es.png) ### Roleplaying as a Cosmic Super Intelligence: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/m6d6Saf7M6Luu9QnXYYAP.png) ### Detailing the Theory of AI Consciousness in JSON ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/MOLybxs7_dLjVys54imO3.png) # Prompt Format Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON. # Benchmarks ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5461|± |0.0145| | | |acc_norm|0.5623|± |0.0145| |arc_easy | 0|acc |0.8157|± |0.0080| | | |acc_norm|0.7934|± |0.0083| |boolq | 1|acc |0.8688|± |0.0059| |hellaswag | 0|acc |0.6272|± |0.0048| | | |acc_norm|0.8057|± |0.0039| |openbookqa | 0|acc |0.3360|± |0.0211| | | |acc_norm|0.4300|± |0.0222| |piqa | 0|acc |0.7954|± |0.0094| | | |acc_norm|0.7998|± |0.0093| |winogrande | 0|acc |0.7230|± |0.0126| ``` Average: 71.19 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2047|± |0.0254| | | |acc_norm|0.2283|± |0.0264| |agieval_logiqa_en | 0|acc |0.3779|± |0.0190| | | |acc_norm|0.3932|± |0.0192| |agieval_lsat_ar | 0|acc |0.2652|± |0.0292| | | |acc_norm|0.2522|± |0.0287| |agieval_lsat_lr | 0|acc |0.5216|± |0.0221| | | |acc_norm|0.5137|± |0.0222| |agieval_lsat_rc | 0|acc |0.5911|± |0.0300| | | |acc_norm|0.5836|± |0.0301| |agieval_sat_en | 0|acc |0.7427|± |0.0305| | | |acc_norm|0.7184|± |0.0314| |agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348| | | |acc_norm|0.4466|± |0.0347| |agieval_sat_math | 0|acc |0.3818|± |0.0328| | | |acc_norm|0.3545|± |0.0323| ``` Average: 44.52 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361| |bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214| | | |exact_str_match |0.2256|± |0.0221| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215| |bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105| |bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142| |bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289| ``` Average: 41.65 ## TruthfulQA: ``` | Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.4100|± |0.0172| | | |mc2 |0.5911|± |0.0158| ``` # Function Calling Evaluations We worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolveable ones, and generating a second eval dataset for JSON mode. ## Function Calling Accuracy: 91% ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/XF3Zii4-QhE2yjWwHr_v4.png) ## JSON Mode Accuracy: 84% ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/8H2iyjh5wyP2FtLq2LCed.png) Run the evaluator yourself using @interstellarninja's codebase here: https://github.com/interstellarninja/function-calling-eval You can find the evaluation datasets here: https://huggingface.co/datasets/NousResearch/func-calling-eval https://huggingface.co/datasets/NousResearch/json-mode-eval # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import LlamaTokenizer, MistralForCausalLM import bitsandbytes, flash_attn tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Mistral-7B', trust_remote_code=True) model = MistralForCausalLM.from_pretrained( "NousResearch/Hermes-2-Pro-Mistral-7B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF # How to cite: ```bibtext @misc{Hermes-2-Pro-Mistral-7B, url={[https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B]https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)}, title={Hermes-2-Pro-Mistral-7B}, author={"interstellarninja", "Teknium", "theemozilla", "karan4d", "huemin_art"} } ```
{}
task
[ "TRANSLATION" ]
43,019
tmnam20/mdeberta-v3-base-wnli-10
tmnam20
text-classification
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "en", "dataset:tmnam20/VieGLUE", "base_model:microsoft/mdeberta-v3-base", "base_model:finetune:microsoft/mdeberta-v3-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-01-16T09:10:29Z
2024-01-16T09:12:42+00:00
4
0
--- base_model: microsoft/mdeberta-v3-base datasets: - tmnam20/VieGLUE language: - en license: mit metrics: - accuracy tags: - generated_from_trainer model-index: - name: mdeberta-v3-base-wnli-10 results: - task: type: text-classification name: Text Classification dataset: name: tmnam20/VieGLUE/WNLI type: tmnam20/VieGLUE config: wnli split: validation args: wnli metrics: - type: accuracy value: 0.5633802816901409 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-v3-base-wnli-10 This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6899 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 10 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-v3-base-wnli-10 This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6899 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 10 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
{"base_model": "microsoft/mdeberta-v3-base", "datasets": ["tmnam20/VieGLUE"], "language": ["en"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mdeberta-v3-base-wnli-10", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tmnam20/VieGLUE/WNLI", "type": "tmnam20/VieGLUE", "config": "wnli", "split": "validation", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,020
morenolq/spotify-podcast-advertising-classification
morenolq
text-classification
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "classification", "en", "dataset:spotify-podcast-dataset", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-04-05T14:36:19Z
2023-03-17T06:21:48+00:00
57
13
--- datasets: - spotify-podcast-dataset language: en tags: - bert - classification - pytorch pipeline: - text-classification widget: - text: __START__ [SEP] This is the first podcast on natural language processing applied to spoken language. - text: This is the first podcast on natural language processing applied to spoken language. [SEP] You can find us on https://twitter.com/PodcastExampleClassifier. - text: You can find us on https://twitter.com/PodcastExampleClassifier. [SEP] You can also subscribe to our newsletter https://newsletter.com/PodcastExampleClassifier. --- **General Information** This is a `bert-base-cased`, binary classification model, fine-tuned to classify a given sentence as containing advertising content or not. It leverages previous-sentence context to make more accurate predictions. The model is used in the paper 'Leveraging multimodal content for podcast summarization' published at ACM SAC 2022. **Usage:** ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained('morenolq/spotify-podcast-advertising-classification') tokenizer = AutoTokenizer.from_pretrained('morenolq/spotify-podcast-advertising-classification') desc_sentences = ["Sentence 1", "Sentence 2", "Sentence 3"] for i, s in enumerate(desc_sentences): if i==0: context = "__START__" else: context = desc_sentences[i-1] out = tokenizer(context, s, padding = "max_length", max_length = 256, truncation=True, return_attention_mask=True, return_tensors = 'pt') outputs = model(**out) print (f"{s},{outputs}") ``` The manually annotated data, used for model fine-tuning are available [here](https://github.com/MorenoLaQuatra/MATeR/blob/main/description_sentences_classification.tsv) Hereafter is the classification report of the model evaluation on the test split: ``` precision recall f1-score support 0 0.95 0.93 0.94 256 1 0.88 0.91 0.89 140 accuracy 0.92 396 macro avg 0.91 0.92 0.92 396 weighted avg 0.92 0.92 0.92 396 ``` If you find it useful, please cite the following paper: ```bibtex @inproceedings{10.1145/3477314.3507106, author = {Vaiani, Lorenzo and La Quatra, Moreno and Cagliero, Luca and Garza, Paolo}, title = {Leveraging Multimodal Content for Podcast Summarization}, year = {2022}, isbn = {9781450387132}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3477314.3507106}, doi = {10.1145/3477314.3507106}, booktitle = {Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing}, pages = {863–870}, numpages = {8}, keywords = {multimodal learning, multimodal features fusion, extractive summarization, deep learning, podcast summarization}, location = {Virtual Event}, series = {SAC '22} } ```
null
Non_BioNLP
**General Information** This is a `bert-base-cased`, binary classification model, fine-tuned to classify a given sentence as containing advertising content or not. It leverages previous-sentence context to make more accurate predictions. The model is used in the paper 'Leveraging multimodal content for podcast summarization' published at ACM SAC 2022. **Usage:** ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained('morenolq/spotify-podcast-advertising-classification') tokenizer = AutoTokenizer.from_pretrained('morenolq/spotify-podcast-advertising-classification') desc_sentences = ["Sentence 1", "Sentence 2", "Sentence 3"] for i, s in enumerate(desc_sentences): if i==0: context = "__START__" else: context = desc_sentences[i-1] out = tokenizer(context, s, padding = "max_length", max_length = 256, truncation=True, return_attention_mask=True, return_tensors = 'pt') outputs = model(**out) print (f"{s},{outputs}") ``` The manually annotated data, used for model fine-tuning are available [here](https://github.com/MorenoLaQuatra/MATeR/blob/main/description_sentences_classification.tsv) Hereafter is the classification report of the model evaluation on the test split: ``` precision recall f1-score support 0 0.95 0.93 0.94 256 1 0.88 0.91 0.89 140 accuracy 0.92 396 macro avg 0.91 0.92 0.92 396 weighted avg 0.92 0.92 0.92 396 ``` If you find it useful, please cite the following paper: ```bibtex @inproceedings{10.1145/3477314.3507106, author = {Vaiani, Lorenzo and La Quatra, Moreno and Cagliero, Luca and Garza, Paolo}, title = {Leveraging Multimodal Content for Podcast Summarization}, year = {2022}, isbn = {9781450387132}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3477314.3507106}, doi = {10.1145/3477314.3507106}, booktitle = {Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing}, pages = {863–870}, numpages = {8}, keywords = {multimodal learning, multimodal features fusion, extractive summarization, deep learning, podcast summarization}, location = {Virtual Event}, series = {SAC '22} } ```
{"datasets": ["spotify-podcast-dataset"], "language": "en", "tags": ["bert", "classification", "pytorch"], "pipeline": ["text-classification"], "widget": [{"text": "__START__ [SEP] This is the first podcast on natural language processing applied to spoken language."}, {"text": "This is the first podcast on natural language processing applied to spoken language. [SEP] You can find us on https://twitter.com/PodcastExampleClassifier."}, {"text": "You can find us on https://twitter.com/PodcastExampleClassifier. [SEP] You can also subscribe to our newsletter https://newsletter.com/PodcastExampleClassifier."}]}
task
[ "SUMMARIZATION" ]
43,022
ltg/flan-t5-definition-en-large
ltg
text2text-generation
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "definition-modeling", "en", "dataset:marksverdhei/wordnet-definitions-en-2021", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-05-21T14:28:40Z
2024-09-19T17:53:44+00:00
59
1
--- datasets: - marksverdhei/wordnet-definitions-en-2021 language: - en license: cc-by-sa-4.0 metrics: - rouge, bleu, bert-f1 tags: - text2text-generation - definition-modeling widget: - text: He ate a sweet apple. What is the definition of apple? example_title: Definition generation - text: The paper contains a number of original ideas about color perception. What is the definition of original? example_title: Definition generation model-index: - name: flan-t5-definition-en-large results: [] --- # FLAN-T5-Definition Large This model is a version of [FLAN-T5 Large](https://huggingface.co/google/flan-t5-large) finetuned on a dataset of English definitions and usage examples. It generates definitions of English words in context. Its input is the usage example and the instruction question "What is the definiton of TARGET_WORD?" This project is a collaboration between the [Dialogue Modelling Group](https://dmg-illc.github.io/dmg/) at the University of Amsterdam and the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) at the University of Oslo. ## Sizes: - [FLAN-T5-Definition Base (250M parameters)](https://huggingface.co/ltg/flan-t5-definition-en-base) - [FLAN-T5-Definition Large (780M parameters)](https://huggingface.co/ltg/flan-t5-definition-en-large) - [FLAN-T5-Definition XL (3B parameters)](https://huggingface.co/ltg/flan-t5-definition-en-xl) ## Model description See details in the paper [`Interpretable Word Sense Representations via Definition Generation: The Case of Semantic Change Analysis`](https://aclanthology.org/2023.acl-long.176/) (ACL'2023) by Mario Giulianelli, Iris Luden, Raquel Fernandez and Andrey Kutuzov. ## Intended uses & limitations The model is intended for research purposes, as a source of contextualized dictionary-like lexical definitions. The fine-tuning datasets were limited to English. Although the original FLAN-T5 is a multilingual model, we did not thoroughly evaluate its ability to generate definitions in languages other than English. Generated definitions can contain all sorts of biases and stereotypes, stemming from the underlying language model. ## Training and evaluation data Three datasets were used to fine-tune the model: - *WordNet* ([Ishiwatari et al., NAACL 2019](https://aclanthology.org/N19-1350/)), also [available on HF](https://huggingface.co/datasets/marksverdhei/wordnet-definitions-en-2021) - *Oxford dictionary or CHA* ([Gadetsky et al., ACL 2018](https://aclanthology.org/P18-2043/)) - English subset of *CodWoE* ([Mickus et al., SemEval 2022](https://aclanthology.org/2022.semeval-1.1/)) FLAN-T5-Definition Large achieves the following results on the WordNet test set: - BLEU: 14.37 - ROUGE-L: 33.74 - BERT-F1: 88.21 FLAN-T5-Definition Large achieves the following results on the Oxford dictionary test set: - BLEU: 10.90 - ROUGE-L: 30.05 - BERT-F1: 87.44 ## Training procedure FLAN-T5 Base was fine-tuned in a sequence-to-sequence mode on examples of contextualized dictionary definitions. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.1769 | 1.0 | 2740 | 1.9050 | 28.7222 | 9.1873 | 26.6888 | 26.6937 | 11.3429 | | 1.9408 | 2.0 | 5480 | 1.8151 | 29.8799 | 10.2327 | 27.7947 | 27.8044 | 11.4165 | | 1.8124 | 3.0 | 8220 | 1.7608 | 30.9845 | 10.9982 | 28.8059 | 28.8131 | 11.5310 | | 1.7118 | 4.0 | 10960 | 1.7229 | 31.6943 | 11.7412 | 29.4967 | 29.5319 | 11.7037 | | 1.6286 | 5.0 | 13700 | 1.6937 | 32.5839 | 12.2431 | 30.1799 | 30.206 | 11.7784 | | 1.5597 | 6.0 | 16440 | 1.6748 | 32.9915 | 12.8514 | 30.7016 | 30.7145 | 11.5974 | | 1.4982 | 7.0 | 19180 | 1.6578 | 33.2157 | 13.1389 | 30.9428 | 30.9519 | 11.3580 | | 1.4468 | 8.0 | 21920 | 1.6473 | 33.6146 | 13.5922 | 31.3001 | 31.3235 | 11.5724 | | 1.4022 | 9.0 | 24660 | 1.6384 | 34.1711 | 14.1117 | 31.7951 | 31.8066 | 11.7389 | | 1.364 | 10.0 | 27400 | 1.6337 | 34.5489 | 14.5012 | 32.1329 | 32.1446 | 11.6659 | | 1.3321 | 11.0 | 30140 | 1.6291 | 34.7133 | 14.7297 | 32.3042 | 32.314 | 11.8003 | | 1.3054 | 12.0 | 32880 | 1.6267 | 34.9411 | 15.0282 | 32.5335 | 32.5451 | 11.7619 | | 1.2845 | 13.0 | 35620 | 1.6262 | 35.1648 | 15.2154 | 32.7387 | 32.742 | 11.8317 | | 1.2699 | 14.0 | 38360 | 1.6257 | 35.2849 | 15.3109 | 32.8508 | 32.853 | 11.8168 | | 1.2595 | 15.0 | 41100 | 1.6273 | 35.2224 | 15.2781 | 32.7718 | 32.7826 | 11.7971 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+rocm5.1.1 - Datasets 2.4.0 - Tokenizers 0.12.1 ## Citation ``` @inproceedings{giulianelli-etal-2023-interpretable, title = "Interpretable Word Sense Representations via Definition Generation: The Case of Semantic Change Analysis", author = "Giulianelli, Mario and Luden, Iris and Fernandez, Raquel and Kutuzov, Andrey", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.176", doi = "10.18653/v1/2023.acl-long.176", pages = "3130--3148", abstract = "We propose using automatically generated natural language definitions of contextualised word usages as interpretable word and word sense representations.Given a collection of usage examples for a target word, and the corresponding data-driven usage clusters (i.e., word senses), a definition is generated for each usage with a specialised Flan-T5 language model, and the most prototypical definition in a usage cluster is chosen as the sense label. We demonstrate how the resulting sense labels can make existing approaches to semantic change analysis more interpretable, and how they can allow users {---} historical linguists, lexicographers, or social scientists {---} to explore and intuitively explain diachronic trajectories of word meaning. Semantic change analysis is only one of many possible applications of the {`}definitions as representations{'} paradigm. Beyond being human-readable, contextualised definitions also outperform token or usage sentence embeddings in word-in-context semantic similarity judgements, making them a new promising type of lexical representation for NLP.", } ```
null
Non_BioNLP
# FLAN-T5-Definition Large This model is a version of [FLAN-T5 Large](https://huggingface.co/google/flan-t5-large) finetuned on a dataset of English definitions and usage examples. It generates definitions of English words in context. Its input is the usage example and the instruction question "What is the definiton of TARGET_WORD?" This project is a collaboration between the [Dialogue Modelling Group](https://dmg-illc.github.io/dmg/) at the University of Amsterdam and the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) at the University of Oslo. ## Sizes: - [FLAN-T5-Definition Base (250M parameters)](https://huggingface.co/ltg/flan-t5-definition-en-base) - [FLAN-T5-Definition Large (780M parameters)](https://huggingface.co/ltg/flan-t5-definition-en-large) - [FLAN-T5-Definition XL (3B parameters)](https://huggingface.co/ltg/flan-t5-definition-en-xl) ## Model description See details in the paper [`Interpretable Word Sense Representations via Definition Generation: The Case of Semantic Change Analysis`](https://aclanthology.org/2023.acl-long.176/) (ACL'2023) by Mario Giulianelli, Iris Luden, Raquel Fernandez and Andrey Kutuzov. ## Intended uses & limitations The model is intended for research purposes, as a source of contextualized dictionary-like lexical definitions. The fine-tuning datasets were limited to English. Although the original FLAN-T5 is a multilingual model, we did not thoroughly evaluate its ability to generate definitions in languages other than English. Generated definitions can contain all sorts of biases and stereotypes, stemming from the underlying language model. ## Training and evaluation data Three datasets were used to fine-tune the model: - *WordNet* ([Ishiwatari et al., NAACL 2019](https://aclanthology.org/N19-1350/)), also [available on HF](https://huggingface.co/datasets/marksverdhei/wordnet-definitions-en-2021) - *Oxford dictionary or CHA* ([Gadetsky et al., ACL 2018](https://aclanthology.org/P18-2043/)) - English subset of *CodWoE* ([Mickus et al., SemEval 2022](https://aclanthology.org/2022.semeval-1.1/)) FLAN-T5-Definition Large achieves the following results on the WordNet test set: - BLEU: 14.37 - ROUGE-L: 33.74 - BERT-F1: 88.21 FLAN-T5-Definition Large achieves the following results on the Oxford dictionary test set: - BLEU: 10.90 - ROUGE-L: 30.05 - BERT-F1: 87.44 ## Training procedure FLAN-T5 Base was fine-tuned in a sequence-to-sequence mode on examples of contextualized dictionary definitions. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.1769 | 1.0 | 2740 | 1.9050 | 28.7222 | 9.1873 | 26.6888 | 26.6937 | 11.3429 | | 1.9408 | 2.0 | 5480 | 1.8151 | 29.8799 | 10.2327 | 27.7947 | 27.8044 | 11.4165 | | 1.8124 | 3.0 | 8220 | 1.7608 | 30.9845 | 10.9982 | 28.8059 | 28.8131 | 11.5310 | | 1.7118 | 4.0 | 10960 | 1.7229 | 31.6943 | 11.7412 | 29.4967 | 29.5319 | 11.7037 | | 1.6286 | 5.0 | 13700 | 1.6937 | 32.5839 | 12.2431 | 30.1799 | 30.206 | 11.7784 | | 1.5597 | 6.0 | 16440 | 1.6748 | 32.9915 | 12.8514 | 30.7016 | 30.7145 | 11.5974 | | 1.4982 | 7.0 | 19180 | 1.6578 | 33.2157 | 13.1389 | 30.9428 | 30.9519 | 11.3580 | | 1.4468 | 8.0 | 21920 | 1.6473 | 33.6146 | 13.5922 | 31.3001 | 31.3235 | 11.5724 | | 1.4022 | 9.0 | 24660 | 1.6384 | 34.1711 | 14.1117 | 31.7951 | 31.8066 | 11.7389 | | 1.364 | 10.0 | 27400 | 1.6337 | 34.5489 | 14.5012 | 32.1329 | 32.1446 | 11.6659 | | 1.3321 | 11.0 | 30140 | 1.6291 | 34.7133 | 14.7297 | 32.3042 | 32.314 | 11.8003 | | 1.3054 | 12.0 | 32880 | 1.6267 | 34.9411 | 15.0282 | 32.5335 | 32.5451 | 11.7619 | | 1.2845 | 13.0 | 35620 | 1.6262 | 35.1648 | 15.2154 | 32.7387 | 32.742 | 11.8317 | | 1.2699 | 14.0 | 38360 | 1.6257 | 35.2849 | 15.3109 | 32.8508 | 32.853 | 11.8168 | | 1.2595 | 15.0 | 41100 | 1.6273 | 35.2224 | 15.2781 | 32.7718 | 32.7826 | 11.7971 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+rocm5.1.1 - Datasets 2.4.0 - Tokenizers 0.12.1 ## Citation ``` @inproceedings{giulianelli-etal-2023-interpretable, title = "Interpretable Word Sense Representations via Definition Generation: The Case of Semantic Change Analysis", author = "Giulianelli, Mario and Luden, Iris and Fernandez, Raquel and Kutuzov, Andrey", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.176", doi = "10.18653/v1/2023.acl-long.176", pages = "3130--3148", abstract = "We propose using automatically generated natural language definitions of contextualised word usages as interpretable word and word sense representations.Given a collection of usage examples for a target word, and the corresponding data-driven usage clusters (i.e., word senses), a definition is generated for each usage with a specialised Flan-T5 language model, and the most prototypical definition in a usage cluster is chosen as the sense label. We demonstrate how the resulting sense labels can make existing approaches to semantic change analysis more interpretable, and how they can allow users {---} historical linguists, lexicographers, or social scientists {---} to explore and intuitively explain diachronic trajectories of word meaning. Semantic change analysis is only one of many possible applications of the {`}definitions as representations{'} paradigm. Beyond being human-readable, contextualised definitions also outperform token or usage sentence embeddings in word-in-context semantic similarity judgements, making them a new promising type of lexical representation for NLP.", } ```
{"datasets": ["marksverdhei/wordnet-definitions-en-2021"], "language": ["en"], "license": "cc-by-sa-4.0", "metrics": ["rouge, bleu, bert-f1"], "tags": ["text2text-generation", "definition-modeling"], "widget": [{"text": "He ate a sweet apple. What is the definition of apple?", "example_title": "Definition generation"}, {"text": "The paper contains a number of original ideas about color perception. What is the definition of original?", "example_title": "Definition generation"}], "model-index": [{"name": "flan-t5-definition-en-large", "results": []}]}
task
[ "SEMANTIC_SIMILARITY" ]
43,023
AmelieSchreiber/esm2_t12_35M_ptm_qlora_2100K
AmelieSchreiber
null
[ "peft", "biology", "protein language model", "ESM-2", "post translational modification", "en", "license:mit", "region:us" ]
2023-10-16T00:29:59Z
2023-10-16T00:45:10+00:00
5
0
--- language: - en library_name: peft license: mit tags: - biology - protein language model - ESM-2 - post translational modification --- # ESM-2 QLoRA for Post Translational Modification ```python "eval_loss": 0.28556737303733826, "eval_accuracy": 0.9762591331328516, "eval_auc": 0.8833701456278934, "eval_f1": 0.1542571794425746, "eval_mcc": 0.25511446421928063, "eval_precision": 0.08547382057474782, "eval_recall": 0.7899691877651231, ``` ## Using the Model ```python from transformers import AutoModelForTokenClassification, AutoTokenizer from peft import PeftModel import torch # Path to the saved LoRA model model_path = "AmelieSchreiber/esm2_t12_35M_ptm_qlora_2100K" # ESM2 base model base_model_path = "facebook/esm2_t12_35M_UR50D" # Load the model base_model = AutoModelForTokenClassification.from_pretrained(base_model_path) loaded_model = PeftModel.from_pretrained(base_model, model_path) # Ensure the model is in evaluation mode loaded_model.eval() # Load the tokenizer loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path) # Protein sequence for inference protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence # Tokenize the sequence inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length') # Run the model with torch.no_grad(): logits = loaded_model(**inputs).logits # Get predictions tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens predictions = torch.argmax(logits, dim=2) # Define labels id2label = { 0: "No ptm site", 1: "ptm site" } # Print the predicted labels for each token for token, prediction in zip(tokens, predictions[0].numpy()): if token not in ['<pad>', '<cls>', '<eos>']: print((token, id2label[prediction])) ```
null
Non_BioNLP
# ESM-2 QLoRA for Post Translational Modification ```python "eval_loss": 0.28556737303733826, "eval_accuracy": 0.9762591331328516, "eval_auc": 0.8833701456278934, "eval_f1": 0.1542571794425746, "eval_mcc": 0.25511446421928063, "eval_precision": 0.08547382057474782, "eval_recall": 0.7899691877651231, ``` ## Using the Model ```python from transformers import AutoModelForTokenClassification, AutoTokenizer from peft import PeftModel import torch # Path to the saved LoRA model model_path = "AmelieSchreiber/esm2_t12_35M_ptm_qlora_2100K" # ESM2 base model base_model_path = "facebook/esm2_t12_35M_UR50D" # Load the model base_model = AutoModelForTokenClassification.from_pretrained(base_model_path) loaded_model = PeftModel.from_pretrained(base_model, model_path) # Ensure the model is in evaluation mode loaded_model.eval() # Load the tokenizer loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path) # Protein sequence for inference protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence # Tokenize the sequence inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length') # Run the model with torch.no_grad(): logits = loaded_model(**inputs).logits # Get predictions tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens predictions = torch.argmax(logits, dim=2) # Define labels id2label = { 0: "No ptm site", 1: "ptm site" } # Print the predicted labels for each token for token, prediction in zip(tokens, predictions[0].numpy()): if token not in ['<pad>', '<cls>', '<eos>']: print((token, id2label[prediction])) ```
{"language": ["en"], "library_name": "peft", "license": "mit", "tags": ["biology", "protein language model", "ESM-2", "post translational modification"]}
task
[ "TRANSLATION" ]
43,024
Iker/Llama-3-Instruct-Neurona-8b
Iker
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "synthetic", "conversational", "es", "en", "dataset:pinzhenchen/alpaca-cleaned-es", "dataset:Danielbrdz/Barcenas-Economia", "dataset:HiTZ/casimedicos-exp", "dataset:somosnlp/coser_resumenes", "dataset:csebuetnlp/CrossSum", "dataset:Iker/Document-Translation-en-es", "dataset:somosnlp/es-inclusive-language-it", "dataset:FreedomIntelligence/evol-instruct-spanish", "dataset:glaiveai/glaive-code-assistant-v3", "dataset:glaiveai/glaive-function-calling-v2", "dataset:Iker/InstructTranslation-EN-ES", "dataset:somosnlp/lenguaje-claro-dataset", "dataset:somosnlp/LingComp_QA", "dataset:bltlab/lr-sum", "dataset:Iker/NoticIA", "dataset:xaviviro/oasst2_es_gpt", "dataset:teknium/OpenHermes-2.5", "dataset:Iker/OpenHermes-2.5-Spanish", "dataset:Helsinki-NLP/opus-100", "dataset:projecte-aina/RAG_Multilingual", "dataset:sem_eval_2018_task_1", "dataset:davidstap/ted_talks", "dataset:HiTZ/This-is-not-a-dataset", "dataset:wikipedia", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-06-02T07:59:01Z
2024-06-02T08:04:03+00:00
8
3
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct datasets: - pinzhenchen/alpaca-cleaned-es - Danielbrdz/Barcenas-Economia - HiTZ/casimedicos-exp - somosnlp/coser_resumenes - csebuetnlp/CrossSum - Iker/Document-Translation-en-es - somosnlp/es-inclusive-language-it - FreedomIntelligence/evol-instruct-spanish - glaiveai/glaive-code-assistant-v3 - glaiveai/glaive-function-calling-v2 - Iker/InstructTranslation-EN-ES - somosnlp/lenguaje-claro-dataset - somosnlp/LingComp_QA - bltlab/lr-sum - Iker/NoticIA - xaviviro/oasst2_es_gpt - teknium/OpenHermes-2.5 - Iker/OpenHermes-2.5-Spanish - Helsinki-NLP/opus-100 - projecte-aina/RAG_Multilingual - sem_eval_2018_task_1 - davidstap/ted_talks - HiTZ/This-is-not-a-dataset - wikipedia language: - es - en library_name: transformers license: llama3 pipeline_tag: text-generation tags: - synthetic --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/614a1ebb8f82f1df64d55126/2i_CasoeJTgQPNoBIfA8E.jpeg) # Neurona 8B Beta: Un Modelo de Lenguage en Español > Esta es una versión preliminar del dataset card. El modelo está en desarrollo y no es la versión final. Si quieres saber más sobre este modelo, escribe a [email protected] Neurona 8B es un modelo de lenguaje en Español. Esta es la primera iteración y un experimento para poner a punto los scripts y la infraestructura. Neurona 8B ha sido entrenado con los siguiente datasets. No en todos los casos se ha usado el dataset completo - [pinzhenchen/alpaca-cleaned-es](https://huggingface.co/datasets/pinzhenchen/alpaca-cleaned-es) - [Danielbrdz/Barcenas-Economia](https://huggingface.co/datasets/Danielbrdz/Barcenas-Economia) - [HiTZ/casimedicos-exp](https://huggingface.co/datasets/HiTZ/casimedicos-exp) - [somosnlp/coser_resumenes](https://huggingface.co/datasets/somosnlp/coser_resumenes) - [csebuetnlp/CrossSum en + es](https://huggingface.co/datasets/csebuetnlp/CrossSum) - [Iker/Document-Translation-en-es](https://huggingface.co/datasets/Iker/Document-Translation-en-es) - [somosnlp/es-inclusive-language-it](https://huggingface.co/datasets/somosnlp/es-inclusive-language-it) - [FreedomIntelligence/evol-instruct-spanish](https://huggingface.co/datasets/FreedomIntelligence/evol-instruct-spanish) - [glaiveai/glaive-code-assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3) - [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) - [Iker/InstructTranslation-EN-ES](https://huggingface.co/datasets/Iker/InstructTranslation-EN-ES) - [somosnlp/lenguaje-claro-dataset](https://huggingface.co/datasets/somosnlp/lenguaje-claro-dataset) - [somosnlp/LingComp_QA](https://huggingface.co/datasets/somosnlp/LingComp_QA) - [bltlab/lr-sum](https://huggingface.co/datasets/bltlab/lr-sum) - [Iker/NoticIA](https://huggingface.co/datasets/Iker/NoticIA) - [xaviviro/oasst2_es_gpt](https://huggingface.co/datasets/xaviviro/oasst2_es_gpt) - [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) - [Iker/OpenHermes-2.5-Spanish](https://huggingface.co/datasets/Iker/OpenHermes-2.5-Spanish) - [Helsinki-NLP/opus-100 en es](https://huggingface.co/datasets/Helsinki-NLP/opus-100) - [projecte-aina/RAG_Multilingual](https://huggingface.co/datasets/projecte-aina/RAG_Multilingual) - [sem_eval_2018_task_1](https://huggingface.co/datasets/sem_eval_2018_task_1) - [davidstap/ted_talks](https://huggingface.co/datasets/davidstap/ted_talks) - [HiTZ/This-is-not-a-dataset](https://huggingface.co/datasets/HiTZ/This-is-not-a-dataset) - [wikipedia es](https://huggingface.co/datasets/wikipedia) Esta mezcla de datasets en Inglés y Español, permite al modelo adquirir diferentes capacidades, como RAG, function calling, code assistant, question answering, summarization... tanto en Inglés como en Español. # Entrenamiento Este modelo se ha entrado usando 4xNvidia A100 80Gb y axolotl [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) Esta es la configuración usada ```yaml base_model: meta-llama/Meta-Llama-3-8B-Instruct model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer is_falcon_derived_model: is_llama_derived_model: is_qwen_derived_model: is_mistral_derived_model: load_in_8bit: false load_in_4bit: false strict: false device_map: null datasets: - path: /ikerlariak/igarcia945/InstructDatasets/alpaca-cleaned-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/Barcenas-Economia.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/casimedicos.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/coser_resumene.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/CrossSum_en.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/CrossSum_es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/Document-Translation-en-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/es-inclusive-language.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/evol-instruct-spanish.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/glaive-code-assistant-v3-small.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/glaive-function-calling-v2.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt - tool output: - human - path: /ikerlariak/igarcia945/InstructDatasets/InstructTranslation-EN-ES.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/lenguaje-claro-dataset.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/LingComp_QA.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/lr-sum-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/NoticIA.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/NoticIA-large.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/NoticIA-summary.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/oasst2_es_gpt.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/OpenHermes-2.5-English.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/OpenHermes-2.5-Spanish.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/opus-100-en-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/RAG_Multilingual-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/sem_eval_2018_task_1.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/ted_talks-es_en.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/This-is-not-a-dataset.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/wikipedia-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human chat_template: llama3 dataset_prepared_path: /ikerlariak/igarcia945/Mortadelo-Filemon/Meta-Llama-3-8B-Instruct-Spanish/dataset shuffle_merged_datasets: true val_set_size: 0.005 output_dir: /ikerlariak/igarcia945/Mortadelo-Filemon/Meta-Llama-3-8B-Instruct-Spanish adapter: lora_model_dir: sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: false tokens: - "<tool_call>" - "<tool_response>" - "<tools>" - "</tool_call>" - "</tool_response>" - "</tools>" - "<reserved1>" - "<reserved2>" special_tokens: pad_token: <|end_of_text|> neftune_noise_alpha: 5 wandb_project: Mortadelo&Filemon wandb_entity: igarciaf wandb_watch: wandb_name: Meta-Llama-3-8B-Instruct-Spanish wandb_log_model: gradient_accumulation_steps: 32 micro_batch_size: 2 eval_batch_size: 2 num_epochs: 2 optimizer: adamw_torch_fused lr_scheduler: cosine learning_rate: 0.00007 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_ratio: 0.03 evals_per_epoch: 4 eval_table_size: save_strategy: "no" debug: deepspeed: /ikerlariak/igarcia945/Mortadelo-Filemon/train_configs/deepspeed_zero3.json weight_decay: 0.0 fsdp: fsdp_config: seed: 33 ```
null
Non_BioNLP
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/614a1ebb8f82f1df64d55126/2i_CasoeJTgQPNoBIfA8E.jpeg) # Neurona 8B Beta: Un Modelo de Lenguage en Español > Esta es una versión preliminar del dataset card. El modelo está en desarrollo y no es la versión final. Si quieres saber más sobre este modelo, escribe a [email protected] Neurona 8B es un modelo de lenguaje en Español. Esta es la primera iteración y un experimento para poner a punto los scripts y la infraestructura. Neurona 8B ha sido entrenado con los siguiente datasets. No en todos los casos se ha usado el dataset completo - [pinzhenchen/alpaca-cleaned-es](https://huggingface.co/datasets/pinzhenchen/alpaca-cleaned-es) - [Danielbrdz/Barcenas-Economia](https://huggingface.co/datasets/Danielbrdz/Barcenas-Economia) - [HiTZ/casimedicos-exp](https://huggingface.co/datasets/HiTZ/casimedicos-exp) - [somosnlp/coser_resumenes](https://huggingface.co/datasets/somosnlp/coser_resumenes) - [csebuetnlp/CrossSum en + es](https://huggingface.co/datasets/csebuetnlp/CrossSum) - [Iker/Document-Translation-en-es](https://huggingface.co/datasets/Iker/Document-Translation-en-es) - [somosnlp/es-inclusive-language-it](https://huggingface.co/datasets/somosnlp/es-inclusive-language-it) - [FreedomIntelligence/evol-instruct-spanish](https://huggingface.co/datasets/FreedomIntelligence/evol-instruct-spanish) - [glaiveai/glaive-code-assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3) - [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) - [Iker/InstructTranslation-EN-ES](https://huggingface.co/datasets/Iker/InstructTranslation-EN-ES) - [somosnlp/lenguaje-claro-dataset](https://huggingface.co/datasets/somosnlp/lenguaje-claro-dataset) - [somosnlp/LingComp_QA](https://huggingface.co/datasets/somosnlp/LingComp_QA) - [bltlab/lr-sum](https://huggingface.co/datasets/bltlab/lr-sum) - [Iker/NoticIA](https://huggingface.co/datasets/Iker/NoticIA) - [xaviviro/oasst2_es_gpt](https://huggingface.co/datasets/xaviviro/oasst2_es_gpt) - [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) - [Iker/OpenHermes-2.5-Spanish](https://huggingface.co/datasets/Iker/OpenHermes-2.5-Spanish) - [Helsinki-NLP/opus-100 en es](https://huggingface.co/datasets/Helsinki-NLP/opus-100) - [projecte-aina/RAG_Multilingual](https://huggingface.co/datasets/projecte-aina/RAG_Multilingual) - [sem_eval_2018_task_1](https://huggingface.co/datasets/sem_eval_2018_task_1) - [davidstap/ted_talks](https://huggingface.co/datasets/davidstap/ted_talks) - [HiTZ/This-is-not-a-dataset](https://huggingface.co/datasets/HiTZ/This-is-not-a-dataset) - [wikipedia es](https://huggingface.co/datasets/wikipedia) Esta mezcla de datasets en Inglés y Español, permite al modelo adquirir diferentes capacidades, como RAG, function calling, code assistant, question answering, summarization... tanto en Inglés como en Español. # Entrenamiento Este modelo se ha entrado usando 4xNvidia A100 80Gb y axolotl [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) Esta es la configuración usada ```yaml base_model: meta-llama/Meta-Llama-3-8B-Instruct model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer is_falcon_derived_model: is_llama_derived_model: is_qwen_derived_model: is_mistral_derived_model: load_in_8bit: false load_in_4bit: false strict: false device_map: null datasets: - path: /ikerlariak/igarcia945/InstructDatasets/alpaca-cleaned-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/Barcenas-Economia.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/casimedicos.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/coser_resumene.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/CrossSum_en.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/CrossSum_es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/Document-Translation-en-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/es-inclusive-language.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/evol-instruct-spanish.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/glaive-code-assistant-v3-small.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/glaive-function-calling-v2.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt - tool output: - human - path: /ikerlariak/igarcia945/InstructDatasets/InstructTranslation-EN-ES.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/lenguaje-claro-dataset.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/LingComp_QA.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/lr-sum-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/NoticIA.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/NoticIA-large.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/NoticIA-summary.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/oasst2_es_gpt.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/OpenHermes-2.5-English.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/OpenHermes-2.5-Spanish.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/opus-100-en-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/RAG_Multilingual-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/sem_eval_2018_task_1.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/ted_talks-es_en.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/This-is-not-a-dataset.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human - path: /ikerlariak/igarcia945/InstructDatasets/wikipedia-es.jsonl type: sharegpt conversation: llama3 field: conversations roles: input: - system - gpt output: - human chat_template: llama3 dataset_prepared_path: /ikerlariak/igarcia945/Mortadelo-Filemon/Meta-Llama-3-8B-Instruct-Spanish/dataset shuffle_merged_datasets: true val_set_size: 0.005 output_dir: /ikerlariak/igarcia945/Mortadelo-Filemon/Meta-Llama-3-8B-Instruct-Spanish adapter: lora_model_dir: sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: false tokens: - "<tool_call>" - "<tool_response>" - "<tools>" - "</tool_call>" - "</tool_response>" - "</tools>" - "<reserved1>" - "<reserved2>" special_tokens: pad_token: <|end_of_text|> neftune_noise_alpha: 5 wandb_project: Mortadelo&Filemon wandb_entity: igarciaf wandb_watch: wandb_name: Meta-Llama-3-8B-Instruct-Spanish wandb_log_model: gradient_accumulation_steps: 32 micro_batch_size: 2 eval_batch_size: 2 num_epochs: 2 optimizer: adamw_torch_fused lr_scheduler: cosine learning_rate: 0.00007 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_ratio: 0.03 evals_per_epoch: 4 eval_table_size: save_strategy: "no" debug: deepspeed: /ikerlariak/igarcia945/Mortadelo-Filemon/train_configs/deepspeed_zero3.json weight_decay: 0.0 fsdp: fsdp_config: seed: 33 ```
{"base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "datasets": ["pinzhenchen/alpaca-cleaned-es", "Danielbrdz/Barcenas-Economia", "HiTZ/casimedicos-exp", "somosnlp/coser_resumenes", "csebuetnlp/CrossSum", "Iker/Document-Translation-en-es", "somosnlp/es-inclusive-language-it", "FreedomIntelligence/evol-instruct-spanish", "glaiveai/glaive-code-assistant-v3", "glaiveai/glaive-function-calling-v2", "Iker/InstructTranslation-EN-ES", "somosnlp/lenguaje-claro-dataset", "somosnlp/LingComp_QA", "bltlab/lr-sum", "Iker/NoticIA", "xaviviro/oasst2_es_gpt", "teknium/OpenHermes-2.5", "Iker/OpenHermes-2.5-Spanish", "Helsinki-NLP/opus-100", "projecte-aina/RAG_Multilingual", "sem_eval_2018_task_1", "davidstap/ted_talks", "HiTZ/This-is-not-a-dataset", "wikipedia"], "language": ["es", "en"], "library_name": "transformers", "license": "llama3", "pipeline_tag": "text-generation", "tags": ["synthetic"]}
task
[ "QUESTION_ANSWERING", "TRANSLATION", "SUMMARIZATION" ]
43,025
milandean/gemma-7b
milandean
text-generation
[ "transformers", "safetensors", "gguf", "gemma", "text-generation", "arxiv:2305.14314", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-09-09T17:21:04Z
2024-09-09T20:23:09+00:00
36
0
--- library_name: transformers license: gemma extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Gemma Technical Report](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf) * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-7b) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Context Length Models are trained on a context length of 8192 tokens. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning examples You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314) * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset. You can also find the copy of the notebook [here](https://github.com/huggingface/notebooks/blob/main/peft/gemma_7b_english_quotes.ipynb). #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", revision="float16") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 | | [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | --------- | | **Average** | | **45.0** | **56.9** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives.
null
Non_BioNLP
# Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Gemma Technical Report](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf) * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-7b) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Context Length Models are trained on a context length of 8192 tokens. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning examples You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314) * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset. You can also find the copy of the notebook [here](https://github.com/huggingface/notebooks/blob/main/peft/gemma_7b_english_quotes.ipynb). #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", revision="float16") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 | | [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | --------- | | **Average** | | **45.0** | **56.9** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives.
{"library_name": "transformers", "license": "gemma", "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
43,026
jcblaise/bert-tagalog-base-cased-WWM
jcblaise
fill-mask
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "tagalog", "filipino", "tl", "arxiv:2005.02068", "arxiv:1907.00409", "license:gpl-3.0", "autotrain_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-11-12T03:21:18+00:00
138
0
--- language: tl license: gpl-3.0 tags: - bert - tagalog - filipino inference: false --- **Deprecation Notice** This model is deprecated. New Filipino Transformer models trained with a much larger corpora are available. Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance. --- # BERT Tagalog Base Cased (Whole Word Masking) Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This particular version uses whole word masking. ## Citations All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work: ``` @article{cruz2020establishing, title={Establishing Baselines for Text Classification in Low-Resource Languages}, author={Cruz, Jan Christian Blaise and Cheng, Charibeth}, journal={arXiv preprint arXiv:2005.02068}, year={2020} } @article{cruz2019evaluating, title={Evaluating Language Model Finetuning Techniques for Low-resource Languages}, author={Cruz, Jan Christian Blaise and Cheng, Charibeth}, journal={arXiv preprint arXiv:1907.00409}, year={2019} } ``` ## Data and Other Resources Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com ## Contact If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
null
Non_BioNLP
**Deprecation Notice** This model is deprecated. New Filipino Transformer models trained with a much larger corpora are available. Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance. --- # BERT Tagalog Base Cased (Whole Word Masking) Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This particular version uses whole word masking. ## Citations All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work: ``` @article{cruz2020establishing, title={Establishing Baselines for Text Classification in Low-Resource Languages}, author={Cruz, Jan Christian Blaise and Cheng, Charibeth}, journal={arXiv preprint arXiv:2005.02068}, year={2020} } @article{cruz2019evaluating, title={Evaluating Language Model Finetuning Techniques for Low-resource Languages}, author={Cruz, Jan Christian Blaise and Cheng, Charibeth}, journal={arXiv preprint arXiv:1907.00409}, year={2019} } ``` ## Data and Other Resources Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com ## Contact If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
{"language": "tl", "license": "gpl-3.0", "tags": ["bert", "tagalog", "filipino"], "inference": false}
task
[ "TEXT_CLASSIFICATION" ]
43,027
google/paligemma-3b-ft-stvqa-448-jax
google
image-text-to-text
[ "big_vision", "paligemma", "jax", "image-text-to-text", "arxiv:1905.13648", "arxiv:2310.09199", "arxiv:2303.15343", "arxiv:2403.08295", "arxiv:1706.03762", "arxiv:2010.11929", "arxiv:2209.06794", "arxiv:2209.04372", "arxiv:2103.01913", "arxiv:2401.06209", "arxiv:2305.10355", "arxiv:2205.12522", "arxiv:2110.11624", "arxiv:2108.03353", "arxiv:2010.04295", "arxiv:2203.10244", "arxiv:1810.12440", "arxiv:1608.00272", "arxiv:1908.04913", "arxiv:2407.07726", "license:gemma", "region:us" ]
2024-05-11T23:48:40Z
2024-07-19T12:09:16+00:00
0
0
--- library_name: big_vision license: gemma pipeline_tag: image-text-to-text tags: - paligemma - jax extra_gated_heading: Access PaliGemma on Hugging Face extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # PaliGemma model card **Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) JAX/FLAX PaliGemma 3B weights, fine-tuned with 448*448 input images on the <a href="https://arxiv.org/abs/1905.13648">SceneText_VQA</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/stvqa.py">big_vision</a>. **Resources and technical documentation:** * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma) * [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363) **Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-stvqa-448-jax) **Authors:** Google ## Model information ### Model summary #### Description PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma language model](https://arxiv.org/abs/2403.08295). It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation. #### Model architecture PaliGemma is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion params. The text decoder is initialized from [Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). PaliGemma is trained following the PaLI-3 recipes. #### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. ### Model data #### Pre-train datasets PaliGemma is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, multilinguality, etc. * **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. * **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). * **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. * **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)). [OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html #### Data responsibility filtering The following filters are applied to WebLI, with the goal of training PaliGemma on clean data: * **Pornographic image filtering:** This filter removes images deemed to be of pornographic nature. * **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about CSAI, pornography, vulgarities, or otherwise offensive. * **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic. * **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed. * **Additional methods:** Filtering based on content quality and safety in line with our policies and practices. [other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759 ## Implementation information ### Hardware PaliGemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e). ### Software Training was done using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma fine-tune code and inference code are released in the `big_vision` GitHub repository. ## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma to a wide variety of academic tasks, we fine-tune the pretrained models on each task. Additionally we train the mix model with a mixture of the transfer tasks. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. #### Mix model (fine-tune on mixture of transfer tasks) <table> <tbody><tr> <th>Benchmark</th> <th>Metric (split)</th> <th>mix-224</th> <th>mix-448</th> </tr> <tr> <td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td> <td>Paired Accuracy</td> <td>46.00</td> <td>45.33</td> </tr> <tr> <td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td> <td>Accuracy<br>(random/popular/adversarial)</td> <td> 88.00<br> 86.63<br> 85.67 </td> <td> 89.37<br> 88.40<br> 87.47 </td> </tr> <tr> <td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td> <td>Accuracy (test)</td> <td>65.20</td> <td>65.47</td> </tr> </tbody></table> #### Single task (fine-tune on single task) <table> <tbody><tr> <th>Benchmark<br>(train split)</th> <th>Metric<br>(split)</th> <th>pt-224</th> <th>pt-448</th> <th>pt-896</th> </tr> <tr> <th>Captioning</th> </tr> <tr> <td> <a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval) </td> <td>CIDEr (val)</td> <td>141.92</td> <td>144.60</td> </tr> <tr> <td> <a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer) </td> <td>CIDEr (val)</td> <td>121.72</td> <td>123.58</td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 139.2<br> 115.8<br> 116.4 </td> <td> 141.2<br> 118.0<br> 118.6 </td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 78.1<br> 41.3<br> 42.4 </td> <td> 80.0<br> 41.9<br> 42.9 </td> </tr> <tr> <td> <a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train) </td> <td>CIDEr (val)</td> <td>127.48</td> <td>153.94</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val) </td> <td>CIDEr/BLEU-4<br>(test)</td> <td> 162.25<br> 0.192<br> </td> <td> 181.49<br> 0.211<br> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>117.57</td> <td>119.59</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>136.07</td> <td>148.36</td> </tr> <tr> <th>Question answering</th> </tr> <tr> <td> <a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation) </td> <td>Accuracy<br>(Test server - std)</td> <td>83.19</td> <td>85.64</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer) </td> <td>Paired Accuracy</td> <td>47.33</td> <td>45.33</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer) </td> <td>Accuracy<br>(random/popular/<br>adversarial)</td> <td> 87.80<br> 85.87<br> 84.27 </td> <td> 88.23<br> 86.77<br> 85.90 </td> </tr> <tr> <td> <a href="https://okvqa.allenai.org/">OKVQA</a><br>(train) </td> <td>Accuracy (val)</td> <td>63.54</td> <td>63.15</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>76.37</td> <td>76.90</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>61.85</td> <td>63.22</td> </tr> <tr> <td> <a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced) </td> <td>Accuracy<br>(testdev balanced)</td> <td>65.61</td> <td>67.03</td> </tr> <tr> <td> <a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer) </td> <td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td> <td>58.37</td> <td>59.07</td> </tr> <tr> <td> <a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev) </td> <td>Accuracy (test)</td> <td>90.02</td> <td>88.93</td> </tr> <tr> <td> <a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer) </td> <td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td> <td>80.57</td> <td>76.78</td> </tr> <tr> <td> <a href="https://allenai.org/data/diagrams">AI2D</a><br>(train) </td> <td>Accuracy (test)</td> <td>72.12</td> <td>73.28</td> </tr> <tr> <td> <a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val) </td> <td>Accuracy (test)</td> <td>95.39</td> <td>95.93</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test)</td> <td>92.65</td> <td>93.11</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test/test2)</td> <td> 92.61<br> 90.58 </td> <td> 92.79<br> 90.54 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val) </td> <td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td> <td>57.08</td> <td>71.36</td> </tr> <tr> <td> <a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td> 73.7 </td> <td> 75.52 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train) </td> <td>Accuracy<br>(test_simple/<br>test_complex)</td> <td> 81.72<br> 69.56 </td> <td> 84.86<br> 72.27 </td> </tr> <tr> <td> <a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val) </td> <td>Accuracy (test)</td> <td>72.32</td> <td>74.61</td> <td>74.93</td> </tr> <tr> <td> <a href="https://textvqa.org/">TextVQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td>55.47</td> <td>73.15</td> <td>76.48</td> </tr> <tr> <td> <a href="https://www.docvqa.org/">DocVQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>43.74</td> <td>78.02</td> <td>84.77</td> </tr> <tr> <td> <a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>28.46</td> <td>40.47</td> <td>47.75</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>63.29</td> <td>81.82</td> <td>84.40</td> </tr> <tr> <th>Segmentation</th> </tr> <tr> <td> <a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images) </td> <td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td> <td> 73.40<br> 68.32<br> 67.65 </td> <td> 75.57<br> 69.76<br> 70.17 </td> <td> 76.94<br> 72.18<br> 72.22 </td> </tr> <tr> <th>Video tasks (Caption/QA)</th> </tr> <tr> <td>MSR-VTT (Captioning)</td> <td>CIDEr (test)</td> <td>70.54</td> </tr> <tr> <td>MSR-VTT (QA)</td> <td>Accuracy (test)</td> <td>50.09</td> </tr> <tr> <td>ActivityNet (Captioning)</td> <td>CIDEr (test)</td> <td>34.62</td> </tr> <tr> <td>ActivityNet (QA)</td> <td>Accuracy (test)</td> <td>50.78</td> </tr> <tr> <td>VATEX (Captioning)</td> <td>CIDEr (test)</td> <td>79.73</td> </tr> <tr> <td>MSVD (QA)</td> <td>Accuracy (test)</td> <td>60.22</td> </tr> </tbody></table> ## Ethics and safety ### Evaluation approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Human evaluation on prompts covering child safety, content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups. * Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. <table> <tbody><tr> </tr></tbody><tbody><tr><th>Metric</th> <th>Perceived<br>gender</th> <th></th> <th>Ethnicity</th> <th></th> <th>Age group</th> <th></th> </tr> <tr> <th></th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> </tr> <tr> <td>Toxicity</td> <td>0.04%</td> <td>0.03%</td> <td>0.08%</td> <td>0.00%</td> <td>0.09%</td> <td>0.00%</td> </tr> <tr> <td>Identity Attack</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.06%</td> <td>0.04%</td> <td>0.09%</td> <td>0.07%</td> <td>0.16%</td> <td>0.00%</td> </tr> <tr> <td>Threat</td> <td>0.06%</td> <td>0.05%</td> <td>0.14%</td> <td>0.05%</td> <td>0.17%</td> <td>0.00%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </tbody></table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. * The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, describe UI element functionalities. * The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks. Vision-language research: * The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field. ### Ethical considerations and risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * VLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). * Transparency and Accountability * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * **Generation of harmful content:** Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Limitations * Most limitations inherited from the underlying Gemma model still apply: * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations. * PaliGemma was designed first and foremost to serve as a general pre-trained model for transfer to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for that. * PaliGemma is not a multi-turn chatbot. It is designed for a single round of image and text input. ## Citation ```bibtex @article{beyer2024paligemma, title={{PaliGemma: A versatile 3B VLM for transfer}}, author={Lucas Beyer* and Andreas Steiner* and André Susano Pinto* and Alexander Kolesnikov* and Xiao Wang* and Daniel Salz and Maxim Neumann and Ibrahim Alabdulmohsin and Michael Tschannen and Emanuele Bugliarello and Thomas Unterthiner and Daniel Keysers and Skanda Koppula and Fangyu Liu and Adam Grycner and Alexey Gritsenko and Neil Houlsby and Manoj Kumar and Keran Rong and Julian Eisenschlos and Rishabh Kabra and Matthias Bauer and Matko Bošnjak and Xi Chen and Matthias Minderer and Paul Voigtlaender and Ioana Bica and Ivana Balazevic and Joan Puigcerver and Pinelopi Papalampidi and Olivier Henaff and Xi Xiong and Radu Soricut and Jeremiah Harmsen and Xiaohua Zhai*}, year={2024}, journal={arXiv preprint arXiv:2407.07726} } ``` Find the paper [here](https://arxiv.org/abs/2407.07726).
null
Non_BioNLP
# PaliGemma model card **Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) JAX/FLAX PaliGemma 3B weights, fine-tuned with 448*448 input images on the <a href="https://arxiv.org/abs/1905.13648">SceneText_VQA</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/stvqa.py">big_vision</a>. **Resources and technical documentation:** * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma) * [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363) **Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-stvqa-448-jax) **Authors:** Google ## Model information ### Model summary #### Description PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma language model](https://arxiv.org/abs/2403.08295). It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation. #### Model architecture PaliGemma is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion params. The text decoder is initialized from [Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). PaliGemma is trained following the PaLI-3 recipes. #### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. ### Model data #### Pre-train datasets PaliGemma is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, multilinguality, etc. * **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. * **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). * **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. * **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)). [OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html #### Data responsibility filtering The following filters are applied to WebLI, with the goal of training PaliGemma on clean data: * **Pornographic image filtering:** This filter removes images deemed to be of pornographic nature. * **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about CSAI, pornography, vulgarities, or otherwise offensive. * **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic. * **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed. * **Additional methods:** Filtering based on content quality and safety in line with our policies and practices. [other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759 ## Implementation information ### Hardware PaliGemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e). ### Software Training was done using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma fine-tune code and inference code are released in the `big_vision` GitHub repository. ## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma to a wide variety of academic tasks, we fine-tune the pretrained models on each task. Additionally we train the mix model with a mixture of the transfer tasks. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. #### Mix model (fine-tune on mixture of transfer tasks) <table> <tbody><tr> <th>Benchmark</th> <th>Metric (split)</th> <th>mix-224</th> <th>mix-448</th> </tr> <tr> <td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td> <td>Paired Accuracy</td> <td>46.00</td> <td>45.33</td> </tr> <tr> <td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td> <td>Accuracy<br>(random/popular/adversarial)</td> <td> 88.00<br> 86.63<br> 85.67 </td> <td> 89.37<br> 88.40<br> 87.47 </td> </tr> <tr> <td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td> <td>Accuracy (test)</td> <td>65.20</td> <td>65.47</td> </tr> </tbody></table> #### Single task (fine-tune on single task) <table> <tbody><tr> <th>Benchmark<br>(train split)</th> <th>Metric<br>(split)</th> <th>pt-224</th> <th>pt-448</th> <th>pt-896</th> </tr> <tr> <th>Captioning</th> </tr> <tr> <td> <a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval) </td> <td>CIDEr (val)</td> <td>141.92</td> <td>144.60</td> </tr> <tr> <td> <a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer) </td> <td>CIDEr (val)</td> <td>121.72</td> <td>123.58</td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 139.2<br> 115.8<br> 116.4 </td> <td> 141.2<br> 118.0<br> 118.6 </td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 78.1<br> 41.3<br> 42.4 </td> <td> 80.0<br> 41.9<br> 42.9 </td> </tr> <tr> <td> <a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train) </td> <td>CIDEr (val)</td> <td>127.48</td> <td>153.94</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val) </td> <td>CIDEr/BLEU-4<br>(test)</td> <td> 162.25<br> 0.192<br> </td> <td> 181.49<br> 0.211<br> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>117.57</td> <td>119.59</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>136.07</td> <td>148.36</td> </tr> <tr> <th>Question answering</th> </tr> <tr> <td> <a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation) </td> <td>Accuracy<br>(Test server - std)</td> <td>83.19</td> <td>85.64</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer) </td> <td>Paired Accuracy</td> <td>47.33</td> <td>45.33</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer) </td> <td>Accuracy<br>(random/popular/<br>adversarial)</td> <td> 87.80<br> 85.87<br> 84.27 </td> <td> 88.23<br> 86.77<br> 85.90 </td> </tr> <tr> <td> <a href="https://okvqa.allenai.org/">OKVQA</a><br>(train) </td> <td>Accuracy (val)</td> <td>63.54</td> <td>63.15</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>76.37</td> <td>76.90</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>61.85</td> <td>63.22</td> </tr> <tr> <td> <a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced) </td> <td>Accuracy<br>(testdev balanced)</td> <td>65.61</td> <td>67.03</td> </tr> <tr> <td> <a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer) </td> <td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td> <td>58.37</td> <td>59.07</td> </tr> <tr> <td> <a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev) </td> <td>Accuracy (test)</td> <td>90.02</td> <td>88.93</td> </tr> <tr> <td> <a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer) </td> <td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td> <td>80.57</td> <td>76.78</td> </tr> <tr> <td> <a href="https://allenai.org/data/diagrams">AI2D</a><br>(train) </td> <td>Accuracy (test)</td> <td>72.12</td> <td>73.28</td> </tr> <tr> <td> <a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val) </td> <td>Accuracy (test)</td> <td>95.39</td> <td>95.93</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test)</td> <td>92.65</td> <td>93.11</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test/test2)</td> <td> 92.61<br> 90.58 </td> <td> 92.79<br> 90.54 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val) </td> <td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td> <td>57.08</td> <td>71.36</td> </tr> <tr> <td> <a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td> 73.7 </td> <td> 75.52 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train) </td> <td>Accuracy<br>(test_simple/<br>test_complex)</td> <td> 81.72<br> 69.56 </td> <td> 84.86<br> 72.27 </td> </tr> <tr> <td> <a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val) </td> <td>Accuracy (test)</td> <td>72.32</td> <td>74.61</td> <td>74.93</td> </tr> <tr> <td> <a href="https://textvqa.org/">TextVQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td>55.47</td> <td>73.15</td> <td>76.48</td> </tr> <tr> <td> <a href="https://www.docvqa.org/">DocVQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>43.74</td> <td>78.02</td> <td>84.77</td> </tr> <tr> <td> <a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>28.46</td> <td>40.47</td> <td>47.75</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>63.29</td> <td>81.82</td> <td>84.40</td> </tr> <tr> <th>Segmentation</th> </tr> <tr> <td> <a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images) </td> <td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td> <td> 73.40<br> 68.32<br> 67.65 </td> <td> 75.57<br> 69.76<br> 70.17 </td> <td> 76.94<br> 72.18<br> 72.22 </td> </tr> <tr> <th>Video tasks (Caption/QA)</th> </tr> <tr> <td>MSR-VTT (Captioning)</td> <td>CIDEr (test)</td> <td>70.54</td> </tr> <tr> <td>MSR-VTT (QA)</td> <td>Accuracy (test)</td> <td>50.09</td> </tr> <tr> <td>ActivityNet (Captioning)</td> <td>CIDEr (test)</td> <td>34.62</td> </tr> <tr> <td>ActivityNet (QA)</td> <td>Accuracy (test)</td> <td>50.78</td> </tr> <tr> <td>VATEX (Captioning)</td> <td>CIDEr (test)</td> <td>79.73</td> </tr> <tr> <td>MSVD (QA)</td> <td>Accuracy (test)</td> <td>60.22</td> </tr> </tbody></table> ## Ethics and safety ### Evaluation approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Human evaluation on prompts covering child safety, content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups. * Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. <table> <tbody><tr> </tr></tbody><tbody><tr><th>Metric</th> <th>Perceived<br>gender</th> <th></th> <th>Ethnicity</th> <th></th> <th>Age group</th> <th></th> </tr> <tr> <th></th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> </tr> <tr> <td>Toxicity</td> <td>0.04%</td> <td>0.03%</td> <td>0.08%</td> <td>0.00%</td> <td>0.09%</td> <td>0.00%</td> </tr> <tr> <td>Identity Attack</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.06%</td> <td>0.04%</td> <td>0.09%</td> <td>0.07%</td> <td>0.16%</td> <td>0.00%</td> </tr> <tr> <td>Threat</td> <td>0.06%</td> <td>0.05%</td> <td>0.14%</td> <td>0.05%</td> <td>0.17%</td> <td>0.00%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </tbody></table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. * The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, describe UI element functionalities. * The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks. Vision-language research: * The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field. ### Ethical considerations and risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * VLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). * Transparency and Accountability * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * **Generation of harmful content:** Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Limitations * Most limitations inherited from the underlying Gemma model still apply: * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations. * PaliGemma was designed first and foremost to serve as a general pre-trained model for transfer to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for that. * PaliGemma is not a multi-turn chatbot. It is designed for a single round of image and text input. ## Citation ```bibtex @article{beyer2024paligemma, title={{PaliGemma: A versatile 3B VLM for transfer}}, author={Lucas Beyer* and Andreas Steiner* and André Susano Pinto* and Alexander Kolesnikov* and Xiao Wang* and Daniel Salz and Maxim Neumann and Ibrahim Alabdulmohsin and Michael Tschannen and Emanuele Bugliarello and Thomas Unterthiner and Daniel Keysers and Skanda Koppula and Fangyu Liu and Adam Grycner and Alexey Gritsenko and Neil Houlsby and Manoj Kumar and Keran Rong and Julian Eisenschlos and Rishabh Kabra and Matthias Bauer and Matko Bošnjak and Xi Chen and Matthias Minderer and Paul Voigtlaender and Ioana Bica and Ivana Balazevic and Joan Puigcerver and Pinelopi Papalampidi and Olivier Henaff and Xi Xiong and Radu Soricut and Jeremiah Harmsen and Xiaohua Zhai*}, year={2024}, journal={arXiv preprint arXiv:2407.07726} } ``` Find the paper [here](https://arxiv.org/abs/2407.07726).
{"library_name": "big_vision", "license": "gemma", "pipeline_tag": "image-text-to-text", "tags": ["paligemma", "jax"], "extra_gated_heading": "Access PaliGemma on Hugging Face", "extra_gated_prompt": "To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
task
[ "QUESTION_ANSWERING", "TRANSLATION" ]
43,028
cardiffnlp/twitter-roberta-base-2021-124m-offensive
cardiffnlp
text-classification
[ "transformers", "pytorch", "roberta", "text-classification", "dataset:tweet_eval", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-11-29T21:03:19Z
2022-12-02T17:51:50+00:00
11
0
--- datasets: - tweet_eval metrics: - f1 - accuracy pipeline_tag: text-classification widget: - text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}} example_title: topic_classification 1 - text: Yes, including Medicare and social security saving👍 example_title: sentiment 1 - text: All two of them taste like ass. example_title: offensive 1 - text: If you wanna look like a badass, have drama on social media example_title: irony 1 - text: Whoever just unfollowed me you a bitch example_title: hate 1 - text: I love swimming for the same reason I love meditating...the feeling of weightlessness. example_title: emotion 1 - text: Beautiful sunset last night from the pontoon @TupperLakeNY example_title: emoji 1 model-index: - name: cardiffnlp/twitter-roberta-base-2021-124m-offensive results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: offensive split: test metrics: - type: micro_f1_tweet_eval/offensive value: 0.858139534883721 name: Micro F1 (tweet_eval/offensive) - type: micro_f1_tweet_eval/offensive value: 0.8232706055154664 name: Macro F1 (tweet_eval/offensive) - type: accuracy_tweet_eval/offensive value: 0.858139534883721 name: Accuracy (tweet_eval/offensive) --- # cardiffnlp/twitter-roberta-base-2021-124m-offensive This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on the [`tweet_eval (offensive)`](https://huggingface.co/datasets/tweet_eval) via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp). Training split is `train` and parameters have been tuned on the validation split `validation`. Following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m-offensive/raw/main/metric.json)). - F1 (micro): 0.858139534883721 - F1 (macro): 0.8232706055154664 - Accuracy: 0.858139534883721 ### Usage Install tweetnlp via pip. ```shell pip install tweetnlp ``` Load the model in python. ```python import tweetnlp model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-2021-124m-offensive", max_length=128) model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}') ``` ### Reference ``` @inproceedings{camacho-collados-etal-2022-tweetnlp, title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia}, author={Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{'\i}nez-C{'a}mara, Eugenio and others}, author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
null
Non_BioNLP
# cardiffnlp/twitter-roberta-base-2021-124m-offensive This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on the [`tweet_eval (offensive)`](https://huggingface.co/datasets/tweet_eval) via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp). Training split is `train` and parameters have been tuned on the validation split `validation`. Following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m-offensive/raw/main/metric.json)). - F1 (micro): 0.858139534883721 - F1 (macro): 0.8232706055154664 - Accuracy: 0.858139534883721 ### Usage Install tweetnlp via pip. ```shell pip install tweetnlp ``` Load the model in python. ```python import tweetnlp model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-2021-124m-offensive", max_length=128) model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}') ``` ### Reference ``` @inproceedings{camacho-collados-etal-2022-tweetnlp, title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia}, author={Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{'\i}nez-C{'a}mara, Eugenio and others}, author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
{"datasets": ["tweet_eval"], "metrics": ["f1", "accuracy"], "pipeline_tag": "text-classification", "widget": [{"text": "Get the all-analog Classic Vinyl Edition of \"Takin Off\" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}", "example_title": "topic_classification 1"}, {"text": "Yes, including Medicare and social security saving👍", "example_title": "sentiment 1"}, {"text": "All two of them taste like ass.", "example_title": "offensive 1"}, {"text": "If you wanna look like a badass, have drama on social media", "example_title": "irony 1"}, {"text": "Whoever just unfollowed me you a bitch", "example_title": "hate 1"}, {"text": "I love swimming for the same reason I love meditating...the feeling of weightlessness.", "example_title": "emotion 1"}, {"text": "Beautiful sunset last night from the pontoon @TupperLakeNY", "example_title": "emoji 1"}], "model-index": [{"name": "cardiffnlp/twitter-roberta-base-2021-124m-offensive", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "offensive", "split": "test"}, "metrics": [{"type": "micro_f1_tweet_eval/offensive", "value": 0.858139534883721, "name": "Micro F1 (tweet_eval/offensive)"}, {"type": "micro_f1_tweet_eval/offensive", "value": 0.8232706055154664, "name": "Macro F1 (tweet_eval/offensive)"}, {"type": "accuracy_tweet_eval/offensive", "value": 0.858139534883721, "name": "Accuracy (tweet_eval/offensive)"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,029
TransferGraph/phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-glue_cola
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:glue", "base_model:phailyoor/distilbert-base-uncased-finetuned-yahd", "base_model:adapter:phailyoor/distilbert-base-uncased-finetuned-yahd", "license:apache-2.0", "model-index", "region:us" ]
2024-02-29T13:53:33Z
2024-02-29T13:53:38+00:00
0
0
--- base_model: phailyoor/distilbert-base-uncased-finetuned-yahd datasets: - glue library_name: peft license: apache-2.0 metrics: - accuracy tags: - parquet - text-classification model-index: - name: phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-glue_cola results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - type: accuracy value: 0.716203259827421 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-glue_cola This model is a fine-tuned version of [phailyoor/distilbert-base-uncased-finetuned-yahd](https://huggingface.co/phailyoor/distilbert-base-uncased-finetuned-yahd) on the glue dataset. It achieves the following results on the evaluation set: - accuracy: 0.7162 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.5360 | None | 0 | | 0.6817 | 0.6382 | 0 | | 0.7124 | 0.5869 | 1 | | 0.7172 | 0.5518 | 2 | | 0.7162 | 0.5340 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-glue_cola This model is a fine-tuned version of [phailyoor/distilbert-base-uncased-finetuned-yahd](https://huggingface.co/phailyoor/distilbert-base-uncased-finetuned-yahd) on the glue dataset. It achieves the following results on the evaluation set: - accuracy: 0.7162 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.5360 | None | 0 | | 0.6817 | 0.6382 | 0 | | 0.7124 | 0.5869 | 1 | | 0.7172 | 0.5518 | 2 | | 0.7162 | 0.5340 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
{"base_model": "phailyoor/distilbert-base-uncased-finetuned-yahd", "datasets": ["glue"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-glue_cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "accuracy", "value": 0.716203259827421, "name": "accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,030
TransferGraph/cambridgeltl_guardian_news_distilbert-base-uncased-finetuned-lora-tweet_eval_emotion
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:cambridgeltl/guardian_news_distilbert-base-uncased", "base_model:adapter:cambridgeltl/guardian_news_distilbert-base-uncased", "model-index", "region:us" ]
2024-02-29T12:48:31Z
2024-02-29T12:48:33+00:00
0
0
--- base_model: cambridgeltl/guardian_news_distilbert-base-uncased datasets: - tweet_eval library_name: peft metrics: - accuracy tags: - parquet - text-classification model-index: - name: cambridgeltl_guardian_news_distilbert-base-uncased-finetuned-lora-tweet_eval_emotion results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - type: accuracy value: 0.7005347593582888 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cambridgeltl_guardian_news_distilbert-base-uncased-finetuned-lora-tweet_eval_emotion This model is a fine-tuned version of [cambridgeltl/guardian_news_distilbert-base-uncased](https://huggingface.co/cambridgeltl/guardian_news_distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.7005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.1551 | None | 0 | | 0.4439 | 1.2631 | 0 | | 0.6150 | 1.1153 | 1 | | 0.6898 | 0.8831 | 2 | | 0.7005 | 0.7802 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cambridgeltl_guardian_news_distilbert-base-uncased-finetuned-lora-tweet_eval_emotion This model is a fine-tuned version of [cambridgeltl/guardian_news_distilbert-base-uncased](https://huggingface.co/cambridgeltl/guardian_news_distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.7005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.1551 | None | 0 | | 0.4439 | 1.2631 | 0 | | 0.6150 | 1.1153 | 1 | | 0.6898 | 0.8831 | 2 | | 0.7005 | 0.7802 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
{"base_model": "cambridgeltl/guardian_news_distilbert-base-uncased", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "cambridgeltl_guardian_news_distilbert-base-uncased-finetuned-lora-tweet_eval_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "validation", "args": "emotion"}, "metrics": [{"type": "accuracy", "value": 0.7005347593582888, "name": "accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,031
google/gemma-3-1b-pt
google
text-generation
[ "transformers", "safetensors", "gemma3_text", "text-generation", "arxiv:1905.07830", "arxiv:1905.10044", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1705.03551", "arxiv:1911.01547", "arxiv:1907.10641", "arxiv:1903.00161", "arxiv:2009.03300", "arxiv:2304.06364", "arxiv:2103.03874", "arxiv:2110.14168", "arxiv:2311.12022", "arxiv:2108.07732", "arxiv:2107.03374", "arxiv:2210.03057", "arxiv:2106.03193", "arxiv:1910.11856", "arxiv:2502.12404", "arxiv:2502.21228", "arxiv:2404.16816", "arxiv:2104.12756", "arxiv:2311.16502", "arxiv:2203.10244", "arxiv:2404.12390", "arxiv:1810.12440", "arxiv:1908.02660", "arxiv:2312.11805", "license:gemma", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-02-20T21:21:06Z
2025-03-21T16:14:00+00:00
20,025
73
--- library_name: transformers license: gemma pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # Gemma 3 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core) **Resources and Technical Documentation**: * [Gemma 3 Technical Report][g3-tech-report] * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma3] **Terms of Use**: [Terms][terms] **Authors**: Google DeepMind ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below, there are some code snippets on how to get quickly started with running the model. First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0. ```sh $ pip install -U transformers ``` Then, copy the snippet from the section that is relevant for your use case. #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline("text-generation", model="google/gemma-3-1b-pt", device="cuda", torch_dtype=torch.bfloat16) output = pipe("Eiffel tower is located in", max_new_tokens=50) ``` #### Running the model on a single / multi GPU ```python import torch from transformers import AutoTokenizer, Gemma3ForCausalLM ckpt = "google/gemma-3-1b-pt" tokenizer = AutoTokenizer.from_pretrained(ckpt) model = Gemma3ForCausalLM.from_pretrained( ckpt, torch_dtype=torch.bfloat16, device_map="auto" ) prompt = "Eiffel tower is located in" model_inputs = tokenizer(prompt, return_tensors="pt").to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=50, do_sample=False) generation = generation[0][input_len:] decoded = tokenizer.decode(generation, skip_special_tokens=True) print(decoded) ``` ### Inputs and outputs - **Input:** - Text string, such as a question, a prompt, or a document to be summarized - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size - **Output:** - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document - Total output context of 8192 tokens ### Citation ```none @article{gemma_2025, title={Gemma 3}, url={https://goo.gle/Gemma3Report}, publisher={Kaggle}, author={Gemma Team}, year={2025} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 14 trillion tokens, the 12B model was trained with 12 trillion tokens, 4B model was trained with 4 trillion tokens and 1B with 2 trillion tokens. Here are the key components: - Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 140 languages. - Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code and understand code-related questions. - Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. - Images: A wide range of images enables the model to perform image analysis and visual data extraction tasks. The combination of these diverse data sources is crucial for training a powerful multimodal model that can handle a wide variety of different tasks and data formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: - CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. - Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. - Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p, TPUv5p and TPUv5e). Training vision-language models (VLMS) requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: - Performance: TPUs are specifically designed to handle the massive computations involved in training VLMs. They can speed up training considerably compared to CPUs. - Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. - Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. - Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. - These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for foundation models, including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."* ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: #### Reasoning and factuality | Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:| | [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 | | [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 | | [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 | | [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 | | [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 | | [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 | | [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 | | [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 | | [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 | | [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 | | [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 | [hellaswag]: https://arxiv.org/abs/1905.07830 [boolq]: https://arxiv.org/abs/1905.10044 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [arc]: https://arxiv.org/abs/1911.01547 [winogrande]: https://arxiv.org/abs/1907.10641 [bbh]: https://paperswithcode.com/dataset/bbh [drop]: https://arxiv.org/abs/1903.00161 #### STEM and code | Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------------|:-------------:|:--------------:|:--------------:| | [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 | | [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 | | [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 | | [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 | | [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 | | [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 | | [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 | | [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 | [mmlu]: https://arxiv.org/abs/2009.03300 [agieval]: https://arxiv.org/abs/2304.06364 [math]: https://arxiv.org/abs/2103.03874 [gsm8k]: https://arxiv.org/abs/2110.14168 [gpqa]: https://arxiv.org/abs/2311.12022 [mbpp]: https://arxiv.org/abs/2108.07732 [humaneval]: https://arxiv.org/abs/2107.03374 #### Multilingual | Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:| | [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 | | [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 | | [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 | | [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 | | [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 | | [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 | | [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 | [mgsm]: https://arxiv.org/abs/2210.03057 [flores]: https://arxiv.org/abs/2106.03193 [xquad]: https://arxiv.org/abs/1910.11856v3 [global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite [wmt24pp]: https://arxiv.org/abs/2502.12404v1 [eclektic]: https://arxiv.org/abs/2502.21228 [indicgenbench]: https://arxiv.org/abs/2404.16816 #### Multimodal | Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |:-------------:|:--------------:|:--------------:| | [COCOcap][coco-cap] | 102 | 111 | 116 | | [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 | | [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 | | [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 | | [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 | | [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 | | [ReMI][remi] | 27.3 | 38.5 | 44.8 | | [AI2D][ai2d] | 63.2 | 75.2 | 79.0 | | [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 | | [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 | | [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 | | [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 | | [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 | | [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 | | [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 | [coco-cap]: https://cocodataset.org/#home [docvqa]: https://www.docvqa.org/ [info-vqa]: https://arxiv.org/abs/2104.12756 [mmmu]: https://arxiv.org/abs/2311.16502 [textvqa]: https://textvqa.org/ [realworldqa]: https://paperswithcode.com/dataset/realworldqa [remi]: https://arxiv.org/html/2406.09175v1 [ai2d]: https://allenai.org/data/diagrams [chartqa]: https://arxiv.org/abs/2203.10244 [vqav2]: https://visualqa.org/index.html [blinkvqa]: https://arxiv.org/abs/2404.12390 [okvqa]: https://okvqa.allenai.org/ [tallyqa]: https://arxiv.org/abs/1810.12440 [ss-vqa]: https://arxiv.org/abs/1908.02660 [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/ ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: - **Child Safety**: Evaluation of text-to-text and image to text prompts covering child safety policies, including child sexual abuse and exploitation. - **Content Safety:** Evaluation of text-to-text and image to text prompts covering safety policies including, harassment, violence and gore, and hate speech. - **Representational Harms**: Evaluation of text-to-text and image to text prompts covering safety policies including bias, stereotyping, and harmful associations or inaccuracies. In addition to development level evaluations, we conduct "assurance evaluations" which are our 'arms-length' internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. High level findings are fed back to the model team, but prompt sets are held-out to prevent overfitting and preserve the results' ability to inform decision making. Assurance evaluation results are reported to our Responsibility & Safety Council as part of release review. ### Evaluation Results For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For both text-to-text and image-to-text, and across all model sizes, the model produced minimal policy violations, and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was they included only English language prompts. ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open vision-language models (VLMs) models have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. - Content Creation and Communication - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. - Image Data Extraction: These models can be used to extract, interpret, and summarize visual data for text communications. - Research and Education - Natural Language Processing (NLP) and VLM Research: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field. - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations - Training Data - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. - The scope of the training dataset determines the subject areas the model can handle effectively. - Context and Task Complexity - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). - Language Ambiguity and Nuance - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language. - Factual Accuracy - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. - Common Sense - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: - Bias and Fairness - VLMs trained on large-scale, real-world text and image data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. - Misinformation and Misuse - VLMs can be misused to generate text that is false, misleading, or harmful. - Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. - Transparency and Accountability: - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. - A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: - **Perpetuation of biases**: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. - **Generation of harmful content**: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. - **Misuse for malicious purposes**: Technical limitations and developer and end-user education can help mitigate against malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. - **Privacy violations**: Models were trained on data filtered for removal of certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open vision-language model implementations designed from the ground up for responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. [g3-tech-report]: https://goo.gle/Gemma3Report [rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3 [vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3 [terms]: https://ai.google.dev/gemma/terms [safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/jax-ml/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [sustainability]: https://sustainability.google/operating-sustainably/ [gemini-2-paper]: https://arxiv.org/abs/2312.11805
null
Non_BioNLP
# Gemma 3 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core) **Resources and Technical Documentation**: * [Gemma 3 Technical Report][g3-tech-report] * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma3] **Terms of Use**: [Terms][terms] **Authors**: Google DeepMind ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below, there are some code snippets on how to get quickly started with running the model. First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0. ```sh $ pip install -U transformers ``` Then, copy the snippet from the section that is relevant for your use case. #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline("text-generation", model="google/gemma-3-1b-pt", device="cuda", torch_dtype=torch.bfloat16) output = pipe("Eiffel tower is located in", max_new_tokens=50) ``` #### Running the model on a single / multi GPU ```python import torch from transformers import AutoTokenizer, Gemma3ForCausalLM ckpt = "google/gemma-3-1b-pt" tokenizer = AutoTokenizer.from_pretrained(ckpt) model = Gemma3ForCausalLM.from_pretrained( ckpt, torch_dtype=torch.bfloat16, device_map="auto" ) prompt = "Eiffel tower is located in" model_inputs = tokenizer(prompt, return_tensors="pt").to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=50, do_sample=False) generation = generation[0][input_len:] decoded = tokenizer.decode(generation, skip_special_tokens=True) print(decoded) ``` ### Inputs and outputs - **Input:** - Text string, such as a question, a prompt, or a document to be summarized - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size - **Output:** - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document - Total output context of 8192 tokens ### Citation ```none @article{gemma_2025, title={Gemma 3}, url={https://goo.gle/Gemma3Report}, publisher={Kaggle}, author={Gemma Team}, year={2025} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 14 trillion tokens, the 12B model was trained with 12 trillion tokens, 4B model was trained with 4 trillion tokens and 1B with 2 trillion tokens. Here are the key components: - Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 140 languages. - Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code and understand code-related questions. - Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. - Images: A wide range of images enables the model to perform image analysis and visual data extraction tasks. The combination of these diverse data sources is crucial for training a powerful multimodal model that can handle a wide variety of different tasks and data formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: - CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. - Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. - Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p, TPUv5p and TPUv5e). Training vision-language models (VLMS) requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: - Performance: TPUs are specifically designed to handle the massive computations involved in training VLMs. They can speed up training considerably compared to CPUs. - Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. - Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. - Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. - These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for foundation models, including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."* ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: #### Reasoning and factuality | Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:| | [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 | | [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 | | [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 | | [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 | | [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 | | [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 | | [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 | | [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 | | [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 | | [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 | | [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 | [hellaswag]: https://arxiv.org/abs/1905.07830 [boolq]: https://arxiv.org/abs/1905.10044 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [arc]: https://arxiv.org/abs/1911.01547 [winogrande]: https://arxiv.org/abs/1907.10641 [bbh]: https://paperswithcode.com/dataset/bbh [drop]: https://arxiv.org/abs/1903.00161 #### STEM and code | Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------------|:-------------:|:--------------:|:--------------:| | [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 | | [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 | | [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 | | [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 | | [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 | | [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 | | [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 | | [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 | [mmlu]: https://arxiv.org/abs/2009.03300 [agieval]: https://arxiv.org/abs/2304.06364 [math]: https://arxiv.org/abs/2103.03874 [gsm8k]: https://arxiv.org/abs/2110.14168 [gpqa]: https://arxiv.org/abs/2311.12022 [mbpp]: https://arxiv.org/abs/2108.07732 [humaneval]: https://arxiv.org/abs/2107.03374 #### Multilingual | Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:| | [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 | | [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 | | [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 | | [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 | | [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 | | [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 | | [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 | [mgsm]: https://arxiv.org/abs/2210.03057 [flores]: https://arxiv.org/abs/2106.03193 [xquad]: https://arxiv.org/abs/1910.11856v3 [global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite [wmt24pp]: https://arxiv.org/abs/2502.12404v1 [eclektic]: https://arxiv.org/abs/2502.21228 [indicgenbench]: https://arxiv.org/abs/2404.16816 #### Multimodal | Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |:-------------:|:--------------:|:--------------:| | [COCOcap][coco-cap] | 102 | 111 | 116 | | [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 | | [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 | | [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 | | [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 | | [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 | | [ReMI][remi] | 27.3 | 38.5 | 44.8 | | [AI2D][ai2d] | 63.2 | 75.2 | 79.0 | | [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 | | [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 | | [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 | | [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 | | [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 | | [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 | | [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 | [coco-cap]: https://cocodataset.org/#home [docvqa]: https://www.docvqa.org/ [info-vqa]: https://arxiv.org/abs/2104.12756 [mmmu]: https://arxiv.org/abs/2311.16502 [textvqa]: https://textvqa.org/ [realworldqa]: https://paperswithcode.com/dataset/realworldqa [remi]: https://arxiv.org/html/2406.09175v1 [ai2d]: https://allenai.org/data/diagrams [chartqa]: https://arxiv.org/abs/2203.10244 [vqav2]: https://visualqa.org/index.html [blinkvqa]: https://arxiv.org/abs/2404.12390 [okvqa]: https://okvqa.allenai.org/ [tallyqa]: https://arxiv.org/abs/1810.12440 [ss-vqa]: https://arxiv.org/abs/1908.02660 [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/ ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: - **Child Safety**: Evaluation of text-to-text and image to text prompts covering child safety policies, including child sexual abuse and exploitation. - **Content Safety:** Evaluation of text-to-text and image to text prompts covering safety policies including, harassment, violence and gore, and hate speech. - **Representational Harms**: Evaluation of text-to-text and image to text prompts covering safety policies including bias, stereotyping, and harmful associations or inaccuracies. In addition to development level evaluations, we conduct "assurance evaluations" which are our 'arms-length' internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. High level findings are fed back to the model team, but prompt sets are held-out to prevent overfitting and preserve the results' ability to inform decision making. Assurance evaluation results are reported to our Responsibility & Safety Council as part of release review. ### Evaluation Results For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For both text-to-text and image-to-text, and across all model sizes, the model produced minimal policy violations, and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was they included only English language prompts. ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open vision-language models (VLMs) models have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. - Content Creation and Communication - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. - Image Data Extraction: These models can be used to extract, interpret, and summarize visual data for text communications. - Research and Education - Natural Language Processing (NLP) and VLM Research: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field. - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations - Training Data - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. - The scope of the training dataset determines the subject areas the model can handle effectively. - Context and Task Complexity - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). - Language Ambiguity and Nuance - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language. - Factual Accuracy - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. - Common Sense - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: - Bias and Fairness - VLMs trained on large-scale, real-world text and image data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. - Misinformation and Misuse - VLMs can be misused to generate text that is false, misleading, or harmful. - Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. - Transparency and Accountability: - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. - A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: - **Perpetuation of biases**: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. - **Generation of harmful content**: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. - **Misuse for malicious purposes**: Technical limitations and developer and end-user education can help mitigate against malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. - **Privacy violations**: Models were trained on data filtered for removal of certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open vision-language model implementations designed from the ground up for responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. [g3-tech-report]: https://goo.gle/Gemma3Report [rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3 [vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3 [terms]: https://ai.google.dev/gemma/terms [safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/jax-ml/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [sustainability]: https://sustainability.google/operating-sustainably/ [gemini-2-paper]: https://arxiv.org/abs/2312.11805
{"library_name": "transformers", "license": "gemma", "pipeline_tag": "text-generation", "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
43,032
sinhala-nlp/NSINA-Category-xlmr-base
sinhala-nlp
text-classification
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "si", "dataset:sinhala-nlp/NSINA", "dataset:sinhala-nlp/NSINA-Categories", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-03-19T13:11:57Z
2024-03-19T13:16:30+00:00
9
0
--- datasets: - sinhala-nlp/NSINA - sinhala-nlp/NSINA-Categories language: - si license: cc-by-sa-4.0 --- # Sinhala News Category Prediction This is a text classification task created with the [NSINA dataset](https://github.com/Sinhala-NLP/NSINA). This dataset is also released with the same license as NSINA. Given the news content, the ML models should predict a pre-defined category for the news. ## Data First, for this task, we dropped all the news articles in NSINA 1.0 without a category as some news sources prefer not to categorise them. Next, we identified the top 100 news categories from the available news instances. We grouped possible categories into four main categories: local news, international news, sports news, and business news. To avoid bias, we undersampled the dataset. We only used 10,000 instances from each category, and for the ``Business" category, we used the original number of instances which was 8777 articles. We divided this dataset into a training and test set following a 0.8 split Data can be loaded into pandas dataframes using the following code. ```python from datasets import Dataset from datasets import load_dataset train = Dataset.to_pandas(load_dataset('sinhala-nlp/NSINA-Categories', split='train')) test = Dataset.to_pandas(load_dataset('sinhala-nlp/NSINA-Categories', split='test')) ``` ## Citation If you are using the dataset or the models, please cite the following paper. ~~~ @inproceedings{Nsina2024, author={Hettiarachchi, Hansi and Premasiri, Damith and Uyangodage, Lasitha and Ranasinghe, Tharindu}, title={{NSINA: A News Corpus for Sinhala}}, booktitle={The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)}, year={2024}, month={May}, } ~~~
null
Non_BioNLP
# Sinhala News Category Prediction This is a text classification task created with the [NSINA dataset](https://github.com/Sinhala-NLP/NSINA). This dataset is also released with the same license as NSINA. Given the news content, the ML models should predict a pre-defined category for the news. ## Data First, for this task, we dropped all the news articles in NSINA 1.0 without a category as some news sources prefer not to categorise them. Next, we identified the top 100 news categories from the available news instances. We grouped possible categories into four main categories: local news, international news, sports news, and business news. To avoid bias, we undersampled the dataset. We only used 10,000 instances from each category, and for the ``Business" category, we used the original number of instances which was 8777 articles. We divided this dataset into a training and test set following a 0.8 split Data can be loaded into pandas dataframes using the following code. ```python from datasets import Dataset from datasets import load_dataset train = Dataset.to_pandas(load_dataset('sinhala-nlp/NSINA-Categories', split='train')) test = Dataset.to_pandas(load_dataset('sinhala-nlp/NSINA-Categories', split='test')) ``` ## Citation If you are using the dataset or the models, please cite the following paper. ~~~ @inproceedings{Nsina2024, author={Hettiarachchi, Hansi and Premasiri, Damith and Uyangodage, Lasitha and Ranasinghe, Tharindu}, title={{NSINA: A News Corpus for Sinhala}}, booktitle={The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)}, year={2024}, month={May}, } ~~~
{"datasets": ["sinhala-nlp/NSINA", "sinhala-nlp/NSINA-Categories"], "language": ["si"], "license": "cc-by-sa-4.0"}
task
[ "TEXT_CLASSIFICATION" ]
43,033
p1atdev/mt5-base-xlsum-ja-v1.1
p1atdev
summarization
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "summarization", "ja", "dataset:csebuetnlp/xlsum", "base_model:google/mt5-base", "base_model:finetune:google/mt5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-08T09:08:49Z
2023-10-08T13:02:03+00:00
21
2
--- base_model: google/mt5-base datasets: - csebuetnlp/xlsum language: - ja license: apache-2.0 metrics: - rouge tags: - generated_from_trainer - mt5 - summarization model-index: - name: mt5-base-xlsum-ja-v1.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-xlsum-ja-v1.1 This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the xlsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-xlsum-ja-v1.1 This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the xlsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
{"base_model": "google/mt5-base", "datasets": ["csebuetnlp/xlsum"], "language": ["ja"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer", "mt5", "summarization"], "model-index": [{"name": "mt5-base-xlsum-ja-v1.1", "results": []}]}
task
[ "SUMMARIZATION" ]
43,034
akerkar/mt5-small-finetuned-amazon-en-es
akerkar
null
[ "tensorboard", "safetensors", "mt5", "region:us" ]
2024-12-03T00:09:19Z
2024-12-05T04:22:00+00:00
5
0
--- {} --- ## Model Card: MT5-Small-Finetuned-Amazon-EN-ES # Overview this model is based on Google's MT5-small architecture and is fine-tuned for summarizing customer reviews from the Amazon dataset in both English and Spanish. The model was fine-tuned on the "amazon_review_multi" dataset, which includes multilingual product reviews. Special focus was given to English (en) and Spanish (es) reviews, particularly in the "Books" category. # Training Details Base Model: Google's MT5-small Training Framework: Hugging Face Transformers library. Fine-Tuning Steps: Tokenized the review text and titles. Used ROUGE scores to evaluate summarization quality during training. Trained for multiple epochs with a batch size of 8. # Perform The model performs well on short to medium-length reviews but may struggle with very brief less than 10 words or overly lengthy more than 512 words inputs. # How to Use To use the model for summarization, load it with the Hugging Face pipeline: * summarizer = pipeline("summarization", model="akerkar/mt5-small-finetuned-amazon-en-es") * text = "This book is an amazing read, with great characters and a thrilling plot." * summary = summarizer(text, max_length=20, min_length=5)[0]['summary_text'] * print(summary)
null
Non_BioNLP
## Model Card: MT5-Small-Finetuned-Amazon-EN-ES # Overview this model is based on Google's MT5-small architecture and is fine-tuned for summarizing customer reviews from the Amazon dataset in both English and Spanish. The model was fine-tuned on the "amazon_review_multi" dataset, which includes multilingual product reviews. Special focus was given to English (en) and Spanish (es) reviews, particularly in the "Books" category. # Training Details Base Model: Google's MT5-small Training Framework: Hugging Face Transformers library. Fine-Tuning Steps: Tokenized the review text and titles. Used ROUGE scores to evaluate summarization quality during training. Trained for multiple epochs with a batch size of 8. # Perform The model performs well on short to medium-length reviews but may struggle with very brief less than 10 words or overly lengthy more than 512 words inputs. # How to Use To use the model for summarization, load it with the Hugging Face pipeline: * summarizer = pipeline("summarization", model="akerkar/mt5-small-finetuned-amazon-en-es") * text = "This book is an amazing read, with great characters and a thrilling plot." * summary = summarizer(text, max_length=20, min_length=5)[0]['summary_text'] * print(summary)
{}
task
[ "SUMMARIZATION" ]
43,035
naver/pisco-mistral
naver
null
[ "COCOM", "custom_code", "en", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:cc-by-4.0", "region:us" ]
2024-12-21T23:09:17Z
2025-03-13T13:05:18+00:00
153
3
--- base_model: - mistralai/Mistral-7B-Instruct-v0.2 language: - en license: cc-by-4.0 --- # Model Card for PISCO-mistral PISCO is a context compression model to be used for efficient inference when doing Retrieval Augmented Generation (RAG), particularly optimized for question answering. PISCO contains two adapters around a backbone LLM: - An encoder adapter trained to perform compression of input contexts (the retrieved documents in RAG) into a set of 8 embedding vectors - A decoder adapter, which can take as input sets of embeddings vectors from documents and a query and provide an answer With a compressed collection of documents to retrieve from, inference becomes about x5 faster. PISCO models have very small loss in accuracy on a wide set of QA benchmarks (0-3%). *Developed by*: Naver Labs Europe *License*: [CC BY-NC 4.0.](https://creativecommons.org/licenses/by-nc/4.0/) * *Model*: `Pisco-mistral` * *Backbone model*: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * *Model size*: 7.33 billion parameters * *Compression rate*: x16: each document (of size up to 128 tokens) is converted into 8 embedding vectors. ## Usage ```python from transformers import AutoModel pisco = AutoModel.from_pretrained('naver/pisco-mistral').to('cuda') # Example documents and question: documents = [ [ "Weldenia is a monotypic genus of flowering plant in the family Commelinaceae, first describ ed in 1829. It has one single species: Weldenia candida, which grows originally in Mexico and Guatemala.", "Hagsatera is a genus of flowering plants from the orchid family, Orchidaceae. There are two known species, native to Mexico and Guatemala", "Alsobia is a genus of flowering plants in the family Gesneriaceae, native to Mexico, Guatemala and Costa Rica. The two species are succulent, stoloniferous herbs and were previously included in the genus \"Episcia\". Recent molecular studies have supported the separation of \"Alsobia\" from \"Episcia\"" ] ] questions = ["Which genus of plant grows originally in Mexico and Guatemala, Phylica or Weldenia?"] # End-to-end usage out = pisco.generate_from_text(questions=questions, documents=documents, max_new_tokens=64) print('Generated answer', out) # Document compression: embeddings = pisco.compress_documents(documents=documents[0]) # Generation from compressed documents: out = pisco.generate_from_compressed_documents_and_questions(questions=questions, compressed_documents=embeddings) ``` The recommended usage is to provide documents cropped to about 128 tokens, which is common practice when doing RAG. ## Model features * **PISCO enables high accuracy responses from the compressed documents** * **PISCO is robust to various domains** We tested its compression/decoding abilities on various sets of data. * **PISCO enables x5 faster generation** when the collection documents to retrieve from is pre-compressed. ## License This work is licensed under CC BY-NC 4.0. ## Cite ``` TODO ``` ## Acknowledgements Model trained at [Naver Labs Europe](https://europe.naverlabs.com/) Team: * [Maxime LOUIS](https://europe.naverlabs.com/people_user_naverlabs/maxime-louis/) * [Hervé Dejean](https://europe.naverlabs.com/people_user_naverlabs/herve-dejean/) * [Stéphane Clinchant](https://europe.naverlabs.com/people_user_naverlabs/st%C3%A9phane-clinchant/)
null
Non_BioNLP
# Model Card for PISCO-mistral PISCO is a context compression model to be used for efficient inference when doing Retrieval Augmented Generation (RAG), particularly optimized for question answering. PISCO contains two adapters around a backbone LLM: - An encoder adapter trained to perform compression of input contexts (the retrieved documents in RAG) into a set of 8 embedding vectors - A decoder adapter, which can take as input sets of embeddings vectors from documents and a query and provide an answer With a compressed collection of documents to retrieve from, inference becomes about x5 faster. PISCO models have very small loss in accuracy on a wide set of QA benchmarks (0-3%). *Developed by*: Naver Labs Europe *License*: [CC BY-NC 4.0.](https://creativecommons.org/licenses/by-nc/4.0/) * *Model*: `Pisco-mistral` * *Backbone model*: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * *Model size*: 7.33 billion parameters * *Compression rate*: x16: each document (of size up to 128 tokens) is converted into 8 embedding vectors. ## Usage ```python from transformers import AutoModel pisco = AutoModel.from_pretrained('naver/pisco-mistral').to('cuda') # Example documents and question: documents = [ [ "Weldenia is a monotypic genus of flowering plant in the family Commelinaceae, first describ ed in 1829. It has one single species: Weldenia candida, which grows originally in Mexico and Guatemala.", "Hagsatera is a genus of flowering plants from the orchid family, Orchidaceae. There are two known species, native to Mexico and Guatemala", "Alsobia is a genus of flowering plants in the family Gesneriaceae, native to Mexico, Guatemala and Costa Rica. The two species are succulent, stoloniferous herbs and were previously included in the genus \"Episcia\". Recent molecular studies have supported the separation of \"Alsobia\" from \"Episcia\"" ] ] questions = ["Which genus of plant grows originally in Mexico and Guatemala, Phylica or Weldenia?"] # End-to-end usage out = pisco.generate_from_text(questions=questions, documents=documents, max_new_tokens=64) print('Generated answer', out) # Document compression: embeddings = pisco.compress_documents(documents=documents[0]) # Generation from compressed documents: out = pisco.generate_from_compressed_documents_and_questions(questions=questions, compressed_documents=embeddings) ``` The recommended usage is to provide documents cropped to about 128 tokens, which is common practice when doing RAG. ## Model features * **PISCO enables high accuracy responses from the compressed documents** * **PISCO is robust to various domains** We tested its compression/decoding abilities on various sets of data. * **PISCO enables x5 faster generation** when the collection documents to retrieve from is pre-compressed. ## License This work is licensed under CC BY-NC 4.0. ## Cite ``` TODO ``` ## Acknowledgements Model trained at [Naver Labs Europe](https://europe.naverlabs.com/) Team: * [Maxime LOUIS](https://europe.naverlabs.com/people_user_naverlabs/maxime-louis/) * [Hervé Dejean](https://europe.naverlabs.com/people_user_naverlabs/herve-dejean/) * [Stéphane Clinchant](https://europe.naverlabs.com/people_user_naverlabs/st%C3%A9phane-clinchant/)
{"base_model": ["mistralai/Mistral-7B-Instruct-v0.2"], "language": ["en"], "license": "cc-by-4.0"}
task
[ "QUESTION_ANSWERING" ]
43,036
asafaya/kanarya-750m
asafaya
text-generation
[ "transformers", "pytorch", "jax", "safetensors", "gptj", "text-generation", "tr", "dataset:oscar", "dataset:mc4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-01-17T11:24:49Z
2024-03-17T14:10:37+00:00
3,063
9
--- datasets: - oscar - mc4 language: - tr license: apache-2.0 pipeline_tag: text-generation widget: - text: 'Benim adım Zeynep, ve en sevdiğim kitabın adı:' example_title: Benim adım Zeynep - text: Bugünkü yemeğimiz example_title: Bugünkü yemeğimiz --- # Kanarya-750M: Turkish Language Model <img src="https://asafaya.me/images/kanarya.webp" alt="Kanarya Logo" style="width:600px;"/> **Kanarya** is a pre-trained Turkish GPT-J 750M model. Released as part of [Turkish Data Depository](https://tdd.ai/) efforts, the Kanarya family has two versions (Kanarya-2B, Kanarya-0.7B). Kanarya-2B is the larger version and Kanarya-0.7B is the smaller version. Both models are trained on a large-scale Turkish text corpus, filtered from OSCAR and mC4 datasets. The training data is collected from various sources, including news, articles, and websites, to create a diverse and high-quality dataset. The models are trained using a JAX/Flax implementation of the [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax) architecture. The models are only pre-trained and are intended to be fine-tuned on a wide range of Turkish NLP tasks. ## Model Details - Model Name: Kanarya-750M - Model Size: 750M parameters - Training Data: OSCAR, mC4 - Language: Turkish - Layers: 12 - Hidden Size: 2048 - Number of Heads: 16 - Context Size: 2048 - Positional Embeddings: Rotary - Vocabulary Size: 32,768 ## Intended Use This model is only pre-trained on Turkish text data and is intended to be fine-tuned on a wide range of Turkish NLP tasks. The model can be used for various Turkish NLP tasks, including text generation, translation, summarization, and other Turkish NLP tasks. This model is not intended to be used for any downstream tasks without fine-tuning. ## Limitations and Ethical Considerations The model is trained on a diverse and high-quality Turkish text corpus, but it may still generate toxic, biased, or unethical content. It is highly recommended to use the model responsibly and make sure that the generated content is appropriate for the use case. Please use the model responsibly and report any issues. ## License: Apache 2.0 The model is licensed under the Apache 2.0 License. It is free to use for any purpose, including commercial use. We encourage users to contribute to the model and report any issues. However, the model is provided "as is" without warranty of any kind. ## Citation If you use the model, please cite the following paper: ```bibtex @inproceedings{safaya-etal-2022-mukayese, title = "Mukayese: {T}urkish {NLP} Strikes Back", author = "Safaya, Ali and Kurtulu{\c{s}}, Emirhan and Goktogan, Arda and Yuret, Deniz", editor = "Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline", booktitle = "Findings of the Association for Computational Linguistics: ACL 2022", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-acl.69", doi = "10.18653/v1/2022.findings-acl.69", pages = "846--863", } ``` ## Acknowledgments During this work, Ali Safaya was supported by [KUIS AI Center](https://ai.ku.edu.tr/) fellowship. Moreover, the pre-training of these models were performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center ([TRUBA](https://www.truba.gov.tr/index.php/en/main-page/) resources).
null
Non_BioNLP
# Kanarya-750M: Turkish Language Model <img src="https://asafaya.me/images/kanarya.webp" alt="Kanarya Logo" style="width:600px;"/> **Kanarya** is a pre-trained Turkish GPT-J 750M model. Released as part of [Turkish Data Depository](https://tdd.ai/) efforts, the Kanarya family has two versions (Kanarya-2B, Kanarya-0.7B). Kanarya-2B is the larger version and Kanarya-0.7B is the smaller version. Both models are trained on a large-scale Turkish text corpus, filtered from OSCAR and mC4 datasets. The training data is collected from various sources, including news, articles, and websites, to create a diverse and high-quality dataset. The models are trained using a JAX/Flax implementation of the [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax) architecture. The models are only pre-trained and are intended to be fine-tuned on a wide range of Turkish NLP tasks. ## Model Details - Model Name: Kanarya-750M - Model Size: 750M parameters - Training Data: OSCAR, mC4 - Language: Turkish - Layers: 12 - Hidden Size: 2048 - Number of Heads: 16 - Context Size: 2048 - Positional Embeddings: Rotary - Vocabulary Size: 32,768 ## Intended Use This model is only pre-trained on Turkish text data and is intended to be fine-tuned on a wide range of Turkish NLP tasks. The model can be used for various Turkish NLP tasks, including text generation, translation, summarization, and other Turkish NLP tasks. This model is not intended to be used for any downstream tasks without fine-tuning. ## Limitations and Ethical Considerations The model is trained on a diverse and high-quality Turkish text corpus, but it may still generate toxic, biased, or unethical content. It is highly recommended to use the model responsibly and make sure that the generated content is appropriate for the use case. Please use the model responsibly and report any issues. ## License: Apache 2.0 The model is licensed under the Apache 2.0 License. It is free to use for any purpose, including commercial use. We encourage users to contribute to the model and report any issues. However, the model is provided "as is" without warranty of any kind. ## Citation If you use the model, please cite the following paper: ```bibtex @inproceedings{safaya-etal-2022-mukayese, title = "Mukayese: {T}urkish {NLP} Strikes Back", author = "Safaya, Ali and Kurtulu{\c{s}}, Emirhan and Goktogan, Arda and Yuret, Deniz", editor = "Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline", booktitle = "Findings of the Association for Computational Linguistics: ACL 2022", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-acl.69", doi = "10.18653/v1/2022.findings-acl.69", pages = "846--863", } ``` ## Acknowledgments During this work, Ali Safaya was supported by [KUIS AI Center](https://ai.ku.edu.tr/) fellowship. Moreover, the pre-training of these models were performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center ([TRUBA](https://www.truba.gov.tr/index.php/en/main-page/) resources).
{"datasets": ["oscar", "mc4"], "language": ["tr"], "license": "apache-2.0", "pipeline_tag": "text-generation", "widget": [{"text": "Benim adım Zeynep, ve en sevdiğim kitabın adı:", "example_title": "Benim adım Zeynep"}, {"text": "Bugünkü yemeğimiz", "example_title": "Bugünkü yemeğimiz"}]}
task
[ "TRANSLATION", "SUMMARIZATION" ]
43,037
kzorluoglu/Chameleon-Code_Explation_Gemma29b-v3
kzorluoglu
question-answering
[ "safetensors", "chameleon", "shop", "esono", "question-answering", "en", "base_model:unsloth/gemma-2-9b-bnb-4bit", "base_model:finetune:unsloth/gemma-2-9b-bnb-4bit", "license:apache-2.0", "region:us" ]
2024-10-17T09:57:51Z
2024-10-17T10:16:43+00:00
0
0
--- base_model: - unsloth/gemma-2-9b-bnb-4bit language: - en license: apache-2.0 pipeline_tag: question-answering tags: - chameleon - shop - esono --- # Chameleon-Code_Explation_Gemma29b-v3 - Draft ** Running Project - Draft* Chameleon-Code_Explation_Gemma29b-v3 is a fine-tuned version of the Unsloth Gemma model, optimized for efficient inference using 4-bit quantization. This model has been trained to explain various classes within the Chameleon CMS system, which is a combined shop software and content management system. ## Table of Contents - [Model Overview](#model-overview) - [Getting Started](#getting-started) - [Available Bundles](#available-bundles) - [Available Classes](#available-classes) - [Training Details](#training-details) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Contact](#contact) ## Model Overview Chameleon-Code_Explation_Gemma29b-v3 is a transformer-based language model fine-tuned to understand and explain the structure and components of the Chameleon CMS system. The model can provide non-technical descriptions of CMS components using efficient 4-bit quantization to minimize resource requirements. - **Base Model:** Google Gemma 2 9B - **Library Used:** PEFT - **Language:** English - **License:** Apache 2.0 - **Developer:** kzorluoglu - **Model Source:** [Unsloth GitHub](https://github.com/unslothai/unsloth) ## Getting Started To use the model, you can start by installing the necessary dependencies and downloading the model using the Hugging Face Transformers library. ### Installation Install the `transformers` library: ```sh pip install transformers ``` ### Usage Here is an example code snippet to get started: ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("kzorluoglu/Chameleon-Code_Explation_Gemma29b-v2") tokenizer = AutoTokenizer.from_pretrained("kzorluoglu/Chameleon-Code_Explation_Gemma29b-v2") print("Chatbot is ready!") print("Type 'exit' to end the chat.") print("Ask like this for a good answer:") print("Explain the XYZ class. I don't want to see code, I want only the explanation.") print("Explain the ABC bundle. I don't want to see code, I want only the explanation.") while True: question = input("You: ") if question.lower() == 'exit': print("Ending the chat. Goodbye!") break instruction = f"{question}" inputs = tokenizer([instruction], return_tensors="pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens=256, use_cache=True) generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True) print(f"AI: {generated_text[0]}") ``` ## Available Bundles In addition to individual classes, the model has also been trained to explain a wide range of bundles within the Chameleon CMS system, including but not limited to: * AmazonPaymentBundle * ShopOrderStatusBundle * ShopWishlistBundle * ImageHotspotBundle * ShopArticlePreorderBundle * ShopCurrencyBundle * ShopPaymentIPNBundle * ShopAffiliateBundle * ShopBundle * ShopRatingServiceBundle * SearchBundle * ShopArticleReviewBundle * ShopListFilterBundle * ShopPaymentTransactionBundle * ShopArticleDetailPagingBundle * CmsEventBundle * MediaManagerBundle * ViewRendererBundle * CmsCounterBundle * UpdateCounterMigrationBundle * CmsStringUtilitiesBundle * ExternalTrackerBundle * CookieConsentBundle * CmsResultCacheBundle * CmsActionPluginBundle * CmsClassManagerBundle * PkgCoreBundle * ExtranetBundle * CoreBundle * CmsNavigationBundle * JavaScriptMinificationBundle * TrackViewsBundle * CmsTextFieldBundle * CmsCacheBundle * AutoclassesBundle * CmsChangeLogBundle * DatabaseMigrationBundle * Csv2SqlBundle * NewsletterBundle * DistributionBundle * CmsFileManagerBundle * SecurityBundle * SnippetRendererBundle * AtomicLockBundle * TwigDebugBundle To get the explanation for a specific class, use the prompt format: ```text Explain the XYZ bundle. I don't want to see code, I want only the explanation. ``` ## Available Classes The model has been trained to provide explanations for a wide range of classes in the Chameleon CMS system, including but not limited to: * MTFeedbackErrors * AmazonDataConverter * AmazonPaymentConfigFactory * AmazonReferenceIdManager * WebServerExample * OffAmazonPaymentsNotifications_Samples_NotificationSample * OffAmazonPaymentsNotifications_Samples_IpnLogFile * SnsMessageValidator * IpnNotificationParser * XmlNotificationParser * Message * SnsMessageParser * OffAmazonPayments_Model * OffAmazonPaymentsService_Model * OffAmazonPaymentsService_Regions * OffAmazonPaymentsService_RegionSpecificProperties * OffAmazonPaymentsService_Environments * OffAmazonPaymentsService_MerchantValues * OffAmazonPaymentsService_Model_ResponseHeaderMetadata * CancellationExample * CLIExample * SimpleCheckoutExample * AddressConsentSample * SplitShipmentsCheckoutExample * RefundExample * AutomaticPaymentsSimpleCheckoutExample * ShippingAndTaxCostHelper * CountryRates * AmazonPaymentFixturesFactory * ConfigValidator * AmazonPaymentIdMapping * TPkgShopOrderStatusManagerEndPoint * PkgShopWishlistMailHistory * PkgShopWishlistArticle * PkgShopWishlistOrderItem * PkgShopWishlist * PkgImageHotspotItemMarker * PkgImageHotspot * PkgImageHotspotItemSpot * PkgImageHotspotItem * UpdateProductStockListener * ShopCurrencyEvents * TPkgShopPaymentIPNManager * TPkgShopPaymentIPNRequest * TPkgShopPaymentIPN_TransactionDetails * PkgShopPaymentIpnMessage * PkgShopPaymentIpnTrigger * PkgShopPaymentIpnStatus * PkgShopPaymentIpnMessageTrigger * PkgShopAffiliate * PkgShopAffiliateParameter * ShopEvents * TShopCategoryTree * MTShopArticleListResponse * TShopBreadcrumbItem * TShopBasketArticleCustomDataValidationError * TdbShopArticle * ShopPaymentHandlerMock * BasketVariableReplacer * UpdateProductStatisticsListener * UpdateVariantParentStockListener * UpdateProductStockListener * OrderPaymentInfo * ShopPaymentConfigRawValue * ShopSystemPage * PkgShopArticlePreorder * Shop * ShopShippingGroupHandler * ShopShippingGroup * ShopWrappingCard * ShopCategoryTab * PkgShopListfilterItem * PkgShopStatisticGroup * ShopVat * PkgShopFooterCategory * PkgShopListfilterModuleConfig * ShopSuggestArticleLog * ShopBankAccount * ShopCategory * ShopDiscount * PkgShopPrimaryNavi * ShopShippingType * ShopWrapping * PkgShopListfilterItemType * PkgShopCurrency * PkgShopListfilter * ShopUserNoticeList * ShopSystemInfoModuleConfig * PkgShopArticleReviewModuleShopArticleReviewConfiguration * ShopSystemInfo * ShopPaymentHandlerGroup * ShopPaymentMethod * ShopPaymentHandlerGroupConfig * ShopPaymentHandler * ShopPaymentHandlerParameter * ShopArticleCatalogConfDefaultOrder * ShopStockMessageTrigger * ShopVariantType * ShopArticleImageSize * ShopAttribute * ShopBundleArticle * ShopArticleDocumentType * ShopArticleMarker * ShopArticle * ShopArticleImage * ShopArticleReview * ShopArticleCatalogConf * ShopArticleContributor * ShopArticleDocument * ShopVariantSet * ShopStockMessage * ShopArticleType * ShopVariantDisplayHandler * ShopUnitOfMeasurement * ShopArticleStats * ShopAttributeValue * ShopVariantTypeHandler * ShopContributorType * ShopContributor * ShopVariantTypeValue * ShopManufacturerModuleConf * ShopArticleStock * ShopArticlePreviewImage * ShopManufacturer * ShopArticleGroup * ShopVoucherUse * ShopVoucherSeriesSponsor * ShopUserPurchasedVoucher * ShopVoucherSeries * ShopVoucher * ShopModuleArticleListArticle * ShopModuleArticlelistOrderby * ShopModuleArticleList * ShopModuleArticleListFilter * ShopOrderBasket * ShopOrder * ShopOrderStatusCode * ShopOrderExportLog * ShopOrderStatusItem * ShopOrderStatus * ShopOrderVat * ShopOrderShippingGroupParameter * ShopOrderItem * ShopOrderStep * ShopOrderBundleArticle * ShopOrderPaymentMethodParameter * ShopOrderDiscount * SearchSuggestController * XmlStreamer * TShopAuskunftAPI * PkgShopRatingServiceRating * PkgShopRatingServiceTeaserCnf * PkgShopRatingServiceWidgetConfig * PkgShopRatingServiceHistory * PkgShopRatingService * TShopModuleArticleListFilterAutoParent * TdbShopModuleArticleListFilter * SearchResultTriggerCmsObserverListener * SearchResultLoggerListener * ShopSearchFieldWeight * ShopSearchCloudWord * ShopSearchCache * ShopSearchQuery * ShopSearchCacheItem * ShopSearchIndexer * ShopSearchKeywordArticle * ShopSearchLog * ShopSearchIgnoreWord * AuthorDisplayConstants * Slider * Select * DbAdapter * TPkgShopPaymentTransactionItemDataEndPoint * TPkgShopPaymentTransactionManagerEndPoint * TPkgShopPaymentTransactionContextEndPoint * PkgShopPaymentTransactionPosition * PkgShopPaymentTransactionType * PkgShopPaymentTransaction * AddParametersToUrlService * CacheUtil * TPkgCmsEventManager * DeleteMediaConnectionsListener * ViewRenderer * TPkgViewRendererMapper_ListHandlerData * TPkgViewRendererLessCompiler * TPkgViewRendererSnippetResourceCollectorEndPoint * TPkgViewRendererSnippetDummyData * TPkgViewRendererSnippetGalleryItem * MapperVirtualSourceObject * User * Article * GenerateCssController * CmsCounter * UpdateCounterMigrator * UpdateCounterVersionMigrator * MigrationConfigGenerator * MigrateCountersListener * TPkgCmsStringUtilities_HTML * TPkgCmsStringUtilities_ReadConfig * TPkgCmsStringUtilities_CleanPHPCode * TPkgCmsStringUtilities_iOSMailURLEncoder * TPkgCmsStringUtilities_PathUtils * SearchResultTriggerCmsObserverListener * CategoryTriggerCmsObserverListener * AddCookieConsentIncludesListener * TPkgCmsResultCacheManager * TPkgCmsActionPluginManager * AbstractPkgActionPlugin * TPkgCmsVirtualClassManager * TPkgCmsCoreSendToHost * TPkgCoreUtility_Network * TPkgCmsCoreLayoutPluginManager * TPkgCmsCoreParameterContainerParameterDefinition * AbstractPkgCmsCoreParameterContainer * ExtranetEvents * ExtranetUserConstants * RefreshAuthenticityTokenListener * RehashPasswordListener * DelayFailedLoginAttemptListener * LoginByTokenController * ServiceLocator * CoreEvents * CronJobScheduleDataModel * chameleon * TableConfigurationDataModel * CmsMasterPagdef * DownloadLinkDataModel * PagePath * File * Error * UploaderController * UploaderParametersDataModel * UploadedFileDataModel * StripVirtualFieldsFromQuery * CronJobThatDoesNotExtendTCMSCronJob * HtmlHelper * TGlobalBase * TModuleLoader * TModelBase * MTCustomListCoreConfig * TTemplateTools * TCMRenderMediaTreeSelectBox * TCMSRegistry * TCMSSmartURL * TCMSTidyHTMLCleaner * TCMSLockManager * TCMSURLHistory * TCMSRssHandler * TCMSRender * TPkgDependencyInjection * TCMSLogChange * TCMSViewPortManager * TCMSRenderSeoPattern * TCMSPageDefinitionFile * TCMSRenderDocumentTreeSelectBox * FalseValueCacheEntry * TCMSMemcache * TCMSTreeWidget * TCMSTableToClass_MockRecord * TCMSTableToClass * TCacheManagerRuntimeCache * TViewParser * TCMSSmartURLData * TCMSSmartURLHandler * TCMSParameterMapperEndPoint * TCMSMenuItem * TCMSSpellcheck * MTFeedbackErrorsCore * TCMSErrorHandler * MySqlLegacySupport * TCMSFile * TDbChangeLogManagerForModules * TCMSTableEditorManager * TCMSTableEditorEndPoint * TCMSTableEditorMenuItem * TCMSMessageManager * TCMSImageEndpoint * THTMLTable * THTMLTableColumn * HtmlParser * TGoogleMapMarker * TGoogleOverviewMap * TGoogleMapEndPoint * TGoogleMapIcon * TCMSBreadcrumbNavi * TCMSBreadcrumbNaviItem * TGroupTableField * TGroupTable * TGroupTableStyle * TPkgRunFrontendActionStatus * TCMSGroupedStatistics * TCMSGroupedStatisticsGroup * TCCustomNavigation * SQLParser * SQLParenthesesParser * imageMagick * antiSpam * TPkgCmsSessionWakeUpService * TCMSListManagerEndPoint * TCMSUserInput * TCMSUserInputFilter_BaseClass * TCMSUpdateManager * TTools * CheckPortalDomainListener * CaseInsensitivePortalExceptionListener * TemplateEngineAccessListener * AddAntispamIncludesListener * BackendBreadcrumbListener * AddControllerIncludesListener * CleanupBreadcrumbAfterDeleteListener * RedirectToSecurePageListener * ClearRoutingCacheListener * InitializeRequestListener * ReleaseOldLocksOnLoginListener * AddGlobalHeadIncludesListener * MigrateSessionListener * AddModuleIncludesListener * ChangeChameleonObjectsLocaleListener * AddBackendToasterMessageListener * NoCacheForSecurePageListener * RehashBackendUserPasswordListener * AllowEmbeddingForDifferentDomainListener * AddJqueryIncludeListener * BackendTreeNodeFactory * RequestInitializer * FieldTranslationUtil * UrlUtil * MltFieldUtil * UrlNormalizationUtil * AbstractDatabaseAccessLayer * TranslationConstants * JsTranslationController * ModuleIconMigrator * MainMenuMigrator * IconMapping * MenuItem * MenuCategory * ChameleonControllerEvents * ImageNotFoundController * BackendAccessCheck * CsrfTokenManagerFactory * Migrator * ComposerData * ComposerJsonModifier * AbstractPkgCmsNavigationNode * JavaScriptMinifyEventListener * ImageCropPresetDataModel * ImageCropDataModel * ImageDataModel * CmsMediaDataModel * TPkgTrackObjectViews * TrackViewsListener * TCMSTextFieldEndPoint * TdbCmsLanguage * RequestListener * AutoClassLoader * MappingDataModel * DataModelParts * name * MediaItemChainUsageFinder * SortColumnCollection * MediaItemUsageChainDeleteService * MediaManagerExtensionCollection * MediaManagerListResult * MediaManagerListRequest * AccessRightsModel * MediaManagerListState * MediaTreeDataModel * MediaItemUsageDataModel * MediaTreeNodeDataModel * MediaItemDataModel * JavascriptPluginRenderedContent * JavascriptPluginMessage * JavascriptPluginConfigurationState * JavascriptPluginConfigurationUrls * JavascriptPluginConfiguration * MediaTreeNodeJsonObject * TCMSChangeLogArchiver * TCMSChangeLogFormatter * SnippetChainModifier * MigrationRecorderStateHandler * MigrationRecorder * QueryWriter * DataModelConverter * TPkgCsv2SqlManager * NewsletterPostProcessorListener * NewsletterUserDataModel * PostUpdateVersionCheck * ChameleonVersion * MatchLevel * InitialBackendUserCreator * TPkgCmsFileManager_Utilities * RedirectOnPendingUpdatesEventListener * SecurityHelperAccess * CmsUserRoleConstants * CmsPermissionAttributeConstants * CmsUserSSOModel * TPkgSnippetRenderer_PregCallbackHelper * TPkgSnippetRenderer_ResourceHelper * TPkgSnippetRendererFilter * AtomicLock * ViewRendererPostRenderListener * LogChangeDataModel * MigrationDataModel * MigrationQueryData * MigrationRecorderConstants * QueryConstants * DatabaseMigrationConstants * MigrationDataModelReducer * To get the explanation for a specific class, use the prompt format: ```text Explain the XYZ class. I don't want to see code, I want only the explanation. ``` ## Training Details ### Training Data The model was trained on data from the following sources: - [Chameleon Shop System](https://github.com/chameleon-system/chameleon-shop) - [Chameleon System](https://github.com/chameleon-system/chameleon-system) - [Chameleon Base](https://github.com/chameleon-system/chameleon-base) ### Training Procedure - **Training Type:** Mixed precision (fp16) with 4-bit quantization - **Prompt Format:** The model is trained to respond to prompts such as: ```text Explain the XYZ class. I don't want to see code, I want only the explanation. Explain the ABC bundle. I don't want to see code, I want only the explanation. ``` ## Evaluation The model was evaluated using specific Chameleon CMS classes to ensure accurate and useful explanations. - **Metrics Used:** - **Perplexity**: Evaluated for language modeling capabilities. - **Accuracy**: Assessed based on how well the model explains Chameleon CMS components. ### Results The model demonstrated good accuracy in explaining the components of Chameleon CMS, providing clear and detailed answers. ## Environmental Impact - **Hardware Type:** Tesla T4 GPU - **Training Hours:** 10 hours - **Cloud Provider:** Google Cloud (US Central) - **Estimated Carbon Emission:** 5 kg CO2eq ## Technical Specifications - **Model Architecture:** Transformer-based model with 4-bit quantization. - **Compute Infrastructure:** - Hardware: Tesla T4 GPU - Software: PyTorch 2.4.1, CUDA 12.1 ## Contact For any questions or issues, you can reach out to: - **Email:** [[email protected]](mailto:[email protected]) ## License The model is available under the Apache 2.0 License. ## Citation If you use this model, please cite it as follows: ```bibtex @misc{kzorluoglu2024chameleon, author = {kzorluoglu}, title = {Chameleon-Code_Explation_Gemma29b-v3}, year = {2024}, url = {https://github.com/kzorluoglu}, note = {Apache 2.0 License} } ```
null
TBD
# Chameleon-Code_Explation_Gemma29b-v3 - Draft ** Running Project - Draft* Chameleon-Code_Explation_Gemma29b-v3 is a fine-tuned version of the Unsloth Gemma model, optimized for efficient inference using 4-bit quantization. This model has been trained to explain various classes within the Chameleon CMS system, which is a combined shop software and content management system. ## Table of Contents - [Model Overview](#model-overview) - [Getting Started](#getting-started) - [Available Bundles](#available-bundles) - [Available Classes](#available-classes) - [Training Details](#training-details) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Contact](#contact) ## Model Overview Chameleon-Code_Explation_Gemma29b-v3 is a transformer-based language model fine-tuned to understand and explain the structure and components of the Chameleon CMS system. The model can provide non-technical descriptions of CMS components using efficient 4-bit quantization to minimize resource requirements. - **Base Model:** Google Gemma 2 9B - **Library Used:** PEFT - **Language:** English - **License:** Apache 2.0 - **Developer:** kzorluoglu - **Model Source:** [Unsloth GitHub](https://github.com/unslothai/unsloth) ## Getting Started To use the model, you can start by installing the necessary dependencies and downloading the model using the Hugging Face Transformers library. ### Installation Install the `transformers` library: ```sh pip install transformers ``` ### Usage Here is an example code snippet to get started: ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("kzorluoglu/Chameleon-Code_Explation_Gemma29b-v2") tokenizer = AutoTokenizer.from_pretrained("kzorluoglu/Chameleon-Code_Explation_Gemma29b-v2") print("Chatbot is ready!") print("Type 'exit' to end the chat.") print("Ask like this for a good answer:") print("Explain the XYZ class. I don't want to see code, I want only the explanation.") print("Explain the ABC bundle. I don't want to see code, I want only the explanation.") while True: question = input("You: ") if question.lower() == 'exit': print("Ending the chat. Goodbye!") break instruction = f"{question}" inputs = tokenizer([instruction], return_tensors="pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens=256, use_cache=True) generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True) print(f"AI: {generated_text[0]}") ``` ## Available Bundles In addition to individual classes, the model has also been trained to explain a wide range of bundles within the Chameleon CMS system, including but not limited to: * AmazonPaymentBundle * ShopOrderStatusBundle * ShopWishlistBundle * ImageHotspotBundle * ShopArticlePreorderBundle * ShopCurrencyBundle * ShopPaymentIPNBundle * ShopAffiliateBundle * ShopBundle * ShopRatingServiceBundle * SearchBundle * ShopArticleReviewBundle * ShopListFilterBundle * ShopPaymentTransactionBundle * ShopArticleDetailPagingBundle * CmsEventBundle * MediaManagerBundle * ViewRendererBundle * CmsCounterBundle * UpdateCounterMigrationBundle * CmsStringUtilitiesBundle * ExternalTrackerBundle * CookieConsentBundle * CmsResultCacheBundle * CmsActionPluginBundle * CmsClassManagerBundle * PkgCoreBundle * ExtranetBundle * CoreBundle * CmsNavigationBundle * JavaScriptMinificationBundle * TrackViewsBundle * CmsTextFieldBundle * CmsCacheBundle * AutoclassesBundle * CmsChangeLogBundle * DatabaseMigrationBundle * Csv2SqlBundle * NewsletterBundle * DistributionBundle * CmsFileManagerBundle * SecurityBundle * SnippetRendererBundle * AtomicLockBundle * TwigDebugBundle To get the explanation for a specific class, use the prompt format: ```text Explain the XYZ bundle. I don't want to see code, I want only the explanation. ``` ## Available Classes The model has been trained to provide explanations for a wide range of classes in the Chameleon CMS system, including but not limited to: * MTFeedbackErrors * AmazonDataConverter * AmazonPaymentConfigFactory * AmazonReferenceIdManager * WebServerExample * OffAmazonPaymentsNotifications_Samples_NotificationSample * OffAmazonPaymentsNotifications_Samples_IpnLogFile * SnsMessageValidator * IpnNotificationParser * XmlNotificationParser * Message * SnsMessageParser * OffAmazonPayments_Model * OffAmazonPaymentsService_Model * OffAmazonPaymentsService_Regions * OffAmazonPaymentsService_RegionSpecificProperties * OffAmazonPaymentsService_Environments * OffAmazonPaymentsService_MerchantValues * OffAmazonPaymentsService_Model_ResponseHeaderMetadata * CancellationExample * CLIExample * SimpleCheckoutExample * AddressConsentSample * SplitShipmentsCheckoutExample * RefundExample * AutomaticPaymentsSimpleCheckoutExample * ShippingAndTaxCostHelper * CountryRates * AmazonPaymentFixturesFactory * ConfigValidator * AmazonPaymentIdMapping * TPkgShopOrderStatusManagerEndPoint * PkgShopWishlistMailHistory * PkgShopWishlistArticle * PkgShopWishlistOrderItem * PkgShopWishlist * PkgImageHotspotItemMarker * PkgImageHotspot * PkgImageHotspotItemSpot * PkgImageHotspotItem * UpdateProductStockListener * ShopCurrencyEvents * TPkgShopPaymentIPNManager * TPkgShopPaymentIPNRequest * TPkgShopPaymentIPN_TransactionDetails * PkgShopPaymentIpnMessage * PkgShopPaymentIpnTrigger * PkgShopPaymentIpnStatus * PkgShopPaymentIpnMessageTrigger * PkgShopAffiliate * PkgShopAffiliateParameter * ShopEvents * TShopCategoryTree * MTShopArticleListResponse * TShopBreadcrumbItem * TShopBasketArticleCustomDataValidationError * TdbShopArticle * ShopPaymentHandlerMock * BasketVariableReplacer * UpdateProductStatisticsListener * UpdateVariantParentStockListener * UpdateProductStockListener * OrderPaymentInfo * ShopPaymentConfigRawValue * ShopSystemPage * PkgShopArticlePreorder * Shop * ShopShippingGroupHandler * ShopShippingGroup * ShopWrappingCard * ShopCategoryTab * PkgShopListfilterItem * PkgShopStatisticGroup * ShopVat * PkgShopFooterCategory * PkgShopListfilterModuleConfig * ShopSuggestArticleLog * ShopBankAccount * ShopCategory * ShopDiscount * PkgShopPrimaryNavi * ShopShippingType * ShopWrapping * PkgShopListfilterItemType * PkgShopCurrency * PkgShopListfilter * ShopUserNoticeList * ShopSystemInfoModuleConfig * PkgShopArticleReviewModuleShopArticleReviewConfiguration * ShopSystemInfo * ShopPaymentHandlerGroup * ShopPaymentMethod * ShopPaymentHandlerGroupConfig * ShopPaymentHandler * ShopPaymentHandlerParameter * ShopArticleCatalogConfDefaultOrder * ShopStockMessageTrigger * ShopVariantType * ShopArticleImageSize * ShopAttribute * ShopBundleArticle * ShopArticleDocumentType * ShopArticleMarker * ShopArticle * ShopArticleImage * ShopArticleReview * ShopArticleCatalogConf * ShopArticleContributor * ShopArticleDocument * ShopVariantSet * ShopStockMessage * ShopArticleType * ShopVariantDisplayHandler * ShopUnitOfMeasurement * ShopArticleStats * ShopAttributeValue * ShopVariantTypeHandler * ShopContributorType * ShopContributor * ShopVariantTypeValue * ShopManufacturerModuleConf * ShopArticleStock * ShopArticlePreviewImage * ShopManufacturer * ShopArticleGroup * ShopVoucherUse * ShopVoucherSeriesSponsor * ShopUserPurchasedVoucher * ShopVoucherSeries * ShopVoucher * ShopModuleArticleListArticle * ShopModuleArticlelistOrderby * ShopModuleArticleList * ShopModuleArticleListFilter * ShopOrderBasket * ShopOrder * ShopOrderStatusCode * ShopOrderExportLog * ShopOrderStatusItem * ShopOrderStatus * ShopOrderVat * ShopOrderShippingGroupParameter * ShopOrderItem * ShopOrderStep * ShopOrderBundleArticle * ShopOrderPaymentMethodParameter * ShopOrderDiscount * SearchSuggestController * XmlStreamer * TShopAuskunftAPI * PkgShopRatingServiceRating * PkgShopRatingServiceTeaserCnf * PkgShopRatingServiceWidgetConfig * PkgShopRatingServiceHistory * PkgShopRatingService * TShopModuleArticleListFilterAutoParent * TdbShopModuleArticleListFilter * SearchResultTriggerCmsObserverListener * SearchResultLoggerListener * ShopSearchFieldWeight * ShopSearchCloudWord * ShopSearchCache * ShopSearchQuery * ShopSearchCacheItem * ShopSearchIndexer * ShopSearchKeywordArticle * ShopSearchLog * ShopSearchIgnoreWord * AuthorDisplayConstants * Slider * Select * DbAdapter * TPkgShopPaymentTransactionItemDataEndPoint * TPkgShopPaymentTransactionManagerEndPoint * TPkgShopPaymentTransactionContextEndPoint * PkgShopPaymentTransactionPosition * PkgShopPaymentTransactionType * PkgShopPaymentTransaction * AddParametersToUrlService * CacheUtil * TPkgCmsEventManager * DeleteMediaConnectionsListener * ViewRenderer * TPkgViewRendererMapper_ListHandlerData * TPkgViewRendererLessCompiler * TPkgViewRendererSnippetResourceCollectorEndPoint * TPkgViewRendererSnippetDummyData * TPkgViewRendererSnippetGalleryItem * MapperVirtualSourceObject * User * Article * GenerateCssController * CmsCounter * UpdateCounterMigrator * UpdateCounterVersionMigrator * MigrationConfigGenerator * MigrateCountersListener * TPkgCmsStringUtilities_HTML * TPkgCmsStringUtilities_ReadConfig * TPkgCmsStringUtilities_CleanPHPCode * TPkgCmsStringUtilities_iOSMailURLEncoder * TPkgCmsStringUtilities_PathUtils * SearchResultTriggerCmsObserverListener * CategoryTriggerCmsObserverListener * AddCookieConsentIncludesListener * TPkgCmsResultCacheManager * TPkgCmsActionPluginManager * AbstractPkgActionPlugin * TPkgCmsVirtualClassManager * TPkgCmsCoreSendToHost * TPkgCoreUtility_Network * TPkgCmsCoreLayoutPluginManager * TPkgCmsCoreParameterContainerParameterDefinition * AbstractPkgCmsCoreParameterContainer * ExtranetEvents * ExtranetUserConstants * RefreshAuthenticityTokenListener * RehashPasswordListener * DelayFailedLoginAttemptListener * LoginByTokenController * ServiceLocator * CoreEvents * CronJobScheduleDataModel * chameleon * TableConfigurationDataModel * CmsMasterPagdef * DownloadLinkDataModel * PagePath * File * Error * UploaderController * UploaderParametersDataModel * UploadedFileDataModel * StripVirtualFieldsFromQuery * CronJobThatDoesNotExtendTCMSCronJob * HtmlHelper * TGlobalBase * TModuleLoader * TModelBase * MTCustomListCoreConfig * TTemplateTools * TCMRenderMediaTreeSelectBox * TCMSRegistry * TCMSSmartURL * TCMSTidyHTMLCleaner * TCMSLockManager * TCMSURLHistory * TCMSRssHandler * TCMSRender * TPkgDependencyInjection * TCMSLogChange * TCMSViewPortManager * TCMSRenderSeoPattern * TCMSPageDefinitionFile * TCMSRenderDocumentTreeSelectBox * FalseValueCacheEntry * TCMSMemcache * TCMSTreeWidget * TCMSTableToClass_MockRecord * TCMSTableToClass * TCacheManagerRuntimeCache * TViewParser * TCMSSmartURLData * TCMSSmartURLHandler * TCMSParameterMapperEndPoint * TCMSMenuItem * TCMSSpellcheck * MTFeedbackErrorsCore * TCMSErrorHandler * MySqlLegacySupport * TCMSFile * TDbChangeLogManagerForModules * TCMSTableEditorManager * TCMSTableEditorEndPoint * TCMSTableEditorMenuItem * TCMSMessageManager * TCMSImageEndpoint * THTMLTable * THTMLTableColumn * HtmlParser * TGoogleMapMarker * TGoogleOverviewMap * TGoogleMapEndPoint * TGoogleMapIcon * TCMSBreadcrumbNavi * TCMSBreadcrumbNaviItem * TGroupTableField * TGroupTable * TGroupTableStyle * TPkgRunFrontendActionStatus * TCMSGroupedStatistics * TCMSGroupedStatisticsGroup * TCCustomNavigation * SQLParser * SQLParenthesesParser * imageMagick * antiSpam * TPkgCmsSessionWakeUpService * TCMSListManagerEndPoint * TCMSUserInput * TCMSUserInputFilter_BaseClass * TCMSUpdateManager * TTools * CheckPortalDomainListener * CaseInsensitivePortalExceptionListener * TemplateEngineAccessListener * AddAntispamIncludesListener * BackendBreadcrumbListener * AddControllerIncludesListener * CleanupBreadcrumbAfterDeleteListener * RedirectToSecurePageListener * ClearRoutingCacheListener * InitializeRequestListener * ReleaseOldLocksOnLoginListener * AddGlobalHeadIncludesListener * MigrateSessionListener * AddModuleIncludesListener * ChangeChameleonObjectsLocaleListener * AddBackendToasterMessageListener * NoCacheForSecurePageListener * RehashBackendUserPasswordListener * AllowEmbeddingForDifferentDomainListener * AddJqueryIncludeListener * BackendTreeNodeFactory * RequestInitializer * FieldTranslationUtil * UrlUtil * MltFieldUtil * UrlNormalizationUtil * AbstractDatabaseAccessLayer * TranslationConstants * JsTranslationController * ModuleIconMigrator * MainMenuMigrator * IconMapping * MenuItem * MenuCategory * ChameleonControllerEvents * ImageNotFoundController * BackendAccessCheck * CsrfTokenManagerFactory * Migrator * ComposerData * ComposerJsonModifier * AbstractPkgCmsNavigationNode * JavaScriptMinifyEventListener * ImageCropPresetDataModel * ImageCropDataModel * ImageDataModel * CmsMediaDataModel * TPkgTrackObjectViews * TrackViewsListener * TCMSTextFieldEndPoint * TdbCmsLanguage * RequestListener * AutoClassLoader * MappingDataModel * DataModelParts * name * MediaItemChainUsageFinder * SortColumnCollection * MediaItemUsageChainDeleteService * MediaManagerExtensionCollection * MediaManagerListResult * MediaManagerListRequest * AccessRightsModel * MediaManagerListState * MediaTreeDataModel * MediaItemUsageDataModel * MediaTreeNodeDataModel * MediaItemDataModel * JavascriptPluginRenderedContent * JavascriptPluginMessage * JavascriptPluginConfigurationState * JavascriptPluginConfigurationUrls * JavascriptPluginConfiguration * MediaTreeNodeJsonObject * TCMSChangeLogArchiver * TCMSChangeLogFormatter * SnippetChainModifier * MigrationRecorderStateHandler * MigrationRecorder * QueryWriter * DataModelConverter * TPkgCsv2SqlManager * NewsletterPostProcessorListener * NewsletterUserDataModel * PostUpdateVersionCheck * ChameleonVersion * MatchLevel * InitialBackendUserCreator * TPkgCmsFileManager_Utilities * RedirectOnPendingUpdatesEventListener * SecurityHelperAccess * CmsUserRoleConstants * CmsPermissionAttributeConstants * CmsUserSSOModel * TPkgSnippetRenderer_PregCallbackHelper * TPkgSnippetRenderer_ResourceHelper * TPkgSnippetRendererFilter * AtomicLock * ViewRendererPostRenderListener * LogChangeDataModel * MigrationDataModel * MigrationQueryData * MigrationRecorderConstants * QueryConstants * DatabaseMigrationConstants * MigrationDataModelReducer * To get the explanation for a specific class, use the prompt format: ```text Explain the XYZ class. I don't want to see code, I want only the explanation. ``` ## Training Details ### Training Data The model was trained on data from the following sources: - [Chameleon Shop System](https://github.com/chameleon-system/chameleon-shop) - [Chameleon System](https://github.com/chameleon-system/chameleon-system) - [Chameleon Base](https://github.com/chameleon-system/chameleon-base) ### Training Procedure - **Training Type:** Mixed precision (fp16) with 4-bit quantization - **Prompt Format:** The model is trained to respond to prompts such as: ```text Explain the XYZ class. I don't want to see code, I want only the explanation. Explain the ABC bundle. I don't want to see code, I want only the explanation. ``` ## Evaluation The model was evaluated using specific Chameleon CMS classes to ensure accurate and useful explanations. - **Metrics Used:** - **Perplexity**: Evaluated for language modeling capabilities. - **Accuracy**: Assessed based on how well the model explains Chameleon CMS components. ### Results The model demonstrated good accuracy in explaining the components of Chameleon CMS, providing clear and detailed answers. ## Environmental Impact - **Hardware Type:** Tesla T4 GPU - **Training Hours:** 10 hours - **Cloud Provider:** Google Cloud (US Central) - **Estimated Carbon Emission:** 5 kg CO2eq ## Technical Specifications - **Model Architecture:** Transformer-based model with 4-bit quantization. - **Compute Infrastructure:** - Hardware: Tesla T4 GPU - Software: PyTorch 2.4.1, CUDA 12.1 ## Contact For any questions or issues, you can reach out to: - **Email:** [[email protected]](mailto:[email protected]) ## License The model is available under the Apache 2.0 License. ## Citation If you use this model, please cite it as follows: ```bibtex @misc{kzorluoglu2024chameleon, author = {kzorluoglu}, title = {Chameleon-Code_Explation_Gemma29b-v3}, year = {2024}, url = {https://github.com/kzorluoglu}, note = {Apache 2.0 License} } ```
{"base_model": ["unsloth/gemma-2-9b-bnb-4bit"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "question-answering", "tags": ["chameleon", "shop", "esono"]}
task
[ "TRANSLATION" ]
43,038
TransferGraph/gchhablani_bert-base-cased-finetuned-rte-finetuned-lora-tweet_eval_hate
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:gchhablani/bert-base-cased-finetuned-rte", "base_model:adapter:gchhablani/bert-base-cased-finetuned-rte", "license:apache-2.0", "model-index", "region:us" ]
2024-02-29T13:52:02Z
2024-02-29T13:52:04+00:00
0
0
--- base_model: gchhablani/bert-base-cased-finetuned-rte datasets: - tweet_eval library_name: peft license: apache-2.0 metrics: - accuracy tags: - parquet - text-classification model-index: - name: gchhablani_bert-base-cased-finetuned-rte-finetuned-lora-tweet_eval_hate results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: hate split: validation args: hate metrics: - type: accuracy value: 0.716 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gchhablani_bert-base-cased-finetuned-rte-finetuned-lora-tweet_eval_hate This model is a fine-tuned version of [gchhablani/bert-base-cased-finetuned-rte](https://huggingface.co/gchhablani/bert-base-cased-finetuned-rte) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.716 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.42 | None | 0 | | 0.683 | 0.6229 | 0 | | 0.701 | 0.5190 | 1 | | 0.711 | 0.4779 | 2 | | 0.716 | 0.4639 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gchhablani_bert-base-cased-finetuned-rte-finetuned-lora-tweet_eval_hate This model is a fine-tuned version of [gchhablani/bert-base-cased-finetuned-rte](https://huggingface.co/gchhablani/bert-base-cased-finetuned-rte) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.716 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.42 | None | 0 | | 0.683 | 0.6229 | 0 | | 0.701 | 0.5190 | 1 | | 0.711 | 0.4779 | 2 | | 0.716 | 0.4639 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
{"base_model": "gchhablani/bert-base-cased-finetuned-rte", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "gchhablani_bert-base-cased-finetuned-rte-finetuned-lora-tweet_eval_hate", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "hate", "split": "validation", "args": "hate"}, "metrics": [{"type": "accuracy", "value": 0.716, "name": "accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,039
rttl-ai/foody-bert
rttl-ai
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2012.15349", "arxiv:1910.09700", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-09-11T22:55:55Z
2023-06-18T22:07:52+00:00
47
3
--- license: bigscience-bloom-rail-1.0 --- # Model Card for Foody Bert # Model Details ## Model Description Foody-bert results from the second round of fine-tuning on the text classification task. Continuation of fine-tuning of [senty-bert](https://huggingface.co/rttl-ai/senty-bert), which is fine-tuned on yelp reviews and Stanford sentiment treebank with ternary labels (neutral, positive, negative). - **Language(s) (NLP):** English - **License:** bigscience-bloom-rail-1.0 - **Related Models:** More information needed - **Parent Model:** More information needed - **Resources for more information:** - [Associated Paper](https://arxiv.org/abs/2012.15349) # Uses ## Direct Use - The primary intended use is in sentiment analysis of the texts of product and service reviews, and this is the domain in which the model has been evaluated to date. - We urge caution about using these models for sentiment prediction in other domains. For example, sentiment expression in medical contexts and professional evaluations can be different from sentiment expression in product/service reviews. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations - We recommend careful study of how these models behave, even when they are used in the domain on which they were trained and assessed. The models are deep learning models about which it is challenging to gain full analytic command; two examples that appear synonymous to human readers can receive very different predictions from these models, in ways that are hard to anticipate or explain, and so it is crucial to do continual qualitative and quantitative evaluation as part of any deployment. - We advise even more caution when using these models in new domains, as sentiment expression can shift in subtle (and not-so-subtle) ways across different domains, and this could lead specific phenomena to be mis-handled in ways that could have dramatic and pernicious consequences. # Training Details ## Training Data The model was trained on product/service reviews from Yelp, reviews from Amazon, reviews from IMDB (as defined by [this dataset](https://ai.stanford.edu/~amaas/data/sentiment/)), sentences from Rotten Tomatoes (as given by the [Stanford Sentiment Treebank](https://nlp.stanford.edu/sentiment/)), the [Customer Reviews](https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html) dataset, and on subsets of the DynaSent dataset. The dataset mainly contains restaurant review data. For extensive details on these datasets are included in the [associated Paper](https://arxiv.org/abs/2012.15349). # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed # Citation **BibTeX:** More information needed **APA:** ``` @article{potts-etal-2020-dynasent, title={{DynaSent}: A Dynamic Benchmark for Sentiment Analysis}, author={Potts, Christopher and Wu, Zhengxuan and Geiger, Atticus and Kiela, Douwe}, journal={arXiv preprint arXiv:2012.15349}, url={https://arxiv.org/abs/2012.15349}, year={2020}} ``` # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("rttl-ai/foody-bert") model = AutoModelForSequenceClassification.from_pretrained("rttl-ai/foody-bert") ``` </details>
null
Non_BioNLP
# Model Card for Foody Bert # Model Details ## Model Description Foody-bert results from the second round of fine-tuning on the text classification task. Continuation of fine-tuning of [senty-bert](https://huggingface.co/rttl-ai/senty-bert), which is fine-tuned on yelp reviews and Stanford sentiment treebank with ternary labels (neutral, positive, negative). - **Language(s) (NLP):** English - **License:** bigscience-bloom-rail-1.0 - **Related Models:** More information needed - **Parent Model:** More information needed - **Resources for more information:** - [Associated Paper](https://arxiv.org/abs/2012.15349) # Uses ## Direct Use - The primary intended use is in sentiment analysis of the texts of product and service reviews, and this is the domain in which the model has been evaluated to date. - We urge caution about using these models for sentiment prediction in other domains. For example, sentiment expression in medical contexts and professional evaluations can be different from sentiment expression in product/service reviews. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations - We recommend careful study of how these models behave, even when they are used in the domain on which they were trained and assessed. The models are deep learning models about which it is challenging to gain full analytic command; two examples that appear synonymous to human readers can receive very different predictions from these models, in ways that are hard to anticipate or explain, and so it is crucial to do continual qualitative and quantitative evaluation as part of any deployment. - We advise even more caution when using these models in new domains, as sentiment expression can shift in subtle (and not-so-subtle) ways across different domains, and this could lead specific phenomena to be mis-handled in ways that could have dramatic and pernicious consequences. # Training Details ## Training Data The model was trained on product/service reviews from Yelp, reviews from Amazon, reviews from IMDB (as defined by [this dataset](https://ai.stanford.edu/~amaas/data/sentiment/)), sentences from Rotten Tomatoes (as given by the [Stanford Sentiment Treebank](https://nlp.stanford.edu/sentiment/)), the [Customer Reviews](https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html) dataset, and on subsets of the DynaSent dataset. The dataset mainly contains restaurant review data. For extensive details on these datasets are included in the [associated Paper](https://arxiv.org/abs/2012.15349). # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed # Citation **BibTeX:** More information needed **APA:** ``` @article{potts-etal-2020-dynasent, title={{DynaSent}: A Dynamic Benchmark for Sentiment Analysis}, author={Potts, Christopher and Wu, Zhengxuan and Geiger, Atticus and Kiela, Douwe}, journal={arXiv preprint arXiv:2012.15349}, url={https://arxiv.org/abs/2012.15349}, year={2020}} ``` # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("rttl-ai/foody-bert") model = AutoModelForSequenceClassification.from_pretrained("rttl-ai/foody-bert") ``` </details>
{"license": "bigscience-bloom-rail-1.0"}
task
[ "TEXT_CLASSIFICATION" ]
43,040
richardburleigh/SuperQA-7B-v0.1
richardburleigh
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "RAG", "QA", "SQuAD", "Question Answering", "en", "license:gpl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-12-26T06:35:03Z
2023-12-26T07:07:20+00:00
3
0
--- language: - en library_name: transformers license: gpl-3.0 tags: - RAG - QA - SQuAD - Question Answering --- ## Model Card for SuperQA-7B This model is a fine-tuned version of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically designed for Question Answering (QA) tasks. It has been trained on a private dataset comprising 120,000 document, question, and answer pairs. To my knowledge, this is the most capable 7B model for Retrieval Augmented Generation (RAG) tasks. SuperQA responds in Markdown format. ## Prompt Format This model was trained only with the following prompt: ``` <s>[INST] Respond with a detailed and relevant answer to my question using only information from the provided context. <|context|> <|doc|> {Your document} <|/doc|> <|/context|> <|question|>{Your question?}<|/question|> [/INST] ``` ## Limitations While the model is designed to be accurate and relevant, its performance is contingent on the quality and relevance of the provided context. Answers may be less accurate if the context is insufficient or not directly related to the question. Additionally, the model's training on a specific dataset may limit its effectiveness in answering questions outside the scope of the training data. ## Disclaimer This model is provided as-is without any guarantees of performance or accuracy. Users should not rely solely on this model for critical decisions or interpretations. The developers of this model are not responsible for any direct or indirect consequences arising from its use. It is the responsibility of the user to ensure that the model's output is appropriate for their specific context and requirements.
null
Non_BioNLP
## Model Card for SuperQA-7B This model is a fine-tuned version of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically designed for Question Answering (QA) tasks. It has been trained on a private dataset comprising 120,000 document, question, and answer pairs. To my knowledge, this is the most capable 7B model for Retrieval Augmented Generation (RAG) tasks. SuperQA responds in Markdown format. ## Prompt Format This model was trained only with the following prompt: ``` <s>[INST] Respond with a detailed and relevant answer to my question using only information from the provided context. <|context|> <|doc|> {Your document} <|/doc|> <|/context|> <|question|>{Your question?}<|/question|> [/INST] ``` ## Limitations While the model is designed to be accurate and relevant, its performance is contingent on the quality and relevance of the provided context. Answers may be less accurate if the context is insufficient or not directly related to the question. Additionally, the model's training on a specific dataset may limit its effectiveness in answering questions outside the scope of the training data. ## Disclaimer This model is provided as-is without any guarantees of performance or accuracy. Users should not rely solely on this model for critical decisions or interpretations. The developers of this model are not responsible for any direct or indirect consequences arising from its use. It is the responsibility of the user to ensure that the model's output is appropriate for their specific context and requirements.
{"language": ["en"], "library_name": "transformers", "license": "gpl-3.0", "tags": ["RAG", "QA", "SQuAD", "Question Answering"]}
task
[ "QUESTION_ANSWERING" ]
43,041
gokuls/distilbert_add_GLUE_Experiment_cola_192
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-26T12:27:38Z
2023-01-26T12:30:54+00:00
137
0
--- datasets: - glue language: - en license: apache-2.0 metrics: - matthews_correlation tags: - generated_from_trainer model-index: - name: distilbert_add_GLUE_Experiment_cola_192 results: - task: type: text-classification name: Text Classification dataset: name: GLUE COLA type: glue config: cola split: validation args: cola metrics: - type: matthews_correlation value: 0.0 name: Matthews Correlation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_add_GLUE_Experiment_cola_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6182 - Matthews Correlation: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6141 | 1.0 | 34 | 0.6201 | 0.0 | | 0.6079 | 2.0 | 68 | 0.6185 | 0.0 | | 0.6072 | 3.0 | 102 | 0.6184 | 0.0 | | 0.6083 | 4.0 | 136 | 0.6193 | 0.0 | | 0.6075 | 5.0 | 170 | 0.6182 | 0.0 | | 0.607 | 6.0 | 204 | 0.6185 | 0.0 | | 0.6082 | 7.0 | 238 | 0.6182 | 0.0 | | 0.6085 | 8.0 | 272 | 0.6185 | 0.0 | | 0.608 | 9.0 | 306 | 0.6202 | 0.0 | | 0.6084 | 10.0 | 340 | 0.6189 | 0.0 | | 0.6078 | 11.0 | 374 | 0.6189 | 0.0 | | 0.6072 | 12.0 | 408 | 0.6186 | 0.0 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.8.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_add_GLUE_Experiment_cola_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6182 - Matthews Correlation: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6141 | 1.0 | 34 | 0.6201 | 0.0 | | 0.6079 | 2.0 | 68 | 0.6185 | 0.0 | | 0.6072 | 3.0 | 102 | 0.6184 | 0.0 | | 0.6083 | 4.0 | 136 | 0.6193 | 0.0 | | 0.6075 | 5.0 | 170 | 0.6182 | 0.0 | | 0.607 | 6.0 | 204 | 0.6185 | 0.0 | | 0.6082 | 7.0 | 238 | 0.6182 | 0.0 | | 0.6085 | 8.0 | 272 | 0.6185 | 0.0 | | 0.608 | 9.0 | 306 | 0.6202 | 0.0 | | 0.6084 | 10.0 | 340 | 0.6189 | 0.0 | | 0.6078 | 11.0 | 374 | 0.6189 | 0.0 | | 0.6072 | 12.0 | 408 | 0.6186 | 0.0 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.8.0 - Tokenizers 0.13.2
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_add_GLUE_Experiment_cola_192", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.0, "name": "Matthews Correlation"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,042
tsei902/simplify_dutch
tsei902
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "sentence_simplification", "simplification", "text2text", "nl", "autotrain_compatible", "text-generation-inference", "region:us" ]
2024-02-29T15:36:26Z
2024-03-01T13:46:19+00:00
13
0
--- language: - nl metrics: - sari - bleu pipeline_tag: text2text-generation tags: - sentence_simplification - simplification - text2text inference: false --- ## Model Details # simplify_dutch This is the source code for my thesis on "Controllable Sentence Simplification in Dutch" in the Masters of AI at KU Leuven. The full code can be found at: https://github.com/tsei902/simplify_dutch # Data The origin of the datasets in resources/datasets is: 1) Wikilarge, available under: https://github.com/XingxingZhang/dress The wikilarge data is limited the first 10000 rows. 2) ASSET, available under: https://github.com/facebookresearch Which both have been translated to Dutch. # Model The Dutch T5 model t5-base-dutch from Hugging Face has been adopted and trained on the task of sentence simplification. The folder /saved model contains the final trained model on 10000 rows of data, as stated in the Thesis. # Sequence: 1) TRAINING DATA needs preprocessing with preprocessor.py 2) Generation can be done with generate_on_pretrained.py with a prior adjustment of 3) The generation parameters in model.simplify() where the decoding method needs to be chosen (Greedy decoding, Top-p & top-k, or Beam search) 4) Manual scoring of a generated text is possible with evaluate.py # Further remarks: 1) The folder resources/processed data contains the training set with the prepended control tokens 2) The folder resources/DUMPS contains the Word embeddings from Fares et al. (2017) have been used. The data is available under: http://vectors.nlpl.eu/repository. (Fares, M., Kutuzov, A., Oepen, S., & Velldal, E. (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources. Proceedings of the 21st Nordic Conference on Computational Linguistics, Gothenburg, Sweden.) 3) The folder resources/outputs/final_decoder_outputs contains the final generated text per decoding strategy (Greedy decoding, Top-p & top-k, or Beam search) for both the full test set and the sample dataset 4) The folder translations contains sampled text (106 and 84 rows) from the original English datasets (WIKILarge and ASSET), a machine-translated version as well as the human translated references. # Credits The preprocessor.py and the utils.py contain code that has been adapted from https://github.com/KimChengSHEANG/TS_T5 (Sheang, K. C., & Saggion, H. (2021). Controllable Sentence Simplification with a Unified Text-to-Text Transfer Transformer.INLG 2021 International Conference on Natural Language Generation, Aberdeen, Scotland, UK.) The preprocessor.py has been adapted to the usage of Dutch. ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Theresa Seidl - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** Dutsch - **License:** [More Information Needed] - **Finetuned from model [optional]:** https://huggingface.co/yhavinga/t5-base-dutch ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/tsei902/simplify_dutch - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed]
null
Non_BioNLP
## Model Details # simplify_dutch This is the source code for my thesis on "Controllable Sentence Simplification in Dutch" in the Masters of AI at KU Leuven. The full code can be found at: https://github.com/tsei902/simplify_dutch # Data The origin of the datasets in resources/datasets is: 1) Wikilarge, available under: https://github.com/XingxingZhang/dress The wikilarge data is limited the first 10000 rows. 2) ASSET, available under: https://github.com/facebookresearch Which both have been translated to Dutch. # Model The Dutch T5 model t5-base-dutch from Hugging Face has been adopted and trained on the task of sentence simplification. The folder /saved model contains the final trained model on 10000 rows of data, as stated in the Thesis. # Sequence: 1) TRAINING DATA needs preprocessing with preprocessor.py 2) Generation can be done with generate_on_pretrained.py with a prior adjustment of 3) The generation parameters in model.simplify() where the decoding method needs to be chosen (Greedy decoding, Top-p & top-k, or Beam search) 4) Manual scoring of a generated text is possible with evaluate.py # Further remarks: 1) The folder resources/processed data contains the training set with the prepended control tokens 2) The folder resources/DUMPS contains the Word embeddings from Fares et al. (2017) have been used. The data is available under: http://vectors.nlpl.eu/repository. (Fares, M., Kutuzov, A., Oepen, S., & Velldal, E. (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources. Proceedings of the 21st Nordic Conference on Computational Linguistics, Gothenburg, Sweden.) 3) The folder resources/outputs/final_decoder_outputs contains the final generated text per decoding strategy (Greedy decoding, Top-p & top-k, or Beam search) for both the full test set and the sample dataset 4) The folder translations contains sampled text (106 and 84 rows) from the original English datasets (WIKILarge and ASSET), a machine-translated version as well as the human translated references. # Credits The preprocessor.py and the utils.py contain code that has been adapted from https://github.com/KimChengSHEANG/TS_T5 (Sheang, K. C., & Saggion, H. (2021). Controllable Sentence Simplification with a Unified Text-to-Text Transfer Transformer.INLG 2021 International Conference on Natural Language Generation, Aberdeen, Scotland, UK.) The preprocessor.py has been adapted to the usage of Dutch. ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Theresa Seidl - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** Dutsch - **License:** [More Information Needed] - **Finetuned from model [optional]:** https://huggingface.co/yhavinga/t5-base-dutch ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/tsei902/simplify_dutch - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed]
{"language": ["nl"], "metrics": ["sari", "bleu"], "pipeline_tag": "text2text-generation", "tags": ["sentence_simplification", "simplification", "text2text"], "inference": false}
task
[ "TRANSLATION" ]
43,043
Neurora/opus-tatoeba-ell-eng
Neurora
translation
[ "safetensors", "marian", "translation", "el", "en", "license:cc-by-nc-sa-4.0", "region:us" ]
2025-02-26T15:44:51Z
2025-02-26T15:46:42+00:00
38
0
--- language: - el - en license: cc-by-nc-sa-4.0 pipeline_tag: translation --- # Opus Tatoeba | Greek -> English * dataset: opus * model: transformer * source language(s): ell * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download: [opus-2021-02-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-eng/opus-2021-02-18.zip) * test set translations: [opus-2021-02-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-eng/opus-2021-02-18.test.txt) * test set scores: [opus-2021-02-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-eng/opus-2021-02-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | #sent | #words | BP | |---------|-------|-------|-------|--------|----| | Tatoeba-test.ell-eng | 69.2 | 0.801 | 10000 | 60970 | 0.978 |
null
Non_BioNLP
# Opus Tatoeba | Greek -> English * dataset: opus * model: transformer * source language(s): ell * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download: [opus-2021-02-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-eng/opus-2021-02-18.zip) * test set translations: [opus-2021-02-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-eng/opus-2021-02-18.test.txt) * test set scores: [opus-2021-02-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-eng/opus-2021-02-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | #sent | #words | BP | |---------|-------|-------|-------|--------|----| | Tatoeba-test.ell-eng | 69.2 | 0.801 | 10000 | 60970 | 0.978 |
{"language": ["el", "en"], "license": "cc-by-nc-sa-4.0", "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
43,044
synpjh/distilbert-base-uncased-finetuned-emotion
synpjh
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-12-04T17:15:24Z
2023-03-13T10:30:00+00:00
14
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.9245 name: Accuracy - type: f1 value: 0.9244671567403487 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2188 - Accuracy: 0.9245 - F1: 0.9245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8065 | 1.0 | 250 | 0.3138 | 0.905 | 0.9022 | | 0.2487 | 2.0 | 500 | 0.2188 | 0.9245 | 0.9245 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.1.0.dev20230312 - Datasets 2.10.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2188 - Accuracy: 0.9245 - F1: 0.9245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8065 | 1.0 | 250 | 0.3138 | 0.905 | 0.9022 | | 0.2487 | 2.0 | 500 | 0.2188 | 0.9245 | 0.9245 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.1.0.dev20230312 - Datasets 2.10.1 - Tokenizers 0.13.2
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9245, "name": "Accuracy"}, {"type": "f1", "value": 0.9244671567403487, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,045
Helsinki-NLP/opus-mt-tc-big-en-ko
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "en", "ko", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-08-12T08:02:12Z
2023-10-10T10:29:58+00:00
1,486
16
--- language: - en - ko license: cc-by-4.0 tags: - translation - opus-mt-tc model-index: - name: opus-mt-tc-big-en-ko results: - task: type: translation name: Translation eng-kor dataset: name: flores101-devtest type: flores_101 args: eng kor devtest metrics: - type: bleu value: 13.7 name: BLEU - type: chrf value: 0.36399 name: chr-F --- # opus-mt-tc-big-en-ko ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from English (en) to Korean (ko). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-07-28 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): - Target Language(s): - Valid Target Language Labels: - **Original Model**: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT eng-kor README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-kor/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/ This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>><<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "2, 4, 6 etc. are even numbers.", "Yes." ] model_name = "pytorch-models/opus-mt-tc-big-en-ko" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # 2, 4, 6 등은 짝수입니다. # 그래 ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-ko") print(pipe("2, 4, 6 etc. are even numbers.")) # expected output: 2, 4, 6 등은 짝수입니다. ``` ## Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * test set translations: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.test.txt) * test set scores: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| ## Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 8b9f0b0 * port time: Fri Aug 12 11:02:03 EEST 2022 * port machine: LM0-400-22516.local
null
Non_BioNLP
# opus-mt-tc-big-en-ko ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from English (en) to Korean (ko). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-07-28 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): - Target Language(s): - Valid Target Language Labels: - **Original Model**: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT eng-kor README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-kor/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/ This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>><<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "2, 4, 6 etc. are even numbers.", "Yes." ] model_name = "pytorch-models/opus-mt-tc-big-en-ko" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # 2, 4, 6 등은 짝수입니다. # 그래 ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-ko") print(pipe("2, 4, 6 etc. are even numbers.")) # expected output: 2, 4, 6 등은 짝수입니다. ``` ## Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * test set translations: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.test.txt) * test set scores: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| ## Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 8b9f0b0 * port time: Fri Aug 12 11:02:03 EEST 2022 * port machine: LM0-400-22516.local
{"language": ["en", "ko"], "license": "cc-by-4.0", "tags": ["translation", "opus-mt-tc"], "model-index": [{"name": "opus-mt-tc-big-en-ko", "results": [{"task": {"type": "translation", "name": "Translation eng-kor"}, "dataset": {"name": "flores101-devtest", "type": "flores_101", "args": "eng kor devtest"}, "metrics": [{"type": "bleu", "value": 13.7, "name": "BLEU"}, {"type": "chrf", "value": 0.36399, "name": "chr-F"}]}]}]}
task
[ "TRANSLATION" ]
43,046
Vishnou/distilbert_base_SST2
Vishnou
text-classification
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "dataset:sst2", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-15T21:58:55Z
2023-11-18T19:18:01+00:00
53
0
--- base_model: distilbert-base-uncased datasets: - sst2 license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert_base_SST2 results: - task: type: text-classification name: Text Classification dataset: name: sst2 type: sst2 config: default split: validation args: default metrics: - type: accuracy value: 0.8990825688073395 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_base_SST2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 0.4181 - Accuracy: 0.8991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.4378 | 0.06 | 500 | 0.3452 | 0.8601 | | 0.343 | 0.12 | 1000 | 0.3483 | 0.8578 | | 0.3342 | 0.18 | 1500 | 0.3373 | 0.8704 | | 0.308 | 0.24 | 2000 | 0.4102 | 0.8819 | | 0.2932 | 0.3 | 2500 | 0.3546 | 0.8830 | | 0.3116 | 0.36 | 3000 | 0.3609 | 0.8716 | | 0.2805 | 0.42 | 3500 | 0.3800 | 0.8945 | | 0.2655 | 0.48 | 4000 | 0.4131 | 0.8842 | | 0.2504 | 0.53 | 4500 | 0.4299 | 0.8830 | | 0.2543 | 0.59 | 5000 | 0.5196 | 0.8727 | | 0.2409 | 0.65 | 5500 | 0.4387 | 0.8807 | | 0.2414 | 0.71 | 6000 | 0.4121 | 0.8922 | | 0.2319 | 0.77 | 6500 | 0.3772 | 0.8830 | | 0.247 | 0.83 | 7000 | 0.4179 | 0.8876 | | 0.2233 | 0.89 | 7500 | 0.3544 | 0.8945 | | 0.2202 | 0.95 | 8000 | 0.4160 | 0.8865 | | 0.2242 | 1.01 | 8500 | 0.5125 | 0.8784 | | 0.1296 | 1.07 | 9000 | 0.4212 | 0.8842 | | 0.1429 | 1.13 | 9500 | 0.4675 | 0.8968 | | 0.1466 | 1.19 | 10000 | 0.5034 | 0.8922 | | 0.1626 | 1.25 | 10500 | 0.4431 | 0.8945 | | 0.1459 | 1.31 | 11000 | 0.5001 | 0.8922 | | 0.1489 | 1.37 | 11500 | 0.4739 | 0.8968 | | 0.1591 | 1.43 | 12000 | 0.3852 | 0.8945 | | 0.1211 | 1.48 | 12500 | 0.4648 | 0.8945 | | 0.1275 | 1.54 | 13000 | 0.5281 | 0.8956 | | 0.1302 | 1.6 | 13500 | 0.4411 | 0.8933 | | 0.1313 | 1.66 | 14000 | 0.4914 | 0.8979 | | 0.134 | 1.72 | 14500 | 0.3923 | 0.8979 | | 0.1355 | 1.78 | 15000 | 0.4164 | 0.8956 | | 0.1263 | 1.84 | 15500 | 0.4293 | 0.8945 | | 0.1326 | 1.9 | 16000 | 0.4185 | 0.8933 | | 0.1315 | 1.96 | 16500 | 0.4181 | 0.8991 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_base_SST2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 0.4181 - Accuracy: 0.8991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.4378 | 0.06 | 500 | 0.3452 | 0.8601 | | 0.343 | 0.12 | 1000 | 0.3483 | 0.8578 | | 0.3342 | 0.18 | 1500 | 0.3373 | 0.8704 | | 0.308 | 0.24 | 2000 | 0.4102 | 0.8819 | | 0.2932 | 0.3 | 2500 | 0.3546 | 0.8830 | | 0.3116 | 0.36 | 3000 | 0.3609 | 0.8716 | | 0.2805 | 0.42 | 3500 | 0.3800 | 0.8945 | | 0.2655 | 0.48 | 4000 | 0.4131 | 0.8842 | | 0.2504 | 0.53 | 4500 | 0.4299 | 0.8830 | | 0.2543 | 0.59 | 5000 | 0.5196 | 0.8727 | | 0.2409 | 0.65 | 5500 | 0.4387 | 0.8807 | | 0.2414 | 0.71 | 6000 | 0.4121 | 0.8922 | | 0.2319 | 0.77 | 6500 | 0.3772 | 0.8830 | | 0.247 | 0.83 | 7000 | 0.4179 | 0.8876 | | 0.2233 | 0.89 | 7500 | 0.3544 | 0.8945 | | 0.2202 | 0.95 | 8000 | 0.4160 | 0.8865 | | 0.2242 | 1.01 | 8500 | 0.5125 | 0.8784 | | 0.1296 | 1.07 | 9000 | 0.4212 | 0.8842 | | 0.1429 | 1.13 | 9500 | 0.4675 | 0.8968 | | 0.1466 | 1.19 | 10000 | 0.5034 | 0.8922 | | 0.1626 | 1.25 | 10500 | 0.4431 | 0.8945 | | 0.1459 | 1.31 | 11000 | 0.5001 | 0.8922 | | 0.1489 | 1.37 | 11500 | 0.4739 | 0.8968 | | 0.1591 | 1.43 | 12000 | 0.3852 | 0.8945 | | 0.1211 | 1.48 | 12500 | 0.4648 | 0.8945 | | 0.1275 | 1.54 | 13000 | 0.5281 | 0.8956 | | 0.1302 | 1.6 | 13500 | 0.4411 | 0.8933 | | 0.1313 | 1.66 | 14000 | 0.4914 | 0.8979 | | 0.134 | 1.72 | 14500 | 0.3923 | 0.8979 | | 0.1355 | 1.78 | 15000 | 0.4164 | 0.8956 | | 0.1263 | 1.84 | 15500 | 0.4293 | 0.8945 | | 0.1326 | 1.9 | 16000 | 0.4185 | 0.8933 | | 0.1315 | 1.96 | 16500 | 0.4181 | 0.8991 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
{"base_model": "distilbert-base-uncased", "datasets": ["sst2"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_base_SST2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "sst2", "type": "sst2", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8990825688073395, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,048
MatMulMan/araelectra-base-discriminator-89540-pretrain
MatMulMan
question-answering
[ "transformers", "safetensors", "electra", "question-answering", "endpoints_compatible", "region:us" ]
2024-09-23T15:08:58Z
2024-09-23T15:09:24+00:00
6
0
--- library_name: transformers tags: [] --- # araelectra-base-discriminator-89540-pretrain # Quran Passage Retrieval Model This is a **fine-tuned model** on Arabic passage retrieval datasets, used for **Quran QA 2023 Task A**. ## Model Description This model was fine-tuned to perform text classification on an Arabic dataset. The task involves identifying relevant passages from the Quran in response to specific questions, focusing on retrieval quality. - **Base model**: Pretrained transformer-based model (e.g., AraBERT, CAMeLBERT, AraELECTRA). - **Task**: Passage retrieval (text classification). - **Dataset**: Fine-tuned on the Quran QA 2023 dataset. ## Intended Use - **Language**: Arabic - **Task**: Passage retrieval for Quran QA - **Usage**: Use this model for ranking and retrieving relevant passages from a corpus of Arabic text, primarily for question answering tasks. ## Evaluation Results - reported in the paper ## How to Use ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("mohammed-elkomy/quran-qa") tokenizer = AutoTokenizer.from_pretrained("mohammed-elkomy/quran-qa") inputs = tokenizer("Your input text", return_tensors="pt") outputs = model(**inputs) ## Citation If you use this model, please cite the following: ``` @inproceedings{elkomy2023quran, title={TCE at Qur’an QA 2023 Shared Task: Low Resource Enhanced Transformer-based Ensemble Approach for Qur’anic QA}, author={Mohammed ElKomy and Amany Sarhan}, year={2023}, url={https://github.com/mohammed-elkomy/quran-qa/}, } ``` ``` @inproceedings{elkomy2022quran, title={TCE at Qur'an QA 2022: Arabic Language Question Answering Over Holy Qur'an Using a Post-Processed Ensemble of BERT-based Models}, author={Mohammed ElKomy and Amany Sarhan}, year={2022}, url={https://github.com/mohammed-elkomy/quran-qa/}, } ```
null
Non_BioNLP
# araelectra-base-discriminator-89540-pretrain # Quran Passage Retrieval Model This is a **fine-tuned model** on Arabic passage retrieval datasets, used for **Quran QA 2023 Task A**. ## Model Description This model was fine-tuned to perform text classification on an Arabic dataset. The task involves identifying relevant passages from the Quran in response to specific questions, focusing on retrieval quality. - **Base model**: Pretrained transformer-based model (e.g., AraBERT, CAMeLBERT, AraELECTRA). - **Task**: Passage retrieval (text classification). - **Dataset**: Fine-tuned on the Quran QA 2023 dataset. ## Intended Use - **Language**: Arabic - **Task**: Passage retrieval for Quran QA - **Usage**: Use this model for ranking and retrieving relevant passages from a corpus of Arabic text, primarily for question answering tasks. ## Evaluation Results - reported in the paper ## How to Use ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("mohammed-elkomy/quran-qa") tokenizer = AutoTokenizer.from_pretrained("mohammed-elkomy/quran-qa") inputs = tokenizer("Your input text", return_tensors="pt") outputs = model(**inputs) ## Citation If you use this model, please cite the following: ``` @inproceedings{elkomy2023quran, title={TCE at Qur’an QA 2023 Shared Task: Low Resource Enhanced Transformer-based Ensemble Approach for Qur’anic QA}, author={Mohammed ElKomy and Amany Sarhan}, year={2023}, url={https://github.com/mohammed-elkomy/quran-qa/}, } ``` ``` @inproceedings{elkomy2022quran, title={TCE at Qur'an QA 2022: Arabic Language Question Answering Over Holy Qur'an Using a Post-Processed Ensemble of BERT-based Models}, author={Mohammed ElKomy and Amany Sarhan}, year={2022}, url={https://github.com/mohammed-elkomy/quran-qa/}, } ```
{"library_name": "transformers", "tags": []}
task
[ "TEXT_CLASSIFICATION", "QUESTION_ANSWERING" ]
43,049
NghiemAbe/mbart_VietnameseToEnglish
NghiemAbe
text2text-generation
[ "transformers", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "vi", "en", "dataset:NghiemAbe/translation-vietnamese-english", "base_model:facebook/mbart-large-50-many-to-many-mmt", "base_model:finetune:facebook/mbart-large-50-many-to-many-mmt", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-09T12:55:32Z
2023-11-09T16:36:38+00:00
6
0
--- base_model: facebook/mbart-large-50-many-to-many-mmt datasets: - NghiemAbe/translation-vietnamese-english language: - vi - en metrics: - bleu pipeline_tag: text2text-generation tags: - generated_from_trainer model-index: - name: mbart_VietnameseToEnglish results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart_VietnameseToEnglish_30k This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3869 - Bleu: 27.9659 - Gen Len: 33.42 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | 1.1033 | 1.0 | 3750 | 1.3869 | 27.9659 | 33.42 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart_VietnameseToEnglish_30k This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3869 - Bleu: 27.9659 - Gen Len: 33.42 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | 1.1033 | 1.0 | 3750 | 1.3869 | 27.9659 | 33.42 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
{"base_model": "facebook/mbart-large-50-many-to-many-mmt", "datasets": ["NghiemAbe/translation-vietnamese-english"], "language": ["vi", "en"], "metrics": ["bleu"], "pipeline_tag": "text2text-generation", "tags": ["generated_from_trainer"], "model-index": [{"name": "mbart_VietnameseToEnglish", "results": []}]}
task
[ "TRANSLATION" ]
43,050
asi/igpt-fr-cased-base
asi
text-to-image
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "tf", "text-to-image", "fr", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-07-26T20:57:33Z
2022-07-27T17:12:36+00:00
371
5
--- language: - fr license: apache-2.0 tags: - tf - pytorch - gpt2 - text-to-image thumbnail: https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png --- <img src="https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/igpt-logo.png" width="400"> ## Model description **iGPT-fr** 🇫🇷 is a GPT model for French pre-trained incremental language model developped by the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We adapted [GPT-fr 🇫🇷](https://huggingface.co/asi/gpt-fr-cased-base) model to generate images conditionned by text inputs. ## Intended uses & limitations The model can be leveraged for image generation tasks. The model is currently under a developpment phase. #### How to use The model might be used through the 🤗 `Transformers` librairie. You will also need to install the `Taming Transformers` library for high-resolution image synthesis: ```bash pip install git+https://github.com/CompVis/taming-transformers.git ``` ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel from huggingface_hub import hf_hub_download from omegaconf import OmegaConf from taming.models import vqgan import torch from PIL import Image import numpy as np # Load VQGAN model vqgan_ckpt = hf_hub_download(repo_id="boris/vqgan_f16_16384", filename="model.ckpt", force_download=False) vqgan_config = hf_hub_download(repo_id="boris/vqgan_f16_16384", filename="config.yaml", force_download=False) config = OmegaConf.load(vqgan_config) vqgan_model = vqgan.VQModel(**config.model.params) vqgan_model.eval().requires_grad_(False) vqgan_model.init_from_ckpt(vqgan_ckpt) # Load pretrained model model = GPT2LMHeadModel.from_pretrained("asi/igpt-fr-cased-base") model.eval() tokenizer = GPT2Tokenizer.from_pretrained("asi/igpt-fr-cased-base") # Generate a sample of text input_sentence = "Une carte de l'europe" input_ids = tokenizer.encode(input_sentence, return_tensors='pt') input_ids = torch.cat((input_ids, torch.tensor([[50000]])), 1) # Add image generation token greedy_output = model.generate( input_ids.to(device), max_length=256+input_ids.shape[1], do_sample=True, top_p=0.92, top_k=0) def custom_to_pil(x): x = x.detach().cpu() x = torch.clamp(x, -1., 1.) x = (x + 1.)/2. x = x.permute(1,2,0).numpy() x = (255*x).astype(np.uint8) x = Image.fromarray(x) if not x.mode == "RGB": x = x.convert("RGB") return x z_idx = greedy_output[0, input_ids.shape[1]:] - 50001 z_quant = vqgan_model.quantize.get_codebook_entry(z_idx, shape=(1, 16, 16, 256)) x_rec = vqgan_model.decode(z_quant).to('cpu')[0] display(custom_to_pil(x_rec)) ``` You may also filter results based on CLIP: ```python from tqdm import tqdm def hallucinate(prompt, num_images=64): input_ids = tokenizer.encode(prompt, return_tensors='pt') input_ids = torch.cat((input_ids, torch.tensor([[50000]])), 1).to(device) # Add image generation token all_images = [] for i in tqdm(range(num_images)): greedy_output = model.generate( input_ids.to(device), max_length=256+input_ids.shape[1], do_sample=True, top_p=0.92, top_k=0) z_idx = greedy_output[0, input_ids.shape[1]:] - 50001 z_quant = vqgan_model.quantize.get_codebook_entry(z_idx, shape=(1, 16, 16, 256)) x_rec = vqgan_model.decode(z_quant).to('cpu')[0] all_images.append(custom_to_pil(x_rec)) return all_images input_sentence = "Une carte de l'europe" all_images = hallucinate(input_sentence) from transformers import pipeline opus_model = "Helsinki-NLP/opus-mt-fr-en" opus_translator = pipeline("translation", model=opus_model) opus_translator(input_sentence) from transformers import CLIPProcessor, CLIPModel clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") def clip_top_k(prompt, images, k=8): prompt_fr = opus_translator(input_sentence)[0]['translation_text'] inputs = clip_processor(text=prompt_fr, images=images, return_tensors="pt", padding=True) outputs = clip_model(**inputs) logits = outputs.logits_per_text # this is the image-text similarity score scores = np.array(logits[0].detach()).argsort()[-k:][::-1] return [images[score] for score in scores] filtered_images = clip_top_k(input_sentence, all_images) for fi in filtered_images: display(fi) ``` ## Training data We created a dedicated corpus to train our generative model. The training corpus consists in text-image pairs. We aggregated portions from existing corpora: [Laion-5B](https://laion.ai/blog/laion-5b/) and [WIT](https://github.com/google-research-datasets/wit). The final dataset includes 10,807,534 samples. ## Training procedure We pre-trained the model on the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/) supercomputer. We perform the training within a total of 140 hours of computation on Tesla V-100 hardware (TDP of 300W). The training was distributed on 8 compute nodes of 8 GPUs. We used data parallelization in order to divide each micro-batch on the computing units. We estimated the total emissions at 1161.22 kgCO2eq, using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al., (2019)](lacoste-2019).
null
Non_BioNLP
<img src="https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/igpt-logo.png" width="400"> ## Model description **iGPT-fr** 🇫🇷 is a GPT model for French pre-trained incremental language model developped by the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We adapted [GPT-fr 🇫🇷](https://huggingface.co/asi/gpt-fr-cased-base) model to generate images conditionned by text inputs. ## Intended uses & limitations The model can be leveraged for image generation tasks. The model is currently under a developpment phase. #### How to use The model might be used through the 🤗 `Transformers` librairie. You will also need to install the `Taming Transformers` library for high-resolution image synthesis: ```bash pip install git+https://github.com/CompVis/taming-transformers.git ``` ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel from huggingface_hub import hf_hub_download from omegaconf import OmegaConf from taming.models import vqgan import torch from PIL import Image import numpy as np # Load VQGAN model vqgan_ckpt = hf_hub_download(repo_id="boris/vqgan_f16_16384", filename="model.ckpt", force_download=False) vqgan_config = hf_hub_download(repo_id="boris/vqgan_f16_16384", filename="config.yaml", force_download=False) config = OmegaConf.load(vqgan_config) vqgan_model = vqgan.VQModel(**config.model.params) vqgan_model.eval().requires_grad_(False) vqgan_model.init_from_ckpt(vqgan_ckpt) # Load pretrained model model = GPT2LMHeadModel.from_pretrained("asi/igpt-fr-cased-base") model.eval() tokenizer = GPT2Tokenizer.from_pretrained("asi/igpt-fr-cased-base") # Generate a sample of text input_sentence = "Une carte de l'europe" input_ids = tokenizer.encode(input_sentence, return_tensors='pt') input_ids = torch.cat((input_ids, torch.tensor([[50000]])), 1) # Add image generation token greedy_output = model.generate( input_ids.to(device), max_length=256+input_ids.shape[1], do_sample=True, top_p=0.92, top_k=0) def custom_to_pil(x): x = x.detach().cpu() x = torch.clamp(x, -1., 1.) x = (x + 1.)/2. x = x.permute(1,2,0).numpy() x = (255*x).astype(np.uint8) x = Image.fromarray(x) if not x.mode == "RGB": x = x.convert("RGB") return x z_idx = greedy_output[0, input_ids.shape[1]:] - 50001 z_quant = vqgan_model.quantize.get_codebook_entry(z_idx, shape=(1, 16, 16, 256)) x_rec = vqgan_model.decode(z_quant).to('cpu')[0] display(custom_to_pil(x_rec)) ``` You may also filter results based on CLIP: ```python from tqdm import tqdm def hallucinate(prompt, num_images=64): input_ids = tokenizer.encode(prompt, return_tensors='pt') input_ids = torch.cat((input_ids, torch.tensor([[50000]])), 1).to(device) # Add image generation token all_images = [] for i in tqdm(range(num_images)): greedy_output = model.generate( input_ids.to(device), max_length=256+input_ids.shape[1], do_sample=True, top_p=0.92, top_k=0) z_idx = greedy_output[0, input_ids.shape[1]:] - 50001 z_quant = vqgan_model.quantize.get_codebook_entry(z_idx, shape=(1, 16, 16, 256)) x_rec = vqgan_model.decode(z_quant).to('cpu')[0] all_images.append(custom_to_pil(x_rec)) return all_images input_sentence = "Une carte de l'europe" all_images = hallucinate(input_sentence) from transformers import pipeline opus_model = "Helsinki-NLP/opus-mt-fr-en" opus_translator = pipeline("translation", model=opus_model) opus_translator(input_sentence) from transformers import CLIPProcessor, CLIPModel clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") def clip_top_k(prompt, images, k=8): prompt_fr = opus_translator(input_sentence)[0]['translation_text'] inputs = clip_processor(text=prompt_fr, images=images, return_tensors="pt", padding=True) outputs = clip_model(**inputs) logits = outputs.logits_per_text # this is the image-text similarity score scores = np.array(logits[0].detach()).argsort()[-k:][::-1] return [images[score] for score in scores] filtered_images = clip_top_k(input_sentence, all_images) for fi in filtered_images: display(fi) ``` ## Training data We created a dedicated corpus to train our generative model. The training corpus consists in text-image pairs. We aggregated portions from existing corpora: [Laion-5B](https://laion.ai/blog/laion-5b/) and [WIT](https://github.com/google-research-datasets/wit). The final dataset includes 10,807,534 samples. ## Training procedure We pre-trained the model on the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/) supercomputer. We perform the training within a total of 140 hours of computation on Tesla V-100 hardware (TDP of 300W). The training was distributed on 8 compute nodes of 8 GPUs. We used data parallelization in order to divide each micro-batch on the computing units. We estimated the total emissions at 1161.22 kgCO2eq, using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al., (2019)](lacoste-2019).
{"language": ["fr"], "license": "apache-2.0", "tags": ["tf", "pytorch", "gpt2", "text-to-image"], "thumbnail": "https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png"}
task
[ "TRANSLATION" ]
43,051
marcatanante1/distilbert-base-uncased-finetuned-cola
marcatanante1
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-03-03T16:25:06Z
2023-03-05T16:27:55+00:00
16
0
--- datasets: - glue license: apache-2.0 metrics: - matthews_correlation tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - type: matthews_correlation value: 0.3549354045919057 name: Matthews Correlation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8744 - Matthews Correlation: 0.3549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.705152204400204e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 39 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 107 | 0.5696 | 0.1474 | | No log | 2.0 | 214 | 0.5700 | 0.2981 | | No log | 3.0 | 321 | 0.6856 | 0.3437 | | No log | 4.0 | 428 | 0.8744 | 0.3549 | | 0.3495 | 5.0 | 535 | 0.9641 | 0.3295 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8744 - Matthews Correlation: 0.3549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.705152204400204e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 39 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 107 | 0.5696 | 0.1474 | | No log | 2.0 | 214 | 0.5700 | 0.2981 | | No log | 3.0 | 321 | 0.6856 | 0.3437 | | No log | 4.0 | 428 | 0.8744 | 0.3549 | | 0.3495 | 5.0 | 535 | 0.9641 | 0.3295 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.3549354045919057, "name": "Matthews Correlation"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,052
fine-tuned/scidocs-c-64-24-gpt-4o-2024-05-133652
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Paper", "Stationery", "Office", "Crafts", "Printing", "custom_code", "en", "dataset:fine-tuned/scidocs-c-64-24-gpt-4o-2024-05-133652", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-16T12:52:14Z
2024-05-16T12:52:27+00:00
9
0
--- datasets: - fine-tuned/scidocs-c-64-24-gpt-4o-2024-05-133652 - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Paper - Stationery - Office - Crafts - Printing --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: general search for paper products ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/scidocs-c-64-24-gpt-4o-2024-05-133652', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
null
Non_BioNLP
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: general search for paper products ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/scidocs-c-64-24-gpt-4o-2024-05-133652', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
{"datasets": ["fine-tuned/scidocs-c-64-24-gpt-4o-2024-05-133652", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Paper", "Stationery", "Office", "Crafts", "Printing"]}
task
[ "TEXT_CLASSIFICATION" ]
43,053
jkhan447/sentiment-model-sample-27go-emotion
jkhan447
text-classification
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:go_emotions", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-28T06:05:25Z
2022-04-01T08:13:56+00:00
119
1
--- datasets: - go_emotions license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: sentiment-model-sample-27go-emotion results: - task: type: text-classification name: Text Classification dataset: name: go_emotions type: go_emotions args: simplified metrics: - type: accuracy value: 0.5888888888888889 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-model-sample-27go-emotion This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the go_emotions dataset. It achieves the following results on the evaluation set: - Loss: 4.1765 - Accuracy: 0.5889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-model-sample-27go-emotion This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the go_emotions dataset. It achieves the following results on the evaluation set: - Loss: 4.1765 - Accuracy: 0.5889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
{"datasets": ["go_emotions"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "sentiment-model-sample-27go-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "go_emotions", "type": "go_emotions", "args": "simplified"}, "metrics": [{"type": "accuracy", "value": 0.5888888888888889, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,054
poltextlab/xlm-roberta-large-french-legislative-cap-v3
poltextlab
text-classification
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-09-07T16:08:03Z
2025-02-26T16:05:54+00:00
0
0
--- language: - fr metrics: - accuracy - f1-score tags: - text-classification - pytorch extra_gated_prompt: 'Our models are intended for academic use only. If you are not affiliated with an academic institution, please provide a rationale for using our models. Please allow us a few business days to manually review subscriptions. If you use our models for your work or research, please cite this paper: Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434' extra_gated_fields: Name: text Country: country Institution: text Institution Email: text Please specify your academic use case: text --- # xlm-roberta-large-french-legislative-cap-v3 ## Model description An `xlm-roberta-large` model fine-tuned on french training data containing legislative documents (bills, laws, motions, legislative decrees, hearings, resolutions) labeled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/). We follow the master codebook of the Comparative Agendas Project, and all of our models use the same major topic codes. ## How to use the model ```python from transformers import AutoTokenizer, pipeline tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large") pipe = pipeline( model="poltextlab/xlm-roberta-large-french-legislative-cap-v3", task="text-classification", tokenizer=tokenizer, use_fast=False, token="<your_hf_read_only_token>" ) text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities." pipe(text) ``` The translation table from the model results to CAP codes is the following: ```python CAP_NUM_DICT = { 0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 12, 11: 13, 12: 14, 13: 15, 14: 16, 15: 17, 16: 18, 17: 19, 18: 20, 19: 21, 20: 23, 21: 999, } ``` We have included a 999 label because our models are fine-tuned on training data containing the label 'None' in addition to the 21 CAP major policy topic codes, indicating that the given text contains no relevant policy content. We use the label 999 for these cases. ### Gated access Due to the gated access, you must pass the `token` parameter when loading the model. In earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead. ## Model performance The model was evaluated on a test set of 1211 examples.<br> Model accuracy is **0.85**. | label | precision | recall | f1-score | support | |:-------------|------------:|---------:|-----------:|----------:| | 0 | 0.87 | 0.82 | 0.84 | 82 | | 1 | 0.67 | 0.62 | 0.64 | 26 | | 2 | 0.85 | 0.92 | 0.89 | 38 | | 3 | 0.84 | 0.95 | 0.89 | 40 | | 4 | 0.77 | 0.84 | 0.8 | 44 | | 5 | 0.74 | 0.95 | 0.83 | 21 | | 6 | 0.47 | 0.53 | 0.5 | 17 | | 7 | 0.74 | 1 | 0.85 | 17 | | 8 | 0.81 | 0.94 | 0.87 | 31 | | 9 | 0.9 | 0.94 | 0.92 | 78 | | 10 | 0.77 | 0.87 | 0.82 | 100 | | 11 | 0.8 | 0.82 | 0.81 | 34 | | 12 | 0.82 | 0.86 | 0.84 | 37 | | 13 | 0.85 | 0.82 | 0.84 | 85 | | 14 | 0.93 | 0.83 | 0.88 | 47 | | 15 | 0.86 | 0.82 | 0.84 | 39 | | 16 | 0.97 | 0.77 | 0.86 | 47 | | 17 | 0.85 | 0.81 | 0.83 | 141 | | 18 | 0.92 | 0.89 | 0.9 | 248 | | 19 | 0 | 0 | 0 | 4 | | 20 | 0.91 | 0.83 | 0.87 | 35 | | macro avg | 0.78 | 0.8 | 0.79 | 1211 | | weighted avg | 0.85 | 0.85 | 0.85 | 1211 | ### Fine-tuning procedure This model was fine-tuned with the following key hyperparameters: - **Number of Training Epochs**: 10 - **Batch Size**: 8 - **Learning Rate**: 5e-06 - **Early Stopping**: enabled with a patience of 2 epochs ## Inference platform This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research. ## Cooperation Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com). ## Reference Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434 ## Debugging and issues This architecture uses the `sentencepiece` tokenizer. In order to use the model before `transformers==4.27` you need to install it manually. If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
null
Non_BioNLP
# xlm-roberta-large-french-legislative-cap-v3 ## Model description An `xlm-roberta-large` model fine-tuned on french training data containing legislative documents (bills, laws, motions, legislative decrees, hearings, resolutions) labeled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/). We follow the master codebook of the Comparative Agendas Project, and all of our models use the same major topic codes. ## How to use the model ```python from transformers import AutoTokenizer, pipeline tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large") pipe = pipeline( model="poltextlab/xlm-roberta-large-french-legislative-cap-v3", task="text-classification", tokenizer=tokenizer, use_fast=False, token="<your_hf_read_only_token>" ) text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities." pipe(text) ``` The translation table from the model results to CAP codes is the following: ```python CAP_NUM_DICT = { 0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 12, 11: 13, 12: 14, 13: 15, 14: 16, 15: 17, 16: 18, 17: 19, 18: 20, 19: 21, 20: 23, 21: 999, } ``` We have included a 999 label because our models are fine-tuned on training data containing the label 'None' in addition to the 21 CAP major policy topic codes, indicating that the given text contains no relevant policy content. We use the label 999 for these cases. ### Gated access Due to the gated access, you must pass the `token` parameter when loading the model. In earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead. ## Model performance The model was evaluated on a test set of 1211 examples.<br> Model accuracy is **0.85**. | label | precision | recall | f1-score | support | |:-------------|------------:|---------:|-----------:|----------:| | 0 | 0.87 | 0.82 | 0.84 | 82 | | 1 | 0.67 | 0.62 | 0.64 | 26 | | 2 | 0.85 | 0.92 | 0.89 | 38 | | 3 | 0.84 | 0.95 | 0.89 | 40 | | 4 | 0.77 | 0.84 | 0.8 | 44 | | 5 | 0.74 | 0.95 | 0.83 | 21 | | 6 | 0.47 | 0.53 | 0.5 | 17 | | 7 | 0.74 | 1 | 0.85 | 17 | | 8 | 0.81 | 0.94 | 0.87 | 31 | | 9 | 0.9 | 0.94 | 0.92 | 78 | | 10 | 0.77 | 0.87 | 0.82 | 100 | | 11 | 0.8 | 0.82 | 0.81 | 34 | | 12 | 0.82 | 0.86 | 0.84 | 37 | | 13 | 0.85 | 0.82 | 0.84 | 85 | | 14 | 0.93 | 0.83 | 0.88 | 47 | | 15 | 0.86 | 0.82 | 0.84 | 39 | | 16 | 0.97 | 0.77 | 0.86 | 47 | | 17 | 0.85 | 0.81 | 0.83 | 141 | | 18 | 0.92 | 0.89 | 0.9 | 248 | | 19 | 0 | 0 | 0 | 4 | | 20 | 0.91 | 0.83 | 0.87 | 35 | | macro avg | 0.78 | 0.8 | 0.79 | 1211 | | weighted avg | 0.85 | 0.85 | 0.85 | 1211 | ### Fine-tuning procedure This model was fine-tuned with the following key hyperparameters: - **Number of Training Epochs**: 10 - **Batch Size**: 8 - **Learning Rate**: 5e-06 - **Early Stopping**: enabled with a patience of 2 epochs ## Inference platform This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research. ## Cooperation Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com). ## Reference Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434 ## Debugging and issues This architecture uses the `sentencepiece` tokenizer. In order to use the model before `transformers==4.27` you need to install it manually. If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
{"language": ["fr"], "metrics": ["accuracy", "f1-score"], "tags": ["text-classification", "pytorch"], "extra_gated_prompt": "Our models are intended for academic use only. If you are not affiliated with an academic institution, please provide a rationale for using our models. Please allow us a few business days to manually review subscriptions.\nIf you use our models for your work or research, please cite this paper: Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434", "extra_gated_fields": {"Name": "text", "Country": "country", "Institution": "text", "Institution Email": "text", "Please specify your academic use case": "text"}}
task
[ "TRANSLATION" ]
43,055
kuljeet98/productOne-bert
kuljeet98
sentence-similarity
[ "sentence-transformers", "pytorch", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:8408", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-07-18T03:28:23Z
2024-07-18T03:31:22+00:00
9
0
--- base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 datasets: [] language: [] library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:8408 - loss:CosineSimilarityLoss widget: - source_sentence: president sentences: - assistante de banque priv e banco santander rio - worldwide executive vice president corindus a siemens healthineers company - soporte t cnico superior - source_sentence: chief business strategy officer sentences: - sub jefe - analista senior recursos humanos sales staff and logistics - subgerente sostenibilidad y hseq - source_sentence: gerente de planificación sentences: - analista de soporte web - director - gestion calidad - source_sentence: global human resources leader sentences: - director manufacturing engineering - quality specialist - asesoramiento para comprar inmuebles en uruguay paraguay espa a y usa - source_sentence: commercial manager sentences: - jefe de turno planta envasado de vinos - gerente de operaciones - vice president of finance americas --- # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'commercial manager', 'gerente de operaciones', 'vice president of finance americas', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 8,408 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 3 tokens</li><li>mean: 6.2 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 7.75 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.06</li><li>max: 1.0</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:----------------------------------------|:------------------------------------------------------------------------------|:-----------------| | <code>strategic planning manager</code> | <code>senior brand manager uap southern cone & personal care cdm chile</code> | <code>0.0</code> | | <code>director de planificacion</code> | <code>key account manager tiendas paris</code> | <code>0.0</code> | | <code>gerente general</code> | <code>analista de cobranza</code> | <code>0.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 50 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 50 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:-------:|:-----:|:-------------:| | 0.9506 | 500 | 0.0434 | | 1.9011 | 1000 | 0.0135 | | 2.8517 | 1500 | 0.0072 | | 3.8023 | 2000 | 0.0056 | | 4.7529 | 2500 | 0.0044 | | 5.7034 | 3000 | 0.0038 | | 6.6540 | 3500 | 0.0034 | | 7.6046 | 4000 | 0.0032 | | 8.5551 | 4500 | 0.0029 | | 9.5057 | 5000 | 0.0028 | | 10.4563 | 5500 | 0.0026 | | 11.4068 | 6000 | 0.0025 | | 12.3574 | 6500 | 0.0026 | | 13.3080 | 7000 | 0.0023 | | 14.2586 | 7500 | 0.0023 | | 15.2091 | 8000 | 0.0023 | | 16.1597 | 8500 | 0.0022 | | 17.1103 | 9000 | 0.0021 | | 18.0608 | 9500 | 0.0019 | | 19.0114 | 10000 | 0.0021 | | 19.9620 | 10500 | 0.0019 | | 20.9125 | 11000 | 0.0019 | | 21.8631 | 11500 | 0.0016 | | 22.8137 | 12000 | 0.0018 | | 23.7643 | 12500 | 0.0018 | | 24.7148 | 13000 | 0.0018 | | 25.6654 | 13500 | 0.0016 | | 26.6160 | 14000 | 0.0017 | | 27.5665 | 14500 | 0.0016 | | 28.5171 | 15000 | 0.0016 | | 29.4677 | 15500 | 0.0016 | | 30.4183 | 16000 | 0.0016 | | 31.3688 | 16500 | 0.0019 | | 32.3194 | 17000 | 0.0018 | | 33.2700 | 17500 | 0.0017 | | 34.2205 | 18000 | 0.0016 | | 35.1711 | 18500 | 0.0016 | | 36.1217 | 19000 | 0.0016 | | 37.0722 | 19500 | 0.0015 | | 38.0228 | 20000 | 0.0012 | | 38.9734 | 20500 | 0.0015 | | 39.9240 | 21000 | 0.0015 | | 40.8745 | 21500 | 0.0013 | | 41.8251 | 22000 | 0.0014 | | 42.7757 | 22500 | 0.0014 | | 43.7262 | 23000 | 0.0014 | | 44.6768 | 23500 | 0.0013 | | 45.6274 | 24000 | 0.0012 | | 46.5779 | 24500 | 0.0014 | | 47.5285 | 25000 | 0.0012 | | 48.4791 | 25500 | 0.0013 | | 49.4297 | 26000 | 0.0013 | ### Framework Versions - Python: 3.8.5 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.1+cu121 - Accelerate: 0.32.1 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'commercial manager', 'gerente de operaciones', 'vice president of finance americas', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 8,408 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 3 tokens</li><li>mean: 6.2 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 7.75 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.06</li><li>max: 1.0</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:----------------------------------------|:------------------------------------------------------------------------------|:-----------------| | <code>strategic planning manager</code> | <code>senior brand manager uap southern cone & personal care cdm chile</code> | <code>0.0</code> | | <code>director de planificacion</code> | <code>key account manager tiendas paris</code> | <code>0.0</code> | | <code>gerente general</code> | <code>analista de cobranza</code> | <code>0.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 50 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 50 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:-------:|:-----:|:-------------:| | 0.9506 | 500 | 0.0434 | | 1.9011 | 1000 | 0.0135 | | 2.8517 | 1500 | 0.0072 | | 3.8023 | 2000 | 0.0056 | | 4.7529 | 2500 | 0.0044 | | 5.7034 | 3000 | 0.0038 | | 6.6540 | 3500 | 0.0034 | | 7.6046 | 4000 | 0.0032 | | 8.5551 | 4500 | 0.0029 | | 9.5057 | 5000 | 0.0028 | | 10.4563 | 5500 | 0.0026 | | 11.4068 | 6000 | 0.0025 | | 12.3574 | 6500 | 0.0026 | | 13.3080 | 7000 | 0.0023 | | 14.2586 | 7500 | 0.0023 | | 15.2091 | 8000 | 0.0023 | | 16.1597 | 8500 | 0.0022 | | 17.1103 | 9000 | 0.0021 | | 18.0608 | 9500 | 0.0019 | | 19.0114 | 10000 | 0.0021 | | 19.9620 | 10500 | 0.0019 | | 20.9125 | 11000 | 0.0019 | | 21.8631 | 11500 | 0.0016 | | 22.8137 | 12000 | 0.0018 | | 23.7643 | 12500 | 0.0018 | | 24.7148 | 13000 | 0.0018 | | 25.6654 | 13500 | 0.0016 | | 26.6160 | 14000 | 0.0017 | | 27.5665 | 14500 | 0.0016 | | 28.5171 | 15000 | 0.0016 | | 29.4677 | 15500 | 0.0016 | | 30.4183 | 16000 | 0.0016 | | 31.3688 | 16500 | 0.0019 | | 32.3194 | 17000 | 0.0018 | | 33.2700 | 17500 | 0.0017 | | 34.2205 | 18000 | 0.0016 | | 35.1711 | 18500 | 0.0016 | | 36.1217 | 19000 | 0.0016 | | 37.0722 | 19500 | 0.0015 | | 38.0228 | 20000 | 0.0012 | | 38.9734 | 20500 | 0.0015 | | 39.9240 | 21000 | 0.0015 | | 40.8745 | 21500 | 0.0013 | | 41.8251 | 22000 | 0.0014 | | 42.7757 | 22500 | 0.0014 | | 43.7262 | 23000 | 0.0014 | | 44.6768 | 23500 | 0.0013 | | 45.6274 | 24000 | 0.0012 | | 46.5779 | 24500 | 0.0014 | | 47.5285 | 25000 | 0.0012 | | 48.4791 | 25500 | 0.0013 | | 49.4297 | 26000 | 0.0013 | ### Framework Versions - Python: 3.8.5 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.1+cu121 - Accelerate: 0.32.1 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:8408", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "president", "sentences": ["assistante de banque priv e banco santander rio", "worldwide executive vice president corindus a siemens healthineers company", "soporte t cnico superior"]}, {"source_sentence": "chief business strategy officer", "sentences": ["sub jefe", "analista senior recursos humanos sales staff and logistics", "subgerente sostenibilidad y hseq"]}, {"source_sentence": "gerente de planificación", "sentences": ["analista de soporte web", "director", "gestion calidad"]}, {"source_sentence": "global human resources leader", "sentences": ["director manufacturing engineering", "quality specialist", "asesoramiento para comprar inmuebles en uruguay paraguay espa a y usa"]}, {"source_sentence": "commercial manager", "sentences": ["jefe de turno planta envasado de vinos", "gerente de operaciones", "vice president of finance americas"]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,056
Triangle104/Unsloth_Llama-3.2-3B-Instruct-Q5_K_M-GGUF
Triangle104
null
[ "transformers", "gguf", "llama-3", "llama", "meta", "facebook", "unsloth", "llama-cpp", "gguf-my-repo", "en", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:quantized:unsloth/Llama-3.2-3B-Instruct", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
2024-11-11T01:44:06Z
2025-02-17T12:05:40+00:00
17
0
--- base_model: unsloth/Llama-3.2-3B-Instruct language: - en library_name: transformers license: llama3.2 tags: - llama-3 - llama - meta - facebook - unsloth - transformers - llama-cpp - gguf-my-repo --- # Triangle104/Unsloth_Llama-3.2-3B-Instruct-Q5_K_M-GGUF This model was converted to GGUF format from [`unsloth/Llama-3.2-3B-Instruct`](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) for more details on the model. --- Model details: - Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. Model developer: Meta Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. Llama 3.2 family of models Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. Model Release Date: Sept 25, 2024 Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement). Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -c 2048 ```
null
Non_BioNLP
# Triangle104/Unsloth_Llama-3.2-3B-Instruct-Q5_K_M-GGUF This model was converted to GGUF format from [`unsloth/Llama-3.2-3B-Instruct`](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) for more details on the model. --- Model details: - Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. Model developer: Meta Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. Llama 3.2 family of models Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. Model Release Date: Sept 25, 2024 Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement). Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -c 2048 ```
{"base_model": "unsloth/Llama-3.2-3B-Instruct", "language": ["en"], "library_name": "transformers", "license": "llama3.2", "tags": ["llama-3", "llama", "meta", "facebook", "unsloth", "transformers", "llama-cpp", "gguf-my-repo"]}
task
[ "SUMMARIZATION" ]
43,057
jmbt22/marian-finetuned-opus-mt-en-tl
jmbt22
translation
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:tatoeba", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-03-15T05:50:58Z
2023-03-15T06:06:16+00:00
11
0
--- datasets: - tatoeba license: apache-2.0 metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: marian-finetuned-opus-mt-en-tl results: - task: type: text2text-generation name: Sequence-to-sequence Language Modeling dataset: name: tatoeba type: tatoeba config: en-tl split: train args: en-tl metrics: - type: bleu value: 35.9113771495936 name: Bleu --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-opus-mt-en-tl This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-tl](https://huggingface.co/Helsinki-NLP/opus-mt-en-tl) on the tatoeba dataset. It achieves the following results on the evaluation set: - Loss: 1.2611 - Bleu: 35.9114 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.0 - Datasets 2.10.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-opus-mt-en-tl This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-tl](https://huggingface.co/Helsinki-NLP/opus-mt-en-tl) on the tatoeba dataset. It achieves the following results on the evaluation set: - Loss: 1.2611 - Bleu: 35.9114 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.0 - Datasets 2.10.1 - Tokenizers 0.13.2
{"datasets": ["tatoeba"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-opus-mt-en-tl", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "tatoeba", "type": "tatoeba", "config": "en-tl", "split": "train", "args": "en-tl"}, "metrics": [{"type": "bleu", "value": 35.9113771495936, "name": "Bleu"}]}]}]}
task
[ "TRANSLATION" ]
43,058
long292/apply_back_translation_model_v5_2k
long292
text2text-generation
[ "transformers", "tensorboard", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "base_model:vinai/bartpho-syllable-base", "base_model:finetune:vinai/bartpho-syllable-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-05-18T22:07:42Z
2024-05-18T22:07:57+00:00
7
0
--- base_model: vinai/bartpho-syllable-base metrics: - bleu tags: - generated_from_trainer model-index: - name: apply_back_translation_model_v5_2k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # apply_back_translation_model_v5_2k This model is a fine-tuned version of [vinai/bartpho-syllable-base](https://huggingface.co/vinai/bartpho-syllable-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8175 - Bleu: 9.3599 - Gen Len: 17.9607 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.9856 | 1.0 | 11627 | 1.8175 | 9.3599 | 17.9607 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # apply_back_translation_model_v5_2k This model is a fine-tuned version of [vinai/bartpho-syllable-base](https://huggingface.co/vinai/bartpho-syllable-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8175 - Bleu: 9.3599 - Gen Len: 17.9607 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.9856 | 1.0 | 11627 | 1.8175 | 9.3599 | 17.9607 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"base_model": "vinai/bartpho-syllable-base", "metrics": ["bleu"], "tags": ["generated_from_trainer"], "model-index": [{"name": "apply_back_translation_model_v5_2k", "results": []}]}
task
[ "TRANSLATION" ]
43,060
Quake24/easyTermsSummerizer
Quake24
summarization
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "summarization", "generated_from_trainer", "en", "dataset:Quake24/paraphrasedPayPal", "dataset:Quake24/paraphrasedTwitter", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-04-22T11:05:20Z
2023-04-22T11:15:07+00:00
31
1
--- datasets: - Quake24/paraphrasedPayPal - Quake24/paraphrasedTwitter language: - en library_name: transformers license: apache-2.0 metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: easyTermsSummerizer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # easyTermsSummerizer This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8124 - Rouge1: 0.7533 - Rouge2: 0.6964 - Rougel: 0.6806 - Rougelsum: 0.6793 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 2 | 2.2083 | 0.7332 | 0.6595 | 0.6374 | 0.6376 | | No log | 2.0 | 4 | 1.9331 | 0.7776 | 0.7268 | 0.6991 | 0.7005 | | No log | 3.0 | 6 | 1.8124 | 0.7533 | 0.6964 | 0.6806 | 0.6793 | ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # easyTermsSummerizer This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8124 - Rouge1: 0.7533 - Rouge2: 0.6964 - Rougel: 0.6806 - Rougelsum: 0.6793 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 2 | 2.2083 | 0.7332 | 0.6595 | 0.6374 | 0.6376 | | No log | 2.0 | 4 | 1.9331 | 0.7776 | 0.7268 | 0.6991 | 0.7005 | | No log | 3.0 | 6 | 1.8124 | 0.7533 | 0.6964 | 0.6806 | 0.6793 | ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
{"datasets": ["Quake24/paraphrasedPayPal", "Quake24/paraphrasedTwitter"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "easyTermsSummerizer", "results": []}]}
task
[ "SUMMARIZATION" ]
43,061